Randomized Approximation Algorithms for Set Multicover Problems with Applications to Reverse Engineering of Protein and Gene Networks
Bhaskar DasGupta
Department of Computer Science and Engineering, Pennsylvania State University, University Park, PA 16802. Email: [email protected]. Supported by NSF grant CCR-0208821.

Eduardo Sontag
Department of Mathematics, Rutgers University, New Brunswick, NJ 08903. Email: [email protected]. Supported in part by NSF grant CCR-0206789.
Abstract
In this paper we investigate the computational complexity of a combinatorial problem that arises in the reverse engineering of protein and gene networks. Our contributions are as follows:

• We abstract a combinatorial version of the problem and observe that it is equivalent to the set multicover problem when the coverage factor $k$ is a function of the number of elements $n$ of the universe. An important special case for our application is the case in which $k = n - 1$.

• We observe that the standard greedy algorithm produces an approximation ratio of $\Omega(\log n)$ even if $k$ is large, i.e., $k = n - c$ for some constant $c > 0$.

• Let $1 < a < n$ denote the maximum number of elements in any given set in our set multicover problem. We show that a non-trivial analysis of a simple randomized polynomial-time approximation algorithm for this problem yields an expected approximation ratio $E[r(a, k)]$ that is an increasing function of $a/k$. The behavior of $E[r(a, k)]$ is roughly as follows: it is about $\ln(a/k)$ when $a/k$ is at least about $e^2 \approx 7.39$, and for smaller values of $a/k$ it decreases towards 2 exponentially fast with increasing $k$, with $\lim_{a/k \to 0} E[r(a, k)] \le 2$. More precisely, $E[r(a, k)]$ is at most

$$\begin{cases} 1 + \ln a & \text{if } k = 1\\ \left(1 + e^{-(k-1)/5}\right)\ln\big(a/(k-1)\big) & \text{if } a/(k-1) \ge e^2 \text{ and } k > 1\\ \min\left\{2 + 2e^{-(k-1)/5},\;\; 2 + e^{-2} + e^{-9/8}\,\frac{a}{k}\right\} & \text{if } a/(k-1) < e^2 \text{ and } k > 1\end{cases}$$

Our randomized algorithm is a cascade of a deterministic rounding step and a randomized rounding step parameterized by a quantity $\beta$, followed by a greedy solution for the remaining problem.
1 Introduction
Let $[x, y]$ denote the set $\{x, x+1, x+2, \ldots, y\}$ for integers $x \le y$. The set multicover problem is a well-known combinatorial problem that can be defined as follows.

Problem name: SC$_k$.

Instance $\langle n, m, k\rangle$: a universe $U = [1, n]$, sets $S_1, S_2, \ldots, S_m \subseteq U$ with $\bigcup_{j=1}^{m} S_j = U$, and a coverage factor (positive integer) $k$.
Valid Solutions: a subset of indices $I \subseteq [1, m]$ such that, for every element $x \in U$, $|\{\, j \in I : x \in S_j \,\}| \ge k$.

Objective: Minimize $|I|$.
SC$_1$ is simply called the Set Cover problem and denoted by SC; we will denote an instance of SC simply by $\langle n, m\rangle$ instead of $\langle n, m, 1\rangle$.
Both SC and SC$_k$ are already well known in the realm of design and analysis of combinatorial algorithms (e.g., see [13]). Let $3 \le a < n$ denote the maximum number of elements in any set, i.e., $a = \max_{i \in [1,m]} |S_i|$. We summarize some of the known relevant results for them below.
Fact 1

(a) [4] Assuming NP $\not\subseteq$ DTIME$(n^{\log\log n})$, instances $\langle n, m\rangle$ of the SC problem cannot be approximated to within a factor of $(1-\varepsilon)\ln n$ for any constant $0 < \varepsilon < 1$ in polynomial time.

(b) [13] An instance $\langle n, m, k\rangle$ of the SC$_k$ problem can be $(1 + \ln a)$-approximated in $O(nmk)$ time by a simple greedy heuristic that, at every step, selects a new set that covers the maximum number of those elements that have not yet been covered at least $k$ times. It is also possible to design randomized approximation algorithms with similar expected approximation ratios.
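To make the greedy heuristic of Fact 1(b) concrete, here is a minimal Python sketch; the instance encoding (a list of Python sets over a 0-indexed universe) is our own illustrative choice, not notation from the paper.

```python
def greedy_multicover(n, sets, k):
    """Greedy heuristic of Fact 1(b) for set multicover.

    n: universe is {0, ..., n-1}; sets: list of Python sets; k: coverage factor.
    Returns the indices of the selected sets.
    """
    coverage = [0] * n                        # times each element is covered so far
    unchosen = set(range(len(sets)))
    chosen = []

    def deficiency(j):
        # number of elements of sets[j] still covered fewer than k times
        return sum(1 for x in sets[j] if coverage[x] < k)

    while any(c < k for c in coverage):
        best = max(unchosen, key=deficiency)  # set covering most deficient elements
        if deficiency(best) == 0:
            raise ValueError("instance is infeasible")
        unchosen.remove(best)
        chosen.append(best)
        for x in sets[best]:
            coverage[x] += 1
    return chosen

# Example: cover each of {0, 1, 2} at least twice.
print(greedy_multicover(3, [{0, 1}, {1, 2}, {0, 2}, {0, 1, 2}], 2))
```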
1.1 Summary of Results
The combinatorial problems investigated in this paper that arise out of reverse engineering of gene and protein networks can be shown to be equivalent to SC$_k$ when $k$ is a function of $n$. One case that is of significant interest is when $k$ is large, i.e., $k = n - c$ for some constant $c > 0$, but the case of non-constant $c$ is also interesting (cf. Questions (Q1) and (Q2) in Section 2). Our contributions in this paper are as follows:
• In Section 2 we discuss the combinatorial problems (Questions (Q1) and (Q2)), together with their biological motivations, that are of relevance to the reverse engineering of protein and gene networks. We then observe, in Section 2.3, using a standard duality, that these problems are indeed equivalent to SC$_k$ for appropriate values of $k$.

• In Lemma 2 in Section 3.1, we observe that the standard greedy algorithm for SC$_k$ produces an approximation ratio of $\Omega(\log n)$ even if $k$ is large, i.e., $k = n - c$ for some constant $c > 0$.
• Let $1 < a < n$ denote the maximum number of elements in any given set in our set multicover problem. In Theorem 3 in Section 3.2, we show that a non-trivial analysis of a simple randomized polynomial-time approximation algorithm for this problem yields an expected approximation ratio $E[r(a, k)]$ that is an increasing function of $a/k$. The behavior of $E[r(a, k)]$ is roughly as follows: it is about $\ln(a/k)$ when $a/k$ is at least about $e^2 \approx 7.39$, and for smaller values of $a/k$ it decreases towards 2 exponentially fast with increasing $k$, with $\lim_{a/k \to 0} E[r(a, k)] \le 2$. More precisely, $E[r(a, k)]$ is at most

$$\begin{cases} 1 + \ln a & \text{if } k = 1\\ \left(1 + e^{-(k-1)/5}\right)\ln\big(a/(k-1)\big) & \text{if } a/(k-1) \ge e^2 \text{ and } k > 1\\ \min\left\{2 + 2e^{-(k-1)/5},\;\; 2 + e^{-2} + e^{-9/8}\,\frac{a}{k}\right\} & \text{if } a/(k-1) < e^2 \text{ and } k > 1\end{cases}$$
2 Motivations
The goal of this section is to define a computational problem that arises in the context of experimental design for reverse engineering of protein and gene networks. We will first pose the problem in linear algebra terms, and then recast it as a combinatorial question. After that, we will discuss its motivations from systems biology. Finally, we will provide a precise definition of the combinatorial problems and point out their equivalence to the set multicover problem via a standard duality.
Our problem is described in terms of two matrices $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ such that:

• $A$ is unknown;

• $B$ is initially unknown, but each of its columns, denoted $B_1, B_2, \ldots, B_m$, can be retrieved with a unit-cost query;

• the columns of $B$ are in general position, i.e., each subset of $k \le n$ columns of $B$ is linearly independent;

• the zero structure of the matrix $C = AB = (c_{ij})$ is known, i.e., a binary matrix $C^0 = (c^0_{ij}) \in \{0,1\}^{n \times m}$ is given, and it is known that $c_{ij} = 0$ for each $i, j$ for which $c^0_{ij} = 0$.
The objective, roughly speaking, is to obtain as much information as possible about A (which, in
the motivating application, describes regulatory interactions among genes and/or proteins), while
performing few queries (each of which may represent the measuring of a complete pattern of gene
expression, done under a different set of experimental conditions).
Notice that there are intrinsic limits to what can be accomplished: if we multiply each row of
A by some nonzero number, then the zero structure of C is unchanged. Thus, the best that we can
hope for is to identify the rows of $A$ up to scalings (in abstract mathematical terms, as elements of the projective space $\mathbb{P}^{n-1}$). To better understand these geometric constraints, let us reformulate the problem as follows. Let $A_i$ denote the $i$th row of $A$. Then the specification of $C^0$ amounts to the specification of the orthogonality relations $A_i \cdot B_j = 0$ for each pair $i, j$ for which $c^0_{ij} = 0$. Suppose that we decide to query the columns of $B$ indexed by $J = \{j_1, \ldots, j_\ell\}$. Then the information obtained about $A$ may be summarized as $A_i \in H_{J,i}^{\perp}$, where $\perp$ indicates orthogonal complement, and

$$H_{J,i} = \mathrm{span}\,\{B_j,\; j \in J_i\}, \qquad J_i = \{\, j \mid j \in J \text{ and } c^0_{ij} = 0 \,\}. \tag{1}$$
Suppose now that the set of indices of selected queries $J$ has the property:

$$\text{each set } J_i,\; i = 1, \ldots, n, \text{ has cardinality} \ge n - k, \tag{2}$$

for some given integer $k$. Then, because of the general position assumption, the space $H_{J,i}$ has dimension $\ge n - k$, and hence the space $H_{J,i}^{\perp}$ has dimension at most $k$.

The most desirable special case is that in which $k = 1$. Then $\dim H_{J,i}^{\perp} \le 1$, hence each $A_i$ is uniquely determined up to a scalar multiple, which is the best that could be theoretically achieved.
Often, in fact, finding the sign pattern (such as $(+, +, -, 0, 0, -, \ldots)$) of each row of $A$ is the main experimental goal (this would correspond, in our motivating application, to determining whether the regulatory interactions affecting each given gene or protein are inhibitory or catalytic). Assuming that the degenerate case $H_{J,i}^{\perp} = \{0\}$ does not hold (which would determine $A_i = 0$), once an arbitrary nonzero element $v$ of the line $H_{J,i}^{\perp}$ has been picked, there are only two sign patterns possible for $A_i$ (the pattern of $v$ and that of $-v$). If, in addition, one knows at least one nonzero sign in $A_i$, then the sign structure of the whole row is uniquely determined (in the motivating biological question, typically one such sign is indeed known; for example, the diagonal element $a_{ii}$, i.e., the $i$th entry of $A_i$, is known to be negative, as it represents a degradation rate). Thus, we will be interested in this question:

$$\text{find } J \text{ of minimal cardinality such that } |J_i| \ge n - 1,\; i = 1, \ldots, n. \tag{Q1}$$
If queries have variable unit costs (different experiments have a different associated cost), this
problem must be modified to that of minimizing a suitable linear combination of costs, instead of
the number of queries.
More generally, suppose that the queries that we performed satisfy (2), with k > 1 but small
k. It is not true anymore that there are only two possible sign patterns for any given Ai, but the
number of possibilities is still very small. For simplicity, let us assume that we know that no entry
of Ai is zero (if this is not the case, the number of possibilities may increase, but the argument is
very similar). We wish to prove that the possible number of sign patterns is much smaller than $2^n$. Indeed,
suppose that the queries have been performed, and that we then calculate, based on the obtained $B_j$'s, a basis $\{v_1, \ldots, v_k\}$ of $H_{J,i}^{\perp}$ (assume $\dim H_{J,i}^{\perp} = k$; otherwise pick a smaller $k$). Thus, the vector $A_i$ is known to have the form $\sum_{r=1}^{k} \lambda_r v_r$ for some (unknown) real numbers $\lambda_1, \ldots, \lambda_k$. We may assume that $\lambda_1 \ne 0$ (since, if $A_i = \sum_{r=2}^{k} \lambda_r v_r$, the vector $\epsilon v_1 + \sum_{r=2}^{k} \lambda_r v_r$, with $\epsilon$ small enough, has the same sign pattern as $A_i$, and we are counting the possible sign patterns). If $\lambda_1 > 0$, we may divide by $\lambda_1$ and simply count how many sign patterns there are when $\lambda_1 = 1$; we then double this estimate to include the case $\lambda_1 < 0$. Let $v_r = \mathrm{col}(v_{1r}, \ldots, v_{nr})$ for each $r = 1, \ldots, k$. Since no coordinate of $A_i$ is zero, we know that $A_i$ (identified with the vector $(\lambda_2, \ldots, \lambda_k)$) belongs to the set $\mathcal{C} = \mathbb{R}^{k-1} \setminus (L_1 \cup \ldots \cup L_n)$ where, for each $1 \le s \le n$, $L_s$ is the hyperplane in $\mathbb{R}^{k-1}$ consisting of all those vectors $(\lambda_2, \ldots, \lambda_k)$ such that $\sum_{r=2}^{k} \lambda_r v_{sr} = -v_{s1}$.
On each connected component of $\mathcal{C}$, sign patterns are constant. Thus the possible number of sign patterns is upper bounded by the maximum possible number of connected regions determined by $n$ hyperplanes in dimension $k - 1$. A result of L. Schläfli (see [3, 10], and also [11] for a discussion, proof, and relations to Vapnik-Chervonenkis dimension) states that this number is bounded above by $\Phi(n, k-1)$, provided that $k - 1 \le n$, where $\Phi(n, d)$ is the number of possible subsets of an $n$-element set with at most $d$ elements, that is,

$$\Phi(n, d) = \sum_{i=0}^{d}\binom{n}{i} \le 2\,\frac{n^d}{d!} \le 2\left(\frac{en}{d}\right)^{d}.$$

Doubling the estimate to include $\lambda_1 < 0$, we have the upper bound $2\Phi(n, k-1)$. For example, $\Phi(n, 0) = 1$, $\Phi(n, 1) = n + 1$, and $\Phi(n, 2) = \frac{1}{2}(n^2 + n + 2)$. Thus we have an estimate of 2 sign patterns when $k = 1$ (as obtained earlier), $2n + 2$ when $k = 2$, $n^2 + n + 2$ when $k = 3$, and so forth. In general, the number grows only polynomially in $n$ (for fixed $k$).
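Since $\Phi(n, d)$ is just a partial sum of binomial coefficients, the bound $2\Phi(n, k-1)$ on the number of sign patterns is easy to tabulate; the following small Python sketch (ours, for illustration) does so.

```python
from math import comb

def phi(n, d):
    """Phi(n, d): number of subsets of an n-element set with at most d elements."""
    return sum(comb(n, i) for i in range(d + 1))

# 2 * Phi(n, k-1) upper-bounds the number of sign patterns of a row A_i.
for k in (1, 2, 3, 4):
    print(k, [2 * phi(n, k - 1) for n in (5, 10, 20)])
# k=1 gives 2; k=2 gives 2n+2; k=3 gives n^2+n+2, growing polynomially in n.
```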
These considerations lead us to formulate the generalized problem, for each fixed $k$: find $J$ of minimal cardinality such that $|J_i| \ge n - k$ for all $i = 1, \ldots, n$. Recalling the definition (1) of $J_i$, we see that $J_i = J \cap T_i$, where $T_i = \{\, j \mid c^0_{ij} = 0 \,\}$. Thus, we can reformulate our question purely combinatorially, as a more general version of Question (Q1), as follows:

$$\text{Given sets } T_i \subseteq \{1, \ldots, m\},\; i = 1, \ldots, n, \text{ find } J \subseteq \{1, \ldots, m\} \text{ of minimal cardinality such that } |J \cap T_i| \ge n - k,\; 1 \le i \le n. \tag{Q2}$$
For example, suppose that $k = 1$, and pick the matrix $C^0 \in \{0,1\}^{n \times n}$ in such a way that the columns of $C^0$ are the binary vectors representing all the $(n-1)$-element subsets of $\{1, \ldots, n\}$ (so $m = n$); in this case, the set $J$ must equal $\{1, \ldots, m\}$ and hence has cardinality $n$. On the other hand, also with $k = 1$, if we pick the matrix $C^0$ in such a way that the columns of $C^0$ are the binary vectors representing all the 2-element subsets of $\{1, \ldots, n\}$ (so $m = n(n-1)/2$), then $J$ must again be the set of all columns (because, since there are only two zeroes in each column, there can only be a total of $2\ell$ zeroes, $\ell = |J|$, in the submatrix indexed by $J$, but we also have $2\ell \ge n(n-1)$, since each of the $n$ rows must have $n - 1$ zeros); thus in this case the minimal cardinality is $n(n-1)/2$.
2.1 Motivations from Systems Biology
This problem was motivated by the setup for reverse-engineering of protein and gene networks
described in [7, 8] and reviewed in [12]. We assume that the time evolution of a vector of state
variables x(t) = (x1(t), . . . , xn(t)) is described by a system of differential equations:
$$\dot{x}_1 = f_1(x_1, \ldots, x_n, p_1, \ldots, p_m)$$
$$\dot{x}_2 = f_2(x_1, \ldots, x_n, p_1, \ldots, p_m)$$
$$\vdots$$
$$\dot{x}_n = f_n(x_1, \ldots, x_n, p_1, \ldots, p_m)$$
(in vector form, $\dot{x} = f(x, p)$), where $p = (p_1, \ldots, p_m)$ is a vector of parameters, representing for instance the concentrations of certain enzymes which are maintained at a constant value during a particular experiment. There is a reference value $\bar{p}$ of $p$, which represents wild type (that is, normal) conditions, and a corresponding steady state $\bar{x}$. That is, $f(\bar{x}, \bar{p}) = 0$. We are interested in obtaining information about the Jacobian of the vector field $f$ evaluated at $(\bar{x}, \bar{p})$, or at least about the signs of the derivatives $\partial f_i/\partial x_j(\bar{x}, \bar{p})$. For example, if $\partial f_i/\partial x_j > 0$, this means that $x_j$ has a positive (catalytic) effect upon the rate of formation of $x_i$. The critical assumption, indeed the main point of [7, 8], is that, while we do not know the form of $f$, we do know that certain parameters $p_j$ do not directly affect certain variables $x_i$. This amounts to a priori biological knowledge of specificity of enzymes and similar data. This knowledge will be summarized by the binary matrix $C^0 = (c^0_{ij}) \in \{0,1\}^{n \times m}$, where $c^0_{ij} = 0$ means that $p_j$ does not appear in the equation for $\dot{x}_i$, that is, $\partial f_i/\partial p_j \equiv 0$.
The experimental protocol allows us to make small perturbations in which we change one of the parameters, say the $k$th one, while leaving the remaining ones constant. (A generalization would allow for the simultaneous perturbation of more than one parameter.) For the perturbed vector $p \approx \bar{p}$, we measure the resulting steady state vector $x = \xi(p)$. (Mathematically, we suppose that for each vector of parameters $p$ in a neighborhood of $\bar{p}$ there is a unique steady state $\xi(p)$ of the system, where $\xi$ is a differentiable function. In practice, each such perturbation experiment involves letting the system relax to steady state, and the use of some biological reporting mechanism, such as microarrays, in order to measure the expression profile of the variables $x_i$.) For each of the possible $m$ experiments, in which a given $p_j$ is perturbed, we may estimate the $n$ sensitivities

$$b_{ij} = \frac{\partial \xi_i}{\partial p_j}(\bar{p}) \approx \frac{1}{p_j - \bar{p}_j}\left(\xi_i\big(\bar{p} + (p_j - \bar{p}_j)e_j\big) - \xi_i(\bar{p})\right), \qquad i = 1, \ldots, n$$

(where $e_j \in \mathbb{R}^m$ is the $j$th canonical basis vector). We let $B$ denote the matrix consisting of the $b_{ij}$'s. (See [7, 8] for a discussion of the fact that division by $p_j - \bar{p}_j$, which is undesirable numerically, is not in fact necessary.) Finally, we let $A$ be the Jacobian matrix $\partial f/\partial x$ and let $C$ be the negative of the Jacobian matrix $\partial f/\partial p$. From $f(\xi(p), p) \equiv 0$, taking derivatives with respect to $p$ and using the chain rule, we get that $AB = C$. This brings us to the problem stated in this paper. (The general position assumption is reasonable, since we are dealing with experimental data.)
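As a quick sanity check of the identity $AB = C$, the following self-contained Python sketch builds a toy linear system $f(x, p) = Mx + Np$ (so all Jacobians are exact), estimates $B$ by finite-difference perturbations of the steady state as above, and verifies $AB \approx C$ numerically; the toy matrices are our own illustration and carry no biological meaning.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6
M = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # A = df/dx
N = rng.standard_normal((n, m))                     # df/dp, so C = -N

def xi(p):
    # Steady state of f(x, p) = M x + N p = 0, i.e., xi(p) = -M^{-1} N p.
    return np.linalg.solve(M, -N @ p)

p_bar, eps = rng.standard_normal(m), 1e-6
B = np.empty((n, m))
for j in range(m):                    # one perturbation experiment per parameter
    e = np.zeros(m)
    e[j] = eps
    B[:, j] = (xi(p_bar + e) - xi(p_bar)) / eps

A, C = M, -N
print(np.allclose(A @ B, C, atol=1e-5))  # True: AB = C up to discretization error
```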
2.2 Combinatorial Formulation

Using the convention of Section 1, Questions (Q1) and (Q2) can be restated as the following combinatorial problem (with $k = 1$ giving (Q1)):

Problem name: CP$_k$.

Instance $\langle m, n, k\rangle$: sets $T_1, T_2, \ldots, T_n \subseteq [1, m]$ and a positive integer $k < n$.

Valid Solutions: a subset $J \subseteq [1, m]$ such that $|J \cap T_i| \ge n - k$ for every $i \in [1, n]$.

Objective: Minimize $|J|$.

2.3 Equivalence of CP$_k$ and Set Multicover
We can establish a 1-1 correspondence between an instance $\langle m, n, k\rangle$ of CP$_k$ and an instance $\langle n, m, n-k\rangle$ of SC$_{n-k}$ by defining $S_i = \{\, j \mid i \in T_j \,\}$ for each $i \in [1, m]$. It is easy to verify that $U'$ is a solution to the instance of CP$_k$ if and only if the collection of sets $S_u$ for $u \in U'$ is a solution to the instance of SC$_{n-k}$.
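This correspondence is purely mechanical; the following Python sketch (with our own encoding conventions) performs the translation from CP$_k$ to SC$_{n-k}$.

```python
def cp_to_multicover(m, n, k, T):
    """Translate a CP_k instance into the equivalent SC_{n-k} instance.

    T: list of n sets T_1, ..., T_n over the universe {0, ..., m-1}.
    Returns the m sets S_i = { j : i in T_j } over the universe {0, ..., n-1}
    together with the coverage factor n - k.
    """
    S = [{j for j in range(n) if i in T[j]} for i in range(m)]
    return S, n - k

# A set J of indices covers element j exactly |J ∩ T_j| times in the multicover
# instance, which is why J is feasible for CP_k iff {S_u : u in J} is feasible
# for SC_{n-k}.
```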
3 Approximation Algorithms for SC$_k$

3.1 A Lower Bound for the Greedy Heuristic
Johnson [5] provides an example in which the greedy heuristic for some instance of SC over $n$ elements has an approximation ratio of at least $\log_2 n$. This approach can be generalized to show the following result.
Lemma 2 For any fixed $c > 0$, the greedy heuristic (as described in Fact 1(b)) has an approximation ratio of at least $\left(\frac{1}{2} - o(1)\right)\log_2 n = \Omega(\log n)$ for some instance $\langle n, m, n-c\rangle$ of SC$_{n-c}$.
Proof. We construct an instance of SC$_{n-c}$ consisting of an optimal collection $S$ of sets together with another collection of sets which will force the greedy heuristic to use at least $(n-c)\left(\frac{1}{2} - o(1)\right)\log_2 n$ sets. Partition $[1, \Delta]$ into $p = 1 + \log_4 \Delta$ disjoint sets $P_1, P_2, \ldots, P_p$ such that $|P_i| = \frac{3\Delta}{4^i}$ for $i \in [1, p]$. Observe that $p > \log_4 n$. Similarly, partition $[\Delta+1, 2\Delta]$ into $p = 1 + \log_4 \Delta$ disjoint sets $Q_1, Q_2, \ldots, Q_p$ such that $|Q_i| = \frac{3\Delta}{4^i}$ for $i \in [1, p]$. Let $S' = \{S_1, S_2, \ldots, S_{n-c}\} \subseteq S$. Now, for each $P_i \cup Q_i$ and each distinct $S_j \in S'$, create a set $T_{i,j} = P_i \cup Q_i \cup S_j$. We claim that the greedy heuristic will pick the sets $T_{1,1}, \ldots, T_{1,n-c}, T_{2,1}, \ldots, T_{2,n-c}, \ldots, T_{q,1}, \ldots, T_{q,n-c}$ with $q = \left(\frac{1}{2} - o(1)\right)\log_2 n < p$. This can be shown by induction as follows:

• The greedy heuristic must start by picking the sets $T_{1,1}, \ldots, T_{1,n-c}$ in some arbitrary order: until all these sets have been picked, each unpicked one contains at least $\frac{3\Delta}{4}\cdot 2 = \frac{3\Delta}{2}$ elements that have not been covered $n - c$ times, whereas each set in the optimal cover has at most $\Delta + 2 + \log_2 n$ elements, and $\Delta$ is sufficiently large.

• Inductively, suppose that the greedy heuristic has picked all sets $T_{i,j}$ with $i < q$ when it considers some $T_{q,r}$. Obviously $T_{q,r}$ contains at least $\frac{3\Delta}{4^q}\cdot 2 = \frac{6\Delta}{4^q}$ elements that are not yet covered $n - c$ times. On the other hand, the number of elements that are not yet covered $n - c$ times in any set from our optimal cover is at most

$$\left(\Delta - \sum_{i=1}^{q-1}\frac{3\Delta}{4^i}\right) + 2 + \log_2 n = \frac{\Delta}{4^{q-1}} + 2 + \log_2 n = \frac{4\Delta}{4^q} + 2 + \log_2 n < \frac{6\Delta}{4^q},$$

provided $q < \log_4 \frac{2\Delta}{2 + \log_2 n}$. Since $\log_4 \frac{2\Delta}{2 + \log_2 n} = \left(\frac{1}{2} - o(1)\right)\log_2 n$, the inequality holds for every $q \in \left[1, \left(\frac{1}{2} - o(1)\right)\log_2 n\right]$.

Since the optimal cover $S$ contains only $O(n - c)$ sets while the greedy heuristic picks at least $(n-c)\left(\frac{1}{2} - o(1)\right)\log_2 n$ of the sets $T_{i,j}$, the claimed approximation ratio follows.
3.2 A Randomized Approximation Algorithm
As stated before, an instance $\langle n, m, k\rangle$ of SC$_k$ can be $(1 + \ln a)$-approximated in $O(nmk)$ time for any $k$, where $a = \max_{S \in \mathcal{S}} |S|$. In this section, we provide a randomized algorithm with an expected performance ratio better than $(1 + \ln a)$ for larger $k$. Let $\mathcal{S} = \{S_1, S_2, \ldots, S_m\}$.
Our algorithm presented below, as well as our subsequent discussions and proofs, is formulated with the help of the following vector notations:

• All our vectors have $m$ coordinates, with the coordinates indexed by the sets $S_1, \ldots, S_m$ of $\mathcal{S}$.

• If $V \subseteq \mathcal{S}$, then $v \in \{0,1\}^m$ denotes the characteristic vector of $V$: $v_A = 1$ if $A \in V$ and $v_A = 0$ if $A \notin V$.

• For each element $i \in U$, $s^i$ denotes the characteristic vector of the collection of sets that contain $i$.

With these notations, SC$_k$ is the integer program

$$\text{minimize } \mathbf{1}x \quad\text{subject to}\quad s^i x \ge k \text{ for each } i \in U, \qquad x_A \in \{0, 1\} \text{ for each } A \in \mathcal{S}.$$

A linear programming (LP) relaxation of the above formulation is obtained by replacing each constraint $x_A \in \{0, 1\}$ by $0 \le x_A \le 1$. The following randomized approximation algorithm for SC$_k$ can then be designed:
1. Select an appropriate positive constant $\beta > 1$ in the following manner:

$$\beta = \begin{cases} \ln a & \text{if } k = 1\\ \ln\big(a/(k-1)\big) & \text{if } a/(k-1) \ge e^2 \text{ and } k > 1\\ 2 & \text{otherwise}\end{cases}$$

2. Find a solution $x$ to the LP relaxation via any polynomial-time algorithm for solving linear programs (e.g., [6]).

3. (deterministic rounding) Form a family of sets $C^0 = \{A \in \mathcal{S} : \beta x_A \ge 1\}$.

4. (randomized rounding) Form a family of sets $C^1 \subseteq \mathcal{S} \setminus C^0$ by independent random choices such that $\Pr[A \in C^1] = \beta x_A$.

5. (greedy selection) Form a family of sets $C^2$ as follows: while $s^i(c^0 + c^1 + c^2) < k$ for some $i \in U$, insert into $C^2$ any $A \notin C^0 \cup C^1 \cup C^2$ with $i \in A$.

6. Return $C = C^0 \cup C^1 \cup C^2$ as our solution.
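A compact Python rendering of Steps 1-6 is sketched below; the LP of Step 2 is solved with scipy.optimize.linprog, and the instance encoding and tie-breaking in Step 5 are our own choices (the paper does not prescribe them).

```python
import math, random
import numpy as np
from scipy.optimize import linprog

def randomized_multicover(n, sets, k):
    m = len(sets)
    a = max(len(S) for S in sets)
    # Step 1: choose beta.
    if k == 1:
        beta = math.log(a)
    elif a / (k - 1) >= math.e ** 2:
        beta = math.log(a / (k - 1))
    else:
        beta = 2.0
    # Step 2: LP relaxation  min 1.x  s.t.  s^i x >= k for all i,  0 <= x <= 1.
    cover = np.array([[float(i in S) for S in sets] for i in range(n)])
    x = linprog(np.ones(m), A_ub=-cover, b_ub=-k * np.ones(n),
                bounds=[(0, 1)] * m).x
    # Step 3 (deterministic rounding) and Step 4 (randomized rounding).
    C = {A for A in range(m) if beta * x[A] >= 1}
    C |= {A for A in range(m)
          if A not in C and random.random() < beta * x[A]}
    # Step 5: greedy completion until every element is covered k times.
    for i in range(n):
        while sum(1 for A in C if i in sets[A]) < k:
            C.add(next(A for A in range(m) if A not in C and i in sets[A]))
    return C                              # Step 6

# Example usage on the small instance from Fact 1(b)'s sketch:
# print(randomized_multicover(3, [{0, 1}, {1, 2}, {0, 2}, {0, 1, 2}], 2))
```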
Theorem 3

$$E[r(a, k)] \le \begin{cases} 1 + \ln a & \text{if } k = 1\\ \left(1 + e^{-(k-1)/5}\right)\ln\big(a/(k-1)\big) & \text{if } a/(k-1) \ge e^2 \text{ and } k > 1\\ \min\left\{2 + 2e^{-(k-1)/5},\;\; 2 + e^{-2} + e^{-9/8}\,\frac{a}{k}\right\} & \text{if } a/(k-1) < e^2 \text{ and } k > 1\end{cases}$$
Let OPT denote the minimum number of sets used by an optimal solution. Obviously, $\mathrm{OPT} \ge \mathbf{1}x$ and $\mathrm{OPT} \ge \frac{nk}{a}$. A proof of Theorem 3 follows by showing the following upper bounds on $E[r(a, k)]$ and taking the best of these bounds for each value of $a/k$:

$$E[r(a, k)] \le \begin{cases} 1 + \ln a & \text{if } k = 1\\ \left(1 + e^{-(k-1)/5}\right)\ln\big(a/(k-1)\big) & \text{if } a/(k-1) \ge e^2 \text{ and } k > 1\\ 2 + 2e^{-(k-1)/5} & \text{if } a/(k-1) < e^2 \text{ and } k > 1\\ 2 + e^{-2} + e^{-9/8}\,\frac{a}{k} & \text{if } a/(k-1) < e^2 \text{ and } k > 1\end{cases}$$

The case of $k = 1$ was known before and is included for the sake of completeness only.
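For a feel of how these guarantees trade off, this small Python helper (ours) evaluates the bound of Theorem 3:

```python
import math

def ratio_bound(a, k):
    """Upper bound on E[r(a, k)] from Theorem 3."""
    if k == 1:
        return 1 + math.log(a)
    decay = math.exp(-(k - 1) / 5)
    if a / (k - 1) >= math.e ** 2:
        return (1 + decay) * math.log(a / (k - 1))
    return min(2 + 2 * decay, 2 + math.exp(-2) + math.exp(-9 / 8) * a / k)

# ratio_bound(1000, 1)   ~ 7.91   (the classical 1 + ln a guarantee)
# ratio_bound(1000, 100) ~ 2.31   (a/k large: about ln(a/k))
# ratio_bound(1000, 500) ~ 2.00   (a/k small: approaches 2)
```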
3.2.1 Proof of $E[r(a,k)] \le 1 + \ln a$ if $k = 1$, $E[r(a,k)] \le \left(1 + e^{-(k-1)/5}\right)\ln(a/(k-1))$ if $a/(k-1) \ge e^2$ and $k > 1$, and $E[r(a,k)] \le 2 + 2e^{-(k-1)/5}$ if $a/(k-1) < e^2$ and $k > 1$
For our analysis, we first define the following two vectors:

$$x^0_A = \begin{cases} x_A & \text{if } \beta x_A \ge 1\\ 0 & \text{otherwise}\end{cases} \qquad\qquad x^1_A = \begin{cases} 0 & \text{if } \beta x_A \ge 1\\ x_A & \text{otherwise}\end{cases}$$

Note that $c^0_A = \lceil x^0_A \rceil \le \beta x^0_A$. Thus $\mathbf{1}x^0 \le \mathbf{1}c^0 \le \beta\,\mathbf{1}x^0$. Define $\mathrm{bonus} = \beta\,\mathbf{1}x^0 - \mathbf{1}c^0$. It is easy to see that $E[\mathbf{1}(c^0 + c^1)] = \beta\,\mathbf{1}x - \mathrm{bonus}$.

The contribution of a set $A$ to bonus is $\beta x^0_A - c^0_A$. This contribution can be distributed equally among the elements of $A$. Since $|A| \le a$, an element $i \in [1, n]$ receives a total of at least $b^i/a$ of bonus, where $b^i = s^i(\beta x^0 - c^0)$. The random process that forms the set $C^1$ has the following goal from the point of view of element $i$: pick at least $g^i$ sets that contain $i$, where $g^i = k - s^ic^0$. These sets are obtained as successes in Poisson trials whose probabilities of success add up to at least $p^i = \beta(k - s^ix^0)$. Let $y^i$ be the random variable denoting the number of sets that element $i$ contributes to the size of $C^2$; thus, if $h$ is the number of sets containing $i$ that are selected in $C^0$ and in the random trials of Step 4, then $y^i = \max\{0, k - h\}$. Thus,

$$E[\mathbf{1}(c^0 + c^1 + c^2)] \le \beta\,\mathbf{1}x + \sum_{i=1}^{n} E\!\left[y^i - \frac{b^i}{a}\right].$$

Let $q^i = \frac{\beta}{\beta - 1}\, s^i(c^0 - x^0)$. We can parametrize the random process that forms the set $C^2$ from the point of view of element $i$ as follows:

• $g^i$ is the goal for the number of sets to be picked;

• $p^i = \beta(k - s^ix^0) = \beta g^i + (\beta - 1)q^i$ is the sum of the probabilities with which sets are picked;

• $b^i/a$ is the bonus of $i$, where $b^i = s^i(\beta x^0 - c^0) = (\beta - 1)(k - g^i - q^i)$;

• $q^i \ge 0$, $g^i \ge 0$ and $g^i + q^i \le k$;

• $y^i$ measures how much the goal is missed;

• to bound $E[r(a, k)]$ we need to bound $E\!\left[y^i - \frac{b^i}{a}\right]$.
3.2.1.1 Inequalities for the $g$-Shortage Function

In this section we prove some inequalities needed to estimate $E\!\left[y^i - \frac{b^i}{a}\right]$ tightly. Assume that we have a random variable $X$ that is a sum of $N$ independent 0-1 random variables $X_i$. Let $E[X] = \sum_i \Pr[X_i = 1] = \mu$, and let $g < \mu$ be a positive integer. We define the $g$-shortage function as $Y^{\mu}_g = \max\{g - X, 0\}$. Our goal is to estimate $E[Y^{\mu}_g]$.

Lemma 4 $E[Y^{\mu}_g] < e^{-\mu}\sum_{i=0}^{g-1}\frac{(g-i)\,\mu^i}{i!}$.
Proof. Suppose that for some positive numbers $p, q$ and some $X_i$ we have $\Pr[X_i = 1] = p + q$. Consider replacing $X_i$ with two independent random variables $X_{i,0}$ and $X_{i,1}$ such that $\Pr[X_{i,0} = 1] = p$ and $\Pr[X_{i,1} = 1] = q$. We can show that after this replacement $E[Y^{\mu}_g]$ increases, as follows. In terms of our random variables before the replacement, define $r_j = \Pr[X - X_i = g - j]$. Let $X'$ be the sum of our random variables after the replacement and let $Y'^{\mu}_g$ be defined in terms of $X'$. Let $a = \sum_{j=1}^{g} j\,r_j$, $b = \sum_{j=2}^{g} (j-1)\,r_j$, and $c = \sum_{j=3}^{g} (j-2)\,r_j$. Then,

$$E[Y'^{\mu}_g] = (1-p)(1-q)\,a + p(1-q)\,b + (1-p)q\,b + pq\,c = (1 - p - q + pq)\,a + (p + q - 2pq)\,b + pq\,c,$$
$$E[Y^{\mu}_g] = (1 - p - q)\,a + (p + q)\,b,$$

so that $E[Y'^{\mu}_g] - E[Y^{\mu}_g] = pq\,(a - 2b + c) = pq\,r_1 \ge 0$. By repeatedly applying this replacement, $E[Y^{\mu}_g]$ is bounded by its value in the limiting case in which $X$ is a sum of $N$ independent 0-1 random variables, each with probability $\mu/N$ of being 1, as $N \to \infty$; in this limit $X$ has the Poisson distribution with mean $\mu$:

$$\lim_{N\to\infty}\binom{N}{j}\left(\frac{\mu}{N}\right)^{j}\left(1 - \frac{\mu}{N}\right)^{N-j} = \lim_{N\to\infty}\frac{N!}{(N-j)!\,j!}\cdot\frac{\mu^j}{N^j}\left(1 - \frac{\mu}{N}\right)^{N-j} = e^{-\mu}\,\frac{\mu^j}{j!},$$

where the last equality follows from standard estimates in probability theory. Hence $E[Y^{\mu}_g] < \sum_{j=0}^{g-1}(g-j)\,e^{-\mu}\frac{\mu^j}{j!}$.
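The Poisson bound of Lemma 4 is easy to probe numerically; the Python snippet below (ours) compares a Monte-Carlo estimate of $E[Y^{\mu}_g]$ with the closed-form expression $e^{-\mu}\sum_{i<g}(g-i)\mu^i/i!$.

```python
import math, random

def shortage_bound(g, mu):
    # Lemma 4: e^{-mu} * sum_{i=0}^{g-1} (g - i) * mu^i / i!
    return math.exp(-mu) * sum((g - i) * mu ** i / math.factorial(i)
                               for i in range(g))

def shortage_mc(g, probs, trials=200_000):
    # Monte-Carlo E[max(g - X, 0)] for X a sum of independent 0-1 variables.
    total = 0
    for _ in range(trials):
        x = sum(random.random() < p for p in probs)
        total += max(g - x, 0)
    return total / trials

probs = [0.3] * 20                       # mu = 6
print(shortage_mc(3, probs), "<", shortage_bound(3, 6.0))  # ~0.044 < ~0.082
```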
From now on we will assume the worst-case distribution of $Y^{\mu}_g$, so we will assume that the above inequality in Lemma 4 is actually an equality (as it becomes so in the limit), i.e., we assume

$$E[Y^{\mu}_g] = e^{-\mu}\sum_{i=0}^{g-1}\frac{(g-i)\,\mu^i}{i!}.$$

For a fixed $\beta$, we will need to estimate the growth of $E[Y^{\beta g}_g]$ as a function of $g$. Let $\Phi_g(\beta) = e^{\beta g}\,E[Y^{\beta g}_g] = \sum_{i=0}^{g-1}\frac{(g-i)\,(\beta g)^i}{i!}$.
Lemma 5 $\Phi_g(1) = \dfrac{g^g}{(g-1)!}$.

Proof.

$$\Phi_g(1) = \sum_{i=0}^{g-1}\frac{g^i(g-i)}{i!} = \sum_{i=0}^{g-1}\frac{g^{i+1}}{i!} - \sum_{i=1}^{g-1}\frac{g^i}{(i-1)!} = \sum_{i=0}^{g-1}\frac{g^{i+1}}{i!} - \sum_{i=0}^{g-2}\frac{g^{i+1}}{i!} = \frac{g^g}{(g-1)!}.$$
Lemma 6 $\dfrac{\Phi_{g+1}(\beta)}{\Phi_g(\beta)}$ is a decreasing function of $\beta$.
Proof. By definition, $\Phi_g(\beta) = \sum_{i=0}^{g-1}\frac{a_i\,\beta^i}{i!}$ where $a_i = g^i(g-i)$. Let $f(\beta) = \Phi_{g+1}(\beta)$ and $t(\beta) = \Phi_g(\beta)$. We need to show that, for a given fixed $g$, $f(\beta)/t(\beta)$ is a decreasing function of $\beta$. The derivative of $f(\beta)/t(\beta)$ is $\frac{f'(\beta)t(\beta) - t'(\beta)f(\beta)}{(t(\beta))^2}$. We claim that the numerator $f'(\beta)t(\beta) - t'(\beta)f(\beta)$ is a polynomial all of whose coefficients are negative, which then proves that the derivative is negative (for all $\beta > 0$). To prove this, it suffices to show that if $p(\beta) = f'(\beta)t(\beta) - t'(\beta)f(\beta)$ then $p^{(k)}(0) < 0$ for all $k$. Note that

$$t^{(k)}(0) = \begin{cases} g^k(g-k) & \text{if } 0 \le k \le g\\ 0 & \text{if } k > g\end{cases} \qquad\qquad f^{(k)}(0) = \begin{cases} (g+1)^k(g+1-k) & \text{if } 0 \le k \le g\\ 0 & \text{if } k > g\end{cases}$$

Expanding $p^{(k)}(0)$ via the Leibniz rule and comparing the resulting terms pairwise, the claim reduces to showing that $\ln\frac{g+1}{g} < \frac{1}{g-1}$, or, equivalently, $1 + \frac{1}{g} < e^{\frac{1}{g-1}}$. But, obviously, $1 + \frac{1}{g} < e^{1/g} < e^{\frac{1}{g-1}}$.
Lemma 7 $\dfrac{E\big[Y^{\beta g}_g\big]}{E\big[Y^{\beta(g-1)}_{g-1}\big]} \le e^{-\beta}\left(\dfrac{g}{g-1}\right)^{g}$.

Proof. Using Lemma 6 (with $\beta \ge 1$) and Lemma 5,

$$\frac{E\big[Y^{\beta g}_g\big]}{E\big[Y^{\beta(g-1)}_{g-1}\big]} = e^{-\beta}\,\frac{\Phi_g(\beta)}{\Phi_{g-1}(\beta)} \le e^{-\beta}\,\frac{\Phi_g(1)}{\Phi_{g-1}(1)} = e^{-\beta}\,\frac{g^g/(g-1)!}{(g-1)^{g-1}/(g-2)!} = e^{-\beta}\left(\frac{g}{g-1}\right)^{g}.$$
The last lemma characterizes the impact of extra probabilities on the expected value.
Lemma 8 $\dfrac{E\big[Y^{\beta g + q}_g\big]}{E\big[Y^{\beta g}_g\big]} < e^{-q(1 - 1/\beta)}$.

Proof. The ratio is $e^{-q}$ times the ratio of two polynomials, in which the terms of the upper polynomial are larger than the corresponding terms of the lower by a factor of at most

$$\left(\frac{\beta g + q}{\beta g}\right)^{g-1} = \left(1 + \frac{q}{\beta g}\right)^{g-1} < e^{q/\beta}.$$
3.2.1.2 Putting All the Pieces Together

In this section we put all the pieces together from the previous two subsections to prove our claim on $E[r(a, k)]$. We assume that $\beta \ge 2$ if $k > 1$. Because we perform the analysis from the point of view of a fixed element $i$, we will skip $i$ as a superscript as appropriate. As we observed in Section 3.2.1, we need to estimate $E\!\left[y - \frac{b}{a}\right]$, where $b = (\beta - 1)(k - g - q)$. We will also use the notations $p$ and $q$ as defined there.
We first observe that, for our choice of $\beta$,

$$e^{-\beta} \le \frac{(\beta - 1)(k - 1)}{a}, \quad\text{or equivalently,}\quad \beta + \ln(\beta - 1) \ge \ln\frac{a}{k-1}. \tag{3}$$

Indeed, if $a/(k-1) \ge e^2$ then $\beta = \ln(a/(k-1)) \ge 2$ and $\beta + \ln(\beta - 1) \ge \beta = \ln\frac{a}{k-1}$; otherwise $\beta = 2$ and $e^{-2} < \frac{k-1}{a} = \frac{(\beta-1)(k-1)}{a}$.
Lemma 9 $\dfrac{E\big[Y^{\beta g + (\beta-1)q}_g\big]}{E\big[Y^{\beta}_1\big]} \le e^{-(g+q-1)/5}$.

Proof. First,

$$\frac{E\big[Y^{\beta g + (\beta-1)q}_g\big]}{E\big[Y^{\beta g}_g\big]} < e^{-(\beta-1)q\left(1 - \frac{1}{\beta}\right)} \;\text{(by Lemma 8)}\; \le e^{-(\beta-1)q\cdot\frac{1}{2}} \;\text{(since $\beta \ge 2$)}\; \le e^{-q/2} < e^{-q/5} \;\text{(since $\beta \ge 2$)}.$$
Next, observe that

$$\frac{E\big[Y^{\beta g}_g\big]}{E\big[Y^{\beta}_1\big]} = \prod_{i=2}^{g}\frac{E\big[Y^{\beta i}_i\big]}{E\big[Y^{\beta(i-1)}_{i-1}\big]}.$$

If $i = 2$, then $\frac{E[Y^{2\beta}_2]}{E[Y^{\beta}_1]} = e^{-\beta}\,\frac{\Phi_2(\beta)}{\Phi_1(\beta)} = e^{-\beta}(2 + 2\beta)$. Since $e^{-\beta}(2 + 2\beta)$ is a decreasing function of $\beta$ and $\beta \ge 2$, this ratio is at most $6e^{-2} < e^{-1/5}$.

Similarly, if $i = 3$, then $\frac{E[Y^{3\beta}_3]}{E[Y^{\beta}_1]} = e^{-2\beta}\,\frac{\Phi_3(\beta)}{\Phi_1(\beta)} = e^{-2\beta}\left(3 + 6\beta + \frac{9}{2}\beta^2\right) \le 33e^{-4} < e^{-2/5}$.

Similarly, if $i = 4$, then $\frac{E[Y^{4\beta}_4]}{E[Y^{\beta}_1]} = e^{-3\beta}\,\frac{\Phi_4(\beta)}{\Phi_1(\beta)} = e^{-3\beta}\left(4 + 12\beta + 16\beta^2 + \frac{32}{3}\beta^3\right) \le \frac{532}{3}\,e^{-6} < e^{-3/5}$.

Similarly, if $i = 5$, then $\frac{E[Y^{5\beta}_5]}{E[Y^{\beta}_1]} = e^{-4\beta}\,\frac{\Phi_5(\beta)}{\Phi_1(\beta)} = e^{-4\beta}\left(5 + 20\beta + \frac{75}{2}\beta^2 + \frac{125}{3}\beta^3 + \frac{625}{24}\beta^4\right) \le 945\,e^{-8} < e^{-4/5}$.

For $i \ge 6$,

$$\frac{E\big[Y^{\beta i}_i\big]}{E\big[Y^{\beta(i-1)}_{i-1}\big]} \le e^{-\beta}\left(\frac{i}{i-1}\right)^{i} \le e^{-\beta}\left(\frac{6}{5}\right)^{6} \le e^{-2}\left(\frac{6}{5}\right)^{6} < e^{-1/5},$$

where the first inequality is by Lemma 7, the second holds since $\left(\frac{i}{i-1}\right)^{i}$ is a decreasing function of $i$, and the third since $e^{-\beta}$ is a decreasing function of $\beta$ and $\beta \ge 2$. Thus,

$$\frac{E\big[Y^{\beta g}_g\big]}{E\big[Y^{\beta}_1\big]} \le e^{-(g-1)/5},$$

and combining this with the first chain of inequalities,

$$\frac{E\big[Y^{\beta g + (\beta-1)q}_g\big]}{E\big[Y^{\beta}_1\big]} \le e^{-(g+q-1)/5}.$$
Since $E[Y^{\beta}_1] = e^{-\beta}$, Lemma 9 yields $E[y] \le e^{-\beta}e^{-t/5}$, where $t = g + q - 1$ ranges over $[0, k-1]$. Combining this with (3) and $b = (\beta - 1)(k - 1 - t)$,

$$E[y] - \frac{b}{a} \le e^{-\beta}e^{-t/5} - \frac{\beta - 1}{a}(k - 1 - t) \le \frac{(\beta-1)(k-1)}{a}\,e^{-t/5} - \frac{\beta-1}{a}(k - 1 - t) = \frac{(\beta-1)(k-1)}{a}\left(e^{-t/5} - 1 + \frac{t}{k-1}\right).$$

This is a convex function of $t$, so its maximal value must occur at one of the ends of its range. When $t = 0$ we have $0$, and when $t = k-1$ we have $\frac{(\beta-1)(k-1)}{a}\,e^{-(k-1)/5}$. As a result, our expected
performance ratio for $k > 1$ is given by

$$E[r(a, k)] \le \frac{\beta\,\mathbf{1}x + \sum_{i=1}^{n}E\!\left[y^i - \frac{b^i}{a}\right]}{\mathrm{OPT}} \le \frac{\beta\,\mathrm{OPT} + \frac{n(\beta-1)(k-1)}{a}\,e^{-(k-1)/5}}{\mathrm{OPT}} \le \beta + (\beta - 1)\,e^{-(k-1)/5} \le \begin{cases} \left(1 + e^{-(k-1)/5}\right)\ln\big(a/(k-1)\big) & \text{if } a/(k-1) \ge e^2\\ 2\left(1 + e^{-(k-1)/5}\right) & \text{if } a/(k-1) < e^2\end{cases}$$

where we used $\mathrm{OPT} \ge \mathbf{1}x$ and $\mathrm{OPT} \ge \frac{nk}{a}$ (so that $\frac{n(k-1)}{a\,\mathrm{OPT}} \le \frac{k-1}{k} \le 1$), and, in the last step, the choice of $\beta$.
3.2.2 Proof of $E[r(a,k)] \le 2 + e^{-2} + e^{-9/8}\,\frac{a}{k}$ if $a/(k-1) < e^2$ and $k > 1$
Each set $A \in C^0 \cup C^1$ is selected with probability $\min\{\beta x_A, 1\}$. Thus $E[|C^0 \cup C^1|] \le \beta\,\mathbf{1}x \le \beta\,\mathrm{OPT}$. Next we estimate an upper bound on $E[|C^2|]$. For each element $i \in [1, n]$, let the random variable $\psi_i$ be $\max\{k - d_i, 0\}$, where $d_i$ is the number of sets in $C^0 \cup C^1$ that contain $i$. Clearly, $|C^2| \le \sum_{i=1}^{n}\psi_i$, so it suffices to estimate $E[\psi_i]$. Because our estimate will not depend on $i$, we will drop the index $i$ from $\psi_i$ for notational simplicity. Assume that $i \in [1, n]$ is contained in $k - f$ sets from $C^0$, for some $f \le k$. Then $\psi \le f$, and
$$E[\psi] \le \sum_{j=1}^{f}\Pr[\,\psi \ge j\,] = \sum_{j=0}^{f-1}\Pr[\,\psi \ge f - j\,]. \tag{4}$$
Let the random variable $y$ denote the number of sets in $C^1$ that contain $i$. Considering the constraint $s^i x \ge k$ and the fact that we select a set $A \in \mathcal{S} \setminus C^0$ with probability $\beta x_A$, it follows that $E[y] \ge \beta f$. Now,

$$\Pr[\,\psi \ge f - j\,] = \Pr[\,y \le (1 - \delta_j)\beta f\,], \tag{5}$$

where

$$(1 - \delta_j)\beta f = j \;\Longrightarrow\; \beta f - \delta_j \beta f = j \;\Longrightarrow\; \delta_j = \frac{\beta f - j}{\beta f}. \tag{6}$$
Define

$$\Phi(\beta, f, j) = \frac{\beta f\,\delta_j^2}{2} = \frac{(\beta f - j)^2}{2\beta f} = \frac{\beta f}{2} - j + \frac{j^2}{2\beta f}. \tag{7}$$

By a standard Chernoff bound [2] (using $E[y] \ge \beta f$),

$$\Pr[\,y \le (1 - \delta_j)\beta f\,] \le e^{-\beta f\,\delta_j^2/2} = e^{-\Phi(\beta, f, j)}. \tag{8}$$

Combining Equations (4), (5), (7) and (8), we get $E[\psi] \le \sum_{j=0}^{f-1} e^{-\Phi(\beta, f, j)}$. Let $X(\beta, f) = \sum_{j=0}^{f-1} e^{-\Phi(\beta, f, j)}$.

Lemma 10 $X(2, f) \le X(2, 2) = e^{-2} + e^{-9/8}$ for every $f \ge 1$.
Proof. $X(2, 1) = e^{-1} < e^{-2} + e^{-9/8}$. The following series of arguments shows that $X(2, f) \le X(2, 2)$ for all $f > 2$:

• $\Phi(2, f, j) = f - j + \frac{j^2}{4f}$.

• $\Phi(2, p+1, j) - \Phi(2, p, j) = 1 + \frac{j^2}{4}\left(\frac{1}{p+1} - \frac{1}{p}\right) = 1 - \frac{j^2}{4p(p+1)} > \frac{3}{4}$ for all $p \ge 1$ and $0 \le j \le p - 1$.

• Consequently,

$$X(2, f) = \sum_{j=0}^{f-2} e^{-\Phi(2, f-1, j)}\,e^{-(\Phi(2, f, j) - \Phi(2, f-1, j))} + e^{-\Phi(2, f, f-1)} < e^{-3/4}\,X(2, f-1) + e^{-\Phi(2, f, f-1)} < \frac{1}{2}\,X(2, f-1) + e^{-\Phi(2, f, f-1)}.$$

• Hence $X(2, f) < X(2, 2)$ whenever $X(2, f-1) \le X(2, 2)$ and $e^{-\Phi(2, f, f-1)} \le \frac{1}{2}\,X(2, 2)$.

• $X(2, 3) < 0.44 < X(2, 2)$.

• $\Phi(2, 4, 3) > 1.56 > 0.78 + 0.694 > -\ln X(2, 2) + \ln 2$. Moreover, since $\Phi(2, f, f-1)$ increases with $f$, this implies $\Phi(2, f, f-1) > -\ln X(2, 2) + \ln 2$ for all $f \ge 4$. Thus, by induction, $X(2, f) < X(2, 2)$ for all $f \ge 4$.
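Lemma 10 can also be confirmed numerically; the sketch below (ours) evaluates $X(2, f)$ directly from the definition of $\Phi$.

```python
import math

def X(beta, f):
    # X(beta, f) = sum_{j=0}^{f-1} exp(-Phi(beta, f, j)),
    # with Phi(beta, f, j) = (beta*f - j)^2 / (2*beta*f).
    return sum(math.exp(-(beta * f - j) ** 2 / (2 * beta * f)) for j in range(f))

bound = math.exp(-2) + math.exp(-9 / 8)                       # X(2, 2) ~ 0.460
print(all(X(2, f) <= bound + 1e-12 for f in range(1, 200)))   # True
```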
Now we are able to complete the proof of our claimed expected performance ratios as follows. If $a/(k-1) < e^2$, then with $\beta = 2$ we get $E[|C^0 \cup C^1|] \le 2\,\mathrm{OPT}$ and, by Lemma 10, $E[|C^2|] \le \left(e^{-2} + e^{-9/8}\right) n \le \left(e^{-2} + e^{-9/8}\right)\frac{a}{k}\,\mathrm{OPT}$, giving $E[r(a, k)] \le 2 + e^{-2} + e^{-9/8}\,\frac{a}{k}$.
4 Concluding Remarks

Our results raise the following research problems for further investigation:
• Can we generalize our algorithm and bounds on $E[r(a, k)]$ to the weighted set multicover problem, i.e., when each set has a non-negative real number (weight) associated with it and the goal is to minimize the sum of the weights of the selected sets? In our application, this would correspond to the case in which each query has a variable cost associated with it.

• How close can the quantity $\lim_{a/k \to 0} E[r(a, k)]$ be to 1? Some preliminary improved calculations indicate that it might be possible to show that $\lim_{a/k \to 0} E[r(a, k)] \le 1.35$, but we have not been able to prove this completely yet. Conversely, a result showing that $\lim_{a/k \to 0} E[r(a, k)] \ge 1 + \varepsilon$ for some $\varepsilon > 0$ for any randomized approximation algorithm, under suitable complexity-theoretic assumptions, would be interesting as well.
References
[1] N. Alon and J. Spencer, The Probabilistic Method, Wiley Interscience, New York, 1992.
[2] H. Chernoff. A measure of asymptotic efficiency of tests of a hypothesis based on the sum of observations, Annals of Mathematical Statistics, 23: 493-509, 1952.
[3] T. Cover, Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition, IEEE Trans. Electronic Computers EC-14, pp. 326-334, 1965.
Reprinted in Artificial Neural Networks: Concepts and Theory, IEEE Computer Society Press,
Los Alamitos, Calif., 1992, P. Mehra and B. Wah, eds.
[4] U. Feige. A threshold for approximating set cover, JACM, Vol. 45, 1998, pp. 634-652.
[5] D. S. Johnson. Approximation Algorithms for Combinatorial Problems, Journal of Computer
and Systems Sciences, Vol. 9, 1974, pp. 256-278.
[6] N. Karmarkar. A new polynomial-time algorithm for linear programming, Combinatorica, 4: 373-395, 1984.
[7] B. N. Kholodenko, A. Kiyatkin, F. Bruggeman, E.D. Sontag, H. Westerhoff, and J. Hoek,
Untangling the wires: a novel strategy to trace functional interactions in signaling and gene
networks, Proceedings of the National Academy of Sciences USA 99, pp. 12841-12846, 2002.
[8] B. N. Kholodenko and E.D. Sontag, Determination of functional network structure from local
parameter dependence data, arXiv physics/0205003, May 2002.
[9] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, New
York, NY, 1995.
[10] L. Schläfli, Theorie der vielfachen Kontinuität (1852), in Gesammelte Mathematische Abhandlungen, volume 1, pp. 177-392, Birkhäuser, Basel, 1950.
[11] E. D. Sontag, VC dimension of neural networks, in Neural Networks and Machine Learning
(C.M. Bishop, ed.), Springer-Verlag, Berlin, pp. 69-95, 1998.
[12] J. Stark, R. Callard and M. Hubank, From the top down: towards a predictive biology of
signaling networks, Trends Biotechnol. 21, pp. 290-293, 2003.
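[13] V. V. Vazirani, Approximation Algorithms, Springer-Verlag, Berlin, 2001.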