


An Optimized Divide-and-Conquer Algorithm for
the Closest-Pair Problem in the Planar Case
José C. Pereira (DEEI-FCT, Universidade do Algarve, Campus de Gambelas, 8005-139 Faro, Portugal, [email protected])
Fernando G. Lobo (CENSE and DEEI-FCT, Universidade do Algarve, Campus de Gambelas, 8005-139 Faro, Portugal, [email protected])

arXiv:1010.5908v1 [cs.CG] 28 Oct 2010

Abstract
We present an engineered version of the divide-and-conquer algorithm for finding the closest pair of points within a given set of points in the XY-plane. In this version of the algorithm, only two pairwise comparisons are required in the combine step for each point that lies in the 2δ-wide vertical slab. The correctness of the algorithm is shown for all Minkowski distances with p ≥ 1. We also show empirically that, although the time complexity of the algorithm is still O(n lg n), the reduction in the total number of comparisons leads to a significant reduction in the total execution time for sufficiently large inputs.

1 Introduction
The Closest-Pair problem is considered an “easy” Closest-Point problem, in the sense that there
are a number of other geometric problems (e.g. nearest neighbors and minimal spanning trees)
that find the closest pair as part of their solution (Bentley, 1980, p. 226). This problem and its
generalizations arise in areas such as statistics, pattern recognition and molecular biology.
At present, many algorithms are known for solving the Closest-Pair problem in any dimension k ≥ 2 with optimal time complexity (see Smid (2000) for an overview of Closest-Point problem algorithms and generalizations). The Closest-Pair problem is also one of the first non-trivial computational problems that were solved efficiently using the divide-and-conquer strategy, and it has since become a classic textbook example of this technique.
In this paper we consider only algorithms for the Closest-Pair problem that can be implemented in the algebraic computation tree model. In this model, any algorithm for the problem has time complexity Ω(n lg n). With more powerful machine models, where randomization, the floor function, and indirect addressing are available, faster algorithms can be designed (Smid, 2000).

Historical Background
An algorithm with optimal time complexity O(n lg n) for solving the Closest-Pair problem in the planar case appeared for the first time in 1975, in a classic computational geometry paper by Shamos (1975). This algorithm was based on Voronoi polygons.
The first optimal algorithm for solving the Closest-Pair problem in any dimension k ≥ 2 is due to Bentley and Shamos (1976). Using a divide-and-conquer approach to initially solve the problem in the plane1, those authors were able to generalize the planar process to higher dimensions by exploiting a sparsity condition induced over the set of points in k-dimensional space.
For the planar case, the original procedure and other versions of the divide-and-conquer algorithm usually perform at least seven pairwise comparisons for each point in the central slab within the combine step (see Kleinberg and Tardos (2005), Bentley and Shamos (1976), and Cormen et al. (2001), for instance).
In 1998, Zhou, Xiong, and Zhu2 presented an improved version of the planar procedure, in which at most four pairwise comparisons need to be considered in the combine step for each point lying on the left side (alternatively, on the right side) of the central slab. In the same article, Zhou et al. introduced the “complexity of computing distances”, which measures “the number of Euclidean distances to compute by a closest-pair algorithm” (Jiang and Gillespie, 2007). The core idea behind this definition is that, since computing the Euclidean distance is usually more expensive than other basic operations, it may be possible to achieve significant efficiency improvements by reducing this complexity measure.
More recently, Ge, Wang, and Zhu used some sophisticated geometric arguments to show that it is always possible to discard one of the four pairwise comparisons in the combine step, thus reducing significantly the complexity of computing distances, and presented their enhanced version of the Closest-Pair algorithm accordingly (Ge et al., 2006).
In 2007, Jiang and Gillespie presented another version of the Closest-Pair divide-and-conquer algorithm which reduced the complexity of computing distances by a logarithmic factor. However, after performing some algorithmic experimentation, the authors found that, despite this reduction, the new algorithm was “the slowest among the four algorithms” (Jiang and Gillespie, 2007) included in their comparative study. The experimental results also showed that the fastest of the four algorithms was in fact a procedure named Basic-2, which requires two pairwise comparisons in the combine step for each point that lies in the central slab and, therefore, has a relatively high complexity of computing distances. The authors conclude that the simpler design of the combine step, and the consequent, correct imbalance in trading expensive operations for cheaper ones, are the main factors explaining the success of the Basic-2 algorithm.
In this paper we present a detailed version of the Basic-2 algorithm. We show that only two pairwise comparisons are required in the combine step for each point that lies in the central slab, and that this number of comparisons is minimal. This result and the subsequent correctness of the Basic-2 algorithm are shown for all Minkowski distances3 with p ≥ 1.
1 According to Bentley (1980), Shamos attributes the discovery of this procedure to H.R. Strong.
2 The article in question was published in Chinese. See Ge et al. (2006) and Jiang and Gillespie (2007) for some explicit references.
3 In fairness to all parties involved, we must say that all the main results presented in this paper, including the design of the Basic-2 algorithm, were obtained in a completely independent fashion from any previous work by other authors. It was only during the process of putting our ideas into writing that we came across the articles by Zhou et al., Ge et al. (2006), and Jiang and Gillespie (2007), which, obviously, take precedence and deserve due credit.

The rest of the paper is organized as follows. In Section 2, we review the classic Closest-
Pair algorithm as presented by Bentley and Shamos (1976). In Section 3, we present our detailed
version of the Basic-2 algorithm and give the corresponding proof of correctness. In Section 4, we
present a comparative empirical study between the classic Closest-Pair and the Basic-2 algorithm,
and discuss the experimental results obtained with distinct Minkowski distances.

2 The divide-and-conquer algorithm in the plane


The following algorithm for solving the planar version of the Closest-Pair problem was first pre-
sented by Bentley and Shamos (1976).
Let P be a set of n ≥ 2 points in the XY-plane. The closest pair in P can be found in O(n lg n)
time using the divide-and-conquer algorithm shown in Figure 1.

Closest-Pair(P)
1 Presort the points in P along the x-coordinate.
2 Split the ordered set P into two equal-sized(a) subsets by the vertical line l,
  defined by the equation x = xmedian.
3 Solve the problem recursively in the left and right subsets. This will give the
  left-side and right-side minimal distances dL and dR, respectively.
4 Find the minimal distance dLR among the pairs of points in which one point
  lies to the left of the dividing vertical line and the other point lies to the right.
5 The final answer is the minimum between dL, dR, and dLR.

(a) Give or take (exactly) one point, when the number of points is odd.

Figure 1: Pseudocode for the divide-and-conquer Closest-Pair algorithm, first presented by Bentley and Shamos (1976).

Since we are splitting a set of n points into two sets of n/2 points each, the recurrence relation describing the running time of the CLOSEST-PAIR algorithm is T(n) = 2T(n/2) + f(n), where f(n) is the running time for finding the distance dLR in step 4.
At first sight it seems that on the order of n²/4 distance comparisons would be required to compute dLR. However, Bentley and Shamos (1976) noted that the knowledge of both distances
dL and dR induces a sparsity condition over the set P.
Let δ = min(dL, dR) and consider the vertical slab of width 2δ centered at line l. If there is any pair in P closer than δ, the two points of the pair must lie within the slab, on opposite sides of l. Also, because the minimum separation distance of points on either side of l is δ, any square region of the slab with side 2δ “can contain at most a constant number c of points” (Bentley and Shamos, 1976), depending on the metric used4.
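The constant c can be made concrete with a simple packing argument. The following derivation is ours, given for the Euclidean metric d2, and is not taken from Bentley and Shamos (1976): split each δ × 2δ half of the square into six boxes of size (δ/2) × (2δ/3); each box has diameter

\[
  \sqrt{\left(\tfrac{\delta}{2}\right)^{2} + \left(\tfrac{2\delta}{3}\right)^{2}}
  \;=\; \sqrt{\tfrac{\delta^{2}}{4} + \tfrac{4\delta^{2}}{9}}
  \;=\; \tfrac{5}{6}\,\delta \;<\; \delta ,
\]

so each box can contain at most one point of the corresponding side, giving at most 6 points per side and at most 12 points in the whole 2δ × 2δ square.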
As a consequence of this sparsity condition, if the points in P are presorted by y-coordinate,
the computation of dLR can be done in linear time. Therefore, we obtain the recurrence relation
T(n) = 2T(n/2) + O(n), giving an O(n lg n) asymptotically optimal algorithm.

4 In the original article, Bentley and Shamos (1976) obtained the value c = 12 for the metric they used.
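For concreteness, the procedure of Figure 1 can be sketched in C as follows. This is our own illustration, not the authors' implementation: to keep it short, the slab is re-sorted by y with qsort at every level of the recursion, which yields O(n log² n) instead of the O(n lg n) obtained by maintaining the y-presorting, and only the minimal distance (not the pair itself) is returned.

#include <float.h>
#include <math.h>
#include <stdlib.h>

typedef struct { double x, y; } point;

static double dist(point a, point b) { return hypot(a.x - b.x, a.y - b.y); }

static int cmp_x(const void *p, const void *q)
{
    double d = ((const point *)p)->x - ((const point *)q)->x;
    return (d > 0) - (d < 0);
}

static int cmp_y(const void *p, const void *q)
{
    double d = ((const point *)p)->y - ((const point *)q)->y;
    return (d > 0) - (d < 0);
}

/* Classic divide-and-conquer on an x-sorted array pts[0..n-1]. */
static double closest_rec(const point *pts, size_t n, point *strip)
{
    if (n <= 3) {                       /* small base case: brute force */
        double best = DBL_MAX;
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++) {
                double d = dist(pts[i], pts[j]);
                if (d < best) best = d;
            }
        return best;
    }

    size_t mid = n / 2;
    double xm = pts[mid].x;             /* dividing vertical line l */
    double dL = closest_rec(pts, mid, strip);
    double dR = closest_rec(pts + mid, n - mid, strip);
    double delta = dL < dR ? dL : dR;

    /* Combine step: gather the points of the 2*delta-wide slab around l. */
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (fabs(pts[i].x - xm) < delta)
            strip[k++] = pts[i];
    qsort(strip, k, sizeof *strip, cmp_y);

    /* Each slab point only needs to be checked against the few points that
     * follow it in y-order and are less than delta above it. */
    for (size_t i = 0; i < k; i++)
        for (size_t j = i + 1; j < k && strip[j].y - strip[i].y < delta; j++) {
            double d = dist(strip[i], strip[j]);
            if (d < delta) delta = d;
        }
    return delta;
}

double closest_pair(point *pts, size_t n)
{
    qsort(pts, n, sizeof *pts, cmp_x);        /* step 1: presort along x */
    point *strip = malloc(n * sizeof *pts);   /* scratch buffer for the slab */
    double d = strip ? closest_rec(pts, n, strip) : -1.0;
    free(strip);
    return d;
}

The Basic-2 algorithm of Section 3 replaces the inner slab scan above with the two-comparison “hopscotch” traversal of Figure 3.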











Figure 2: “Hopscotching” ∆-points in ascending order. For each point visited on either side, the BASIC-2.S4 algorithm computes the distance to the two closer, but not lower, points on the opposite side.

3 The Basic-2 algorithm


In this section we discuss a detailed version of the Basic-2 algorithm, which was first presented by Jiang and Gillespie (2007).
The Basic-2 algorithm is an optimized version of the Bentley and Shamos procedure for the planar case discussed in Section 2. In fact, the Basic-2 algorithm is the same as the CLOSEST-PAIR algorithm (see Figure 1), with the sole difference that the computation of the distance dLR in step 4 now requires only two pairwise comparisons per point to find the closest pair within the central slab. The pseudocode for computing the dLR distance in the Basic-2 algorithm is shown in Figure 3.
The time complexity of the BASIC-2.S4 algorithm is clearly O(n), since it traverses the arrays YL and YR once, in a “hopscotch” manner (see Figure 2), and takes constant time on each iteration.
Since we are only interested in performing step 4 of the Closest-Pair algorithm, in the following we assume that xmedian, dL, and dR have already been computed. We also assume that the array Y contains a y-sorted partition of all points in P, i.e., the first and second halves of Y contain the points that lie to the left and to the right of l, respectively, and both halves are sorted along the y-coordinate5.
We denote by ∆ the vertical slab of width 2δ centered at the line x = xmedian, and by l the central line itself.
5 The structure of the array Y may seem difficult to obtain and, therefore, an extra source of complexity in the overall algorithm. However, this is not the case, because the structure arises as a natural consequence of the need to maintain the y-presorting throughout the recursive calls in O(n) time.
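As a side note, the linear-time maintenance mentioned in footnote 5 is essentially the merge step of mergesort applied to the y-coordinate. A minimal sketch, with our own naming and types rather than the authors' code, is:

#include <stdlib.h>

typedef struct { double x, y; } point;

/* Merge two y-sorted halves a[0..na-1] and b[0..nb-1] into out[0..na+nb-1].
 * This is the standard linear-time merge of mergesort, applied to the
 * y-coordinate; it is one way to keep the y-presorting available at every
 * level of the recursion without re-sorting. */
static void merge_by_y(const point *a, size_t na,
                       const point *b, size_t nb, point *out)
{
    size_t i = 0, j = 0, k = 0;
    while (i < na && j < nb)
        out[k++] = (a[i].y <= b[j].y) ? a[i++] : b[j++];
    while (i < na) out[k++] = a[i++];
    while (j < nb) out[k++] = b[j++];
}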

Basic-2.S4(xmedian, dL, dR, Y)
 1  dmin ← min(dL, dR)
 2  YL ← set of y-sorted points in ∆ lying to the left of l
 3  YR ← set of y-sorted points in ∆ lying to the right of l
 4  left ← first[YL]
 5  right ← first[YR]
 6  while there are points in YL and YR
 7      do dist ← distance(left, right)
 8         if dist < dmin
 9            then closestPair ← (left, right)
10                 dmin ← dist
11         if ycoord[left] ≤ ycoord[right]
12            then if there is at least one more point in YR
13                    then dist ← distance(left, next[YR])
14                         if dist < dmin
15                            then closestPair ← (left, next[YR])
16                                 dmin ← dist
17                 left ← next[YL]
18            else if there is at least one more point in YL
19                    then dist ← distance(next[YL], right)
20                         if dist < dmin
21                            then closestPair ← (next[YL], right)
22                                 dmin ← dist
23                 right ← next[YR]
24  return closestPair and dmin

Figure 3: Pseudocode for step 4 of the BASIC-2 algorithm.
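For reference, the pseudocode of Figure 3 translates almost line by line into C. The following sketch is our own illustration under the stated assumptions (the slab points are already split into the y-sorted arrays YL and YR, and the Euclidean distance d2 is used); it is not the authors' released implementation.

#include <math.h>
#include <stddef.h>

typedef struct { double x, y; } point;

/* Euclidean (d2) distance; any Minkowski distance could be used instead. */
static double dist2(point a, point b)
{
    return hypot(a.x - b.x, a.y - b.y);
}

/* Combine step of Basic-2: YL and YR hold the slab points to the left and
 * right of l, both sorted by y. Returns the minimal distance found, starting
 * from dmin = min(dL, dR), and reports the corresponding pair through
 * *pa and *pb (left untouched if no slab pair improves on dmin). */
static double basic2_s4(const point *YL, size_t nL,
                        const point *YR, size_t nR,
                        double dmin, point *pa, point *pb)
{
    size_t i = 0, j = 0;
    while (i < nL && j < nR) {
        double d = dist2(YL[i], YR[j]);
        if (d < dmin) { dmin = d; *pa = YL[i]; *pb = YR[j]; }
        if (YL[i].y <= YR[j].y) {            /* left point is the lower one  */
            if (j + 1 < nR) {                /* second comparison: next[YR]  */
                d = dist2(YL[i], YR[j + 1]);
                if (d < dmin) { dmin = d; *pa = YL[i]; *pb = YR[j + 1]; }
            }
            i++;                             /* advance on the left side     */
        } else {                             /* right point is the lower one */
            if (i + 1 < nL) {                /* second comparison: next[YL]  */
                d = dist2(YL[i + 1], YR[j]);
                if (d < dmin) { dmin = d; *pa = YL[i + 1]; *pb = YR[j]; }
            }
            j++;                             /* advance on the right side    */
        }
    }
    return dmin;
}

If no slab pair improves on dmin, the caller simply falls back to the pair realizing min(dL, dR), matching step 5 of Figure 1.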

Before we show the correctness of the algorithm we first prove the following

Lemma 1. Let dp : R² × R² → R denote the Minkowski p-distance, 1 ≤ p ≤ ∞. Let P0 = (x0, y0) ∈ YL (respectively, YR) be an arbitrary point lying in the central slab ∆, and let Y0 ⊆ YR (respectively, YL) be the array of points that lie opposite and above P0, sorted along the y-coordinate. The closest point to P0, with respect to dp, is either the first or the second element of Y0.
Proof. We first give the proof for the Minkowski distance with p = 1, defined by d1(A, B) = |xA − xB| + |yA − yB|. We assume, without loss of generality, that P0 = (0, 0) ∈ YL and, as a consequence, that Y0 ⊆ YR. Let A = (a, b), with 0 ≤ a ≤ δ and b ≥ 0, be the first point in Y0. We note that, because Y0 is sorted along the y-coordinate, it is sufficient to consider the case where b = 0, since all other cases with b > 0 can be obtained by an upward translation. This translation does not disrupt the relative positions of the elements of Y0 and, therefore, all arguments presented for b = 0 remain valid. So, let A = (a, 0) and let P = (x, y), with 0 ≤ x ≤ δ and y ≥ 0, be any other point in Y0. We must consider three cases (illustrated in Figures 4, 5, and 6).

Figure 4: Possible location for the first three points, A, P, and Q in Case 1. When A is centered it is possible for both A and Q to be the closest points to P0, in Y0. However, this is a limit case.

Case 1: a = δ/2. We find that

    0 ≤ x ≤ δ  ⇔  −δ/2 ≤ x − δ/2 ≤ δ/2  ⇔  |x − δ/2| ≤ δ/2.

On the other hand, from the sparsity of ∆ (A and P both lie to the right of l), we have

    d1(A, P) ≥ δ  ⇔  |x − δ/2| + y ≥ δ  ⇔  y ≥ δ − |x − δ/2| ≥ δ − δ/2 = δ/2.

Therefore,

    d1(P0, P) = x + y ≥ x + δ/2 ≥ δ/2 = d1(P0, A),

which is to say that A is a closest point to P0 in Y0.
We note that, if we take y = δ/2, the first three points in Y0 may have coordinates A = (δ/2, 0), P = (δ, δ/2), and Q = (0, δ/2), respectively. This is the limit case depicted in Figure 4, where A and Q are both closest points to P0 in Y0.

Figure 5: Possible location for the first three points, A, P, and Q in Case 2. When A is close to P0 all other points in Y0 are “pushed” away by the sparsity of ∆.

Case 2: 0 ≤ a < δ/2. We consider two possibilities.

i) Let y ≥ δ/2. Then

    d1(P0, P) = x + y ≥ x + δ/2 ≥ δ/2 > a = d1(P0, A).

ii) Let y ≤ δ/2. Then, from the sparsity of ∆,

    d1(A, P) ≥ δ  ⇔  |x − a| + y ≥ δ
               ⇔  |x − a| ≥ δ − y ≥ δ − δ/2 = δ/2
               ⇒  x − a ≥ δ/2  ∨  x − a ≤ −δ/2
               ⇔  x ≥ a + δ/2  ∨  x ≤ a − δ/2 < 0
               ⇒  x ≥ a + δ/2,

   since x ≤ a − δ/2 < 0 contradicts x ≥ 0. Therefore,

    d1(P0, P) = x + y ≥ a + δ/2 + y ≥ δ/2 > a = d1(P0, A).

Considering i) and ii), we conclude that A is the closest point to P0 in Y0.

Figure 6: Possible location for the first three points, A, P, and Q in Case 3. When A lies further from P0 it is possible for another point to be closer to P0. However, this point is necessarily the next lowest point, which is to say, this point is the second element in Y0.

Case 3: δ/2 < a ≤ δ. We must consider two possibilities.

i) Let x ≥ a. Then

    d1(P0, P) = x + y ≥ a + y ≥ a = d1(P0, A),

   and, as in the previous cases, A is the closest point to P0 in Y0.

ii) Let x < a. Then, from the sparsity of ∆,

    d1(A, P) ≥ δ  ⇔  a − x + y ≥ δ  ⇔  y ≥ δ − a + x.

   This means that it is possible to have at least one point P = (x, y) ∈ Y0 such that
   d1(P0, P) = x + y < a = d1(P0, A), with 0 ≤ x < a and δ − a + x ≤ y < a − x. Let
   Q = (x1, y1) ∈ Y0, and assume that y1 ≤ y (i.e., Q precedes P in Y0). We know that

    d1(A, Q) ≥ δ  ⇔  |x1 − a| + y1 ≥ δ  ⇔  x1 ≥ a + δ − y1  ∨  x1 ≤ a − δ + y1.    (1)

   From the first inequality of (1), and using x + y < a, we find that

    x1 ≥ a + δ − y1 ≥ a + δ − y > x + y + δ − y = δ + x ≥ δ,

   and this means that, in this case, for Q to precede P in Y0 it would have to lie outside the
   ∆ slab, which is a contradiction.
   The second inequality of (1) can hold only if we choose δ − a ≤ y1 ≤ y, so as to guarantee
   that 0 ≤ x1 ≤ y1 − (δ − a). However, for this choice of the coordinates of Q, and from the
   sparsity of ∆, we get

    d1(P, Q) ≥ δ  ⇔  |x1 − x| + y − y1 ≥ δ
               ⇔  x1 − x ≥ δ − (y − y1)  ∨  x1 − x ≤ −δ + (y − y1)
               ⇔  x1 ≥ x + δ + y1 − y  ∨  x1 ≤ (x + y) − δ − y1
               ⇒  x1 > x + δ + y1 − (a − x)  ∨  x1 < a − δ − y1       (using x + y < a, i.e., y < a − x)
               ⇒  x1 > y1 + a − δ  ∨  x1 < a − δ ≤ 0.                 (using x ≥ 0, y1 ≥ 0, and a ≤ δ)    (2)

   The first inequality of (2) contradicts our current working hypothesis (i.e., the second
   inequality of (1)), and the second inequality of (2) implies that Q ∉ Y0.
   So, if there is one point P ∈ Y0 closer to P0 than point A, then no other point but A can
   precede P in Y0. Now, suppose that there is another point Q ∈ Y0 that is also closer to P0
   than point A. This means that, as with P, no other point can precede Q in Y0. However,
   since both P and Q are in Y0, one must precede the other, which is a contradiction.
   Therefore, in this case, the only point possibly closer to P0 than point A is the second
   element of Y0.

From all the previous three cases we may conclude that the closest point to P0 , in Y0 , is either
the first or the second element of Y0 . This proves Lemma 1 for the Minkowski distance d1 .

Figure 7: Unit circles for various Minkowski p-distances.

To obtain the proof for all other Minkowski distances dp, p > 1, we take into account the fact that the convex neighborhoods generated by the Minkowski distances possess a very straightforward order relation, in which larger values of p correspond to larger unit circles, as shown in Figure 7. This ordering means that the sparsity effect within the ∆ slab is similar, but somewhat stronger, for larger values of p. Therefore, the preceding analysis of d1 not only remains valid for p > 1 but, in a sense, the corresponding geometric relations between the elements of Y0 are expected to be even “tighter” for all other Minkowski distances.
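The order relation invoked here can be stated explicitly. The following is the standard monotonicity of p-norms, written in our own notation rather than quoted from the paper:

% For a fixed pair of points A, B in the plane, the Minkowski distance is
% non-increasing in p, so the unit circle of d_p is contained in that of d_q
% whenever p <= q (this is what Figure 7 depicts).
\[
  1 \le p \le q \le \infty
  \;\Longrightarrow\;
  d_q(A,B) \,\le\, d_p(A,B)
  \qquad \text{for all } A, B \in \mathbb{R}^2 .
\]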
The preceding Lemma 1 establishes an upper bound of two on the number of pairwise comparisons that need to be computed for each point in the ∆ slab. Also, the analysis made in Case 3 of the corresponding proof shows that this bound is tight, i.e., we have established the following

Corollary 2. For each point in the central slab ∆, the minimum number of pairwise comparisons required to compute dLR in the divide-and-conquer CLOSEST-PAIR algorithm is two.

Correctness
To prove the correctness of the BASIC-2.S4 algorithm we consider the following
Loop Invariant. At the start of each iteration of the main loop, left and right reference the points of YL and YR, respectively, with the minimum y-coordinates among those that still need to be compared with points on the opposite side. Also, dmin corresponds to the value of the minimum distance found among all pairs of points previously checked.
We show that all three properties Initialization, Maintenance, and Termination hold for this
loop invariant.
Initialization At the start of the first iteration, the loop invariant holds since left and right reference the first elements of YL and YR, and both arrays are y-sorted in ascending order (by construction). Also, no pair with points on opposite sides of line l has been checked yet, so the current minimum distance is the minimum between the left-side and right-side minimal distances dL and dR, respectively. This value is stored in dmin.
Maintenance Assuming that the loop invariant holds for all previous iterations, we now enter the next iteration. The first thing the loop does is compute the distance between the points referenced by left and right and, if appropriate, update the value of dmin. Next, the loop determines which of the two points has the smaller y-coordinate. Let us assume, without loss of generality, that in this iteration left is the lower point. Since left ∈ YL, the algorithm checks whether there is at least one more point in YR (denoted by next[YR]) following the point right. If there is such a point, the algorithm computes the distance between left and next[YR], and updates the value of dmin accordingly.
By our hypothesis, right and next[YR] are the points on the right-hand side, still to be compared, that have the smallest y-coordinates not smaller than left's y-coordinate. Therefore, taking Lemma 1 into account, we conclude that the value of dmin corresponds to the minimum between the previous minimal distance and the minimum distance computed over all pairs of points that contain the point left.
The iteration ends by advancing the reference left to the next point in YL, which means that the original left point will no longer be available for comparison. Note also that the right reference remains the same, so that the corresponding point will still be compared with other points in future iterations (see Figure 8). We may conclude that the new pair of references left and right, and the new value dmin, still satisfy the loop invariant.
Termination The loop ends when one of the references, left or right, reaches the end of the corresponding array, YL or YR, respectively. Let us assume, without loss of generality, that the left reference reaches the end of the array YL and terminates the loop. This means that it was the left reference that was advanced in the last iteration and, therefore, that it referenced the lowest point. Accordingly, the loop computed the distances between left and the two closer, but not lower, points in YR, and updated dmin. As a consequence, all remaining unchecked pairs of points are composed of the point left and points that belong to the array YR and lie in higher, more distant positions. Therefore, we have computed all distances between pairs of opposite points that may lie at a distance smaller than the current minimal distance, and so the value of dmin corresponds to the minimal distance between all pairs of points in ∆.

Figure 8: Some iterations of the main loop in BASIC-2.S4. (a) Compute the distance between left and right, the first elements of YL and YR, respectively. (b)-(c) Point left is lower than point right, so compute the distance between left and the next element in YR. Update left to the next element in YL. Compute the distance between left and right. (d)-(e) Point right is lower than point left, so compute the distance between right and the next element in YL. Update right to the next element in YR. Compute the distance between left and right.

The BASIC-2.S4 algorithm is correct and, therefore, the Basic-2 algorithm is also correct, for any Minkowski distance with p ≥ 1.
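As a practical aside (our suggestion, not part of the authors' experimental protocol), the correctness of an implementation is easy to cross-check on small random inputs against an O(n²) brute-force scan such as the following sketch:

#include <float.h>
#include <math.h>
#include <stddef.h>

typedef struct { double x, y; } point;

/* O(n^2) reference: returns the minimal pairwise d2 distance in pts[0..n-1]
 * (DBL_MAX if n < 2). Useful as an oracle when testing Basic-2 or Basic-7
 * on small inputs. */
static double brute_force_closest(const point *pts, size_t n)
{
    double best = DBL_MAX;
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = i + 1; j < n; j++) {
            double d = hypot(pts[i].x - pts[j].x, pts[i].y - pts[j].y);
            if (d < best)
                best = d;
        }
    return best;
}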

4 Empirical Time Analysis


The Basic-2 algorithm has been implemented and tested with randomly generated inputs, starting with 125 thousand points and doubling the input size up to 16 million points. For each input size, 50 independent executions were performed. For each generated input, we applied the Basic-2 algorithm as well as the standard divide-and-conquer algorithm, described in Cormen et al. (2001), which computes seven pairwise comparisons for each point in the central slab. We refer to the standard algorithm as Basic-7, following the naming convention presented in Jiang and Gillespie (2007).
Both algorithms were implemented in the C programming language. The source code is available at http://w3.ualg.pt/~flobo/closest-pair/.
The recursion stopped for subproblems with a number of points less than or equal to 10. The algorithms were tested with the Minkowski distances d1, d2, d3.1415, and d∞. For each independent run, we measured the execution time of the entire algorithm. The results are summarized in Figure 9.

Figure 9: Running time ratios of Basic-2 over Basic-7 for various Minkowski distances.

As expected, and in accordance with the results obtained by Jiang and Gillespie (2007), our simulation shows that Basic-2 is faster than the standard divide-and-conquer algorithm, Basic-7. Although BASIC-2.S4 introduces a few extra relational comparisons (see the pseudocode in Figure 3), these are negligible compared to the savings that result from the reduction in the total number of distance function calls.
The Basic-2 algorithm is about 20% faster than the standard divide-and-conquer algorithm for the larger input sizes when using the Minkowski distances d1 and d∞. The speedup is more pronounced when using the Minkowski distance d2, with Basic-2 being nearly 36% faster for the larger input sizes. The reason for the greater speedup with d2 is that the distance function is more expensive to compute in this case, and therefore savings in distance function calls have a greater effect on the overall execution time of the algorithm. This is fully confirmed by the results obtained with the somewhat exotic distance d3.1415: due to the non-integer value of p, the cost of computing this distance function is highly inflated, and so it is possible to observe speedups of over 100% for the Basic-2 algorithm.
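To illustrate why the metric matters for the running time, a general Minkowski distance routine might look like the sketch below. This is our own code, not the distance functions actually used in the experiments; the non-integer case pays for two pow calls plus a final root.

#include <math.h>

typedef struct { double x, y; } point;

/* General Minkowski p-distance in the plane. For p = 1 and p = infinity the
 * computation reduces to additions and comparisons; for p = 2 a square root
 * (hypot) is needed; for non-integer p, such as 3.1415, the two pow calls and
 * the final root make the function considerably more expensive, which is
 * consistent with the larger speedups reported for d_3.1415. */
static double minkowski(point a, point b, double p)
{
    double dx = fabs(a.x - b.x), dy = fabs(a.y - b.y);
    if (isinf(p))
        return dx > dy ? dx : dy;            /* d_infinity */
    if (p == 1.0)
        return dx + dy;                      /* d_1 */
    if (p == 2.0)
        return hypot(dx, dy);                /* d_2 */
    return pow(pow(dx, p) + pow(dy, p), 1.0 / p);
}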

5 Final Remarks
In this paper we analyzed the Basic-2 algorithm, which is an optimized version of the Bentley and
Shamos procedure for the planar case where the computation of the distance dLR requires only
two pairwise comparisons per point to find the closest pair within the central slab. The Basic-2
algorithm was first presented by Jiang and Gillespie (2007).
In Corollary 2 we proved that two comparisons per point are the minimum for the divide-and-conquer CLOSEST-PAIR algorithm, for all Minkowski distances with p ≥ 1. This result is a direct consequence of the strength of the sparsity condition, which is induced over the set of points in the plane by the knowledge of dL and dR.
We consider the generalization of Corollary 2 to higher dimensions (in particular, to 3D space) to be of interest, not only for possible applications but also for the theoretical significance of such an achievement. However, we note that, even in the 3D case, this procedure may yield smaller efficiency gains, because of the lack of a natural order relation among the points lying in the central slab and the consequent increase in the problem's complexity.

References
Bentley, J. L. (1980), Communications of the ACM 23(4), 214.

Bentley, J. L. and Shamos, M. I. (1976), In STOC '76: Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, pp. 220–230, New York, NY, USA: ACM.

Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. (2001), Introduction to Algorithms, MIT Press, 2nd edition.

Ge, Q., Wang, H., and Zhu, H. (2006), Journal of Computer Science and Technology 21(1), 27.

Jiang, M. and Gillespie, J. (2007), Journal of Computer Science and Technology 22(4), 532.

Kleinberg, J. and Tardos, E. (2005), Algorithm Design, Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc.

Shamos, M. I. (1975), In STOC '75: Proceedings of the Seventh Annual ACM Symposium on Theory of Computing, pp. 224–233, New York, NY, USA: ACM.

Smid, M. (2000), In Handbook of Computational Geometry (J.-R. Sack and J. Urrutia, eds.), pp. 877–935, Amsterdam: Elsevier Science.
