APPLIED COMPUTATIONAL GEOMETRY:
TOWARDS ROBUST SOLUTIONS OF BASIC PROBLEMS
David Dobkin
Deborah Silver
CS-TR-192-88
December 1988

Applied Computational Geometry:
Towards Robust Solutions of Basic Problems *
David Dobkin
Department of Computer Science
Princeton University
Princeton, NJ 08544
Deborah Silver
Department of Electrical and Computer Engineering
Rutgers University
Piscataway, NJ 08904
Abstract: Geometric computations, like all numerical procedures, are extremely prone to roundoff error. However, virtually none of the numerical analysis literature directly applies to geometric calculations. Even for line intersection, the most basic geometric operation, there is no robust and efficient algorithm. Compounding the difficulties, many geometric algorithms perform iterations of calculations reusing previously computed data. In this paper, we explore some of the main issues in geometric computations and the methods that have been proposed to handle roundoff errors. In particular, we focus on one method and apply it to a general iterative intersection problem. Our initial results seem promising and will hopefully lead to robust solutions for more complex problems of applied computational geometry.
1. Introduction & Motivation
As algorithmic techniques in computational geometry and graphics algorithms mature, attention is focussed on the problem of technology transfer. The goal is to determine which theoretically fast algorithms actually work well in practice and to find methods of turning the efficient into the practical. Most models of computation assume that arithmetic is done flawlessly. This is precisely expressed by the following quotation:

"As is the rule in computational geometry problems with discrete output, we assume all the computations are performed with exact (infinite-precision) arithmetic. Without this assumption it is virtually impossible to prove the correctness of any geometric algorithms." [Mair87a]
Unfortunately, that assumption is seldom valid in the real world. Roundoff error plagues all computation intensive procedures and geometric algorithms are no exceptions. Thus, the central problem of the technology transfer lies in the generation of fast algorithms which are robust. The definition of robust, according to Webster's dictionary, is "full of health and strength; vigorous; hardy", and that is exactly what should be expected from any numerical algorithm. Basically, the computed output should be verifiably correct for all cases. "Correct" here is a relative term depending upon the application. For graphical output, anywhere from 3-12 significant digits of precision may suffice, whereas for other areas, more may be needed. Having a program compute twenty digits of precision where only three are needed is overly costly and time consuming. Verification of output is equally important. How many significant digits are in the result, or how much error has accumulated in the computed values? If a good error estimate can be calculated easily, then a program can target its operations to a user specified end precision. Needless to say, the program should handle all cases and, if it is unable to compute an answer, should inform the user instead of generating a random answer, dumping core, or causing infinite looping.

* This work was supported in part by a National Science Foundation grant CCR-87-00917 and by a Hewlett-Packard Faculty Development Grant to the second author.
However, this is not a trivial issue. Forrest argues that there are no robust algorithms for even the simple and basic problem of line segment intersection [Forr87a]. The recent flurry of activity on this problem confirms his belief. Knott and Jou [Knot87a] give methods for robustly determining if two line segments intersect and for computing their intersection, and there are cases where robustness costs as much as a factor of 100 in speed! Compounding the difficulties, many graphics and geometric algorithms perform iterations of calculations reusing computed results as input for subsequent calculations. Not only must each individual computation be robust, but the whole series of calculations must be robust as well. For example, in several hidden line-surface elimination algorithms, polygon intersection is performed by calculating the intersection of the computed intersection of various polygons with other polygons. These cascading calculations suffer from roundoff error as well as from computing with inexact data as the calculations progress (propagation error). For instance, the calculated point of intersection of two line segments may be used as a vertex of another line segment. Since this vertex is "rounded" and not "exact", the next series of calculations involving this point cannot be "exact", not necessarily because of the roundoff error generated from this particular set of calculations, but because the data is wrong. The "real" endpoint may be above, below, or to the side of the calculated one, so that the calculated line is a shifted version of the "real" one, causing any further computations with that line segment to be off no matter how exact the arithmetic functions are (of course, if all the calculations were precise this problem would not exist). As the calculations progress, the line segments are continually shifted, and the final results may be nowhere near the true results. Ultimately, these errors become apparent by producing visible glitches in picture outputs or causing program failures when the computed topology becomes inconsistent with the underlying geometry. [Rams82a, Sega85a, Mile86a]
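To make the cascading effect concrete, the following sketch (our own illustration, not code from any of the cited systems) intersects two segments in double precision and then reuses the computed, rounded point as an endpoint of a new segment; the names `Point`, `Segment`, and `intersect` are hypothetical.

    #include <stdio.h>

    typedef struct { double x, y; } Point;
    typedef struct { Point a, b; } Segment;

    /* Intersection of the infinite lines through s and t, by Cramer's rule.
       Returns 0 if the lines are parallel (no unique intersection point). */
    static int intersect(Segment s, Segment t, Point *out)
    {
        double d1x = s.b.x - s.a.x, d1y = s.b.y - s.a.y;
        double d2x = t.b.x - t.a.x, d2y = t.b.y - t.a.y;
        double det = d1x * d2y - d1y * d2x;
        if (det == 0.0) return 0;
        double u = ((t.a.x - s.a.x) * d2y - (t.a.y - s.a.y) * d2x) / det;
        out->x = s.a.x + u * d1x;   /* rounded, not exact */
        out->y = s.a.y + u * d1y;
        return 1;
    }

    int main(void)
    {
        Segment s = {{0.1, 0.1}, {0.9, 0.7}};
        Segment t = {{0.1, 0.8}, {0.8, 0.1}};
        Point p;
        if (intersect(s, t, &p)) {
            /* The rounded point p now becomes a vertex of the next segment,
               so every later intersection inherits its error. */
            Segment next = {p, {0.0, 0.0}};
            printf("cascaded endpoint: (%.17g, %.17g)\n", next.a.x, next.a.y);
        }
        return 0;
    }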
This is similar to the following problem, which we consider in our paper:

Suppose we are given a set of line segments along with a series of computations to be done on these segments. This computation will involve creating new segments having endpoints which are intersections of existing line segments. An additional part of the input is a specification of the precision to which the original inputs are known and the precision desired for the final output.
Our model of computation assumes that calculations can be done at any precision but there is a cost function dependent upon the precision of the computation. Furthermore, since the computation tree is known, backtracking is permitted in order to achieve greater precision. The cost of this backup is defined as the additional cost to redo the computations at the higher precision added to the cost of the computation already done. Finally, there is no advantage to achieving extra precision; however, a computation is deemed to be unacceptable if it does not achieve the desired precision. Basically, we envision three processes: one that does the actual calculations; a second to record the history of the computations; and a third to determine the precision and set the appropriate flags when necessary.

We claim that this is a valid model for hidden surface elimination and many other computations in computer graphics (ray tracing, CAD-CAM). Indeed, our attention was focussed on this problem because of our frustration with ad hoc methods being used to achieve desired precision in hidden surface routines we were writing as part of our graphics efforts. There are two versions of the problem stated above. In one, the entire computation tree is known in advance, and for the other, the computation tree is determined as the computing evolves. In what follows, we focus on the first, which is the simpler of the two.
In this article, an initial attempt at approaching roundoff issues in cascading geometric computation is presented. The organization of this paper is as follows. The second section presents a brief review of some existing methods dealing with roundoff error in geometric computations, and a description of a sample geometric problem with one of the methods singled out for its applicability to this problem. The third, fourth, and fifth sections contain the application, analysis, and conclusion. The results of this work are fourfold. First, we have explored the various approaches to the issue of robustness in general and have demonstrated a method of computing precision in an ongoing geometric computation. We have also analyzed the cost of backtracking and means of avoiding it. Third, we have proposed an empirical solution for a cascaded line intersection procedure. And lastly, we have presented insights into the problem which will hopefully spur additional research and applications.
2. Literature Survey
The most widely applied solution to the problem of roundoff error is the ad hoc approach: calling the local guru to pull a fix out of his/her magic box. This usually entails arbitrarily increasing precision, reordering calculations, tweaking specific numbers, or arbitrarily selecting epsilon values, and in most instances, will only solve a set of problems temporarily and does not attack the underlying cause of the roundoff error. Needless to say, this approach is far from robust and consistent.
Line intersection calculation can be viewed as being either of geometric or of numeric flavor, and the attempts at coping with the roundoff error problem have taken one of these two approaches. The geometric flavored solutions strive to maintain correct topological information using finite precision. This is accomplished with special functions and data structures to keep the geometric objects in a consistent state. To overcome floating point error, epsilon procedures are used to handle the ambiguous cases and to keep the objects "far enough" apart. Milenkovic [Mile86a] proposed two methods for verifiable implementations of geometric algorithms using finite precision. The first is data normalization, which alters the objects by vertex shifting and edge cracking to maintain a distance of at least ε (determined by machine roundoff error) between the geometric structures. The second method is called the hidden variable method, which constructs configurations of objects that belong in an infinite precision domain, without actually representing these infinite precision objects, by modeling approximations to geometric lines with monotonic curves. Segal and Sequin's [Sega88a] method introduces a minimum feature size and face thickness to objects and then either merges or pulls apart those objects that lie within the minimum feature size of each other. Hoffmann, Hopcroft, and Karasick [Hoff88a] add symbolic reasoning to compensate for numerical uncertainties when performing set operations on polyhedral solids. Related to line intersection, Ramshaw [Rams82a] shows how floating point line segments can appear to "braid" by intersecting each other more than once. To correct this, Greene and Yao [Gree86a] transform geometric objects from the continuous domain to the discrete domain and perform all the calculations in the discrete domain. Line segments are treated as a set of raster points (these are the points used by line drawing algorithms) and the line is the shortest path within this envelope of points. The line-path is controlled with hooks which serve to direct the line to pass through specified grid points in order to ensure that it will intersect certain lines while not crossing others.
The numeric flavored solutions consider line intersection primarily as a set of numerical calculations, as opposed to operations on geometric objects, and borrow from classical numerical analysis, i.e. roundoff-error analysis. Pioneered by Wilkinson [Wilk63a], this approach involves forward and backward error analysis and determination of condition numbers for a particular set of calculations. The condition numbers can alert the programmer or user to possible bad sets of data or unstable algorithms [Mill80a]. A by-product of the condition numbers are some common-sense issues, such as reordering calculations to avoid "undesirable" calculations (adding together very large and small numbers, subtractive cancellation, etc.). Although this type of analysis is basic to a first approach at combatting roundoff error, it is not helpful in dealing with the buildup of unavoidable roundoff error. Unfortunately, we know of no numerical analysis literature regarding the accumulation of roundoff error in cascading processes such as we consider.

Many tackle the difficulty of roundoff error by proposing modified floating point systems. Kulisch and Miranker introduce a dot product function that performs the dot product of two vectors rounding only at the end instead of after each individual multiplication and addition [Kuli81a]. Ottmann, Thiemt, and Ullrich [Ottm87a] show how to implement "stable" geometric primitives with this dot product function. A popular approach is interval arithmetic [Moor66a], which treats a rounded real number as an interval between its two bounding representable real numbers; calculations are performed on this interval, widening the resulting interval as necessary. Madur and Koparkar [Madu84a] apply interval arithmetic to the processing of geometric objects and attempt to narrow the computed interval of some common geometric procedures. In their work, geometric functions are defined with interval computations and new algorithms are devised using these functions for such tasks as curve drawing, surface shading, and intersection detection. Knott and Jou [Knot87a] also use interval arithmetic to determine correctly whether two line segments intersect and, if that fails, resort to multiple-precision floating-point arithmetic. Another method is that of Vignes and La Porte [Vign74a], which takes a stochastic approach to evaluating the number of significant digits in a computed result. Their method generates a subset of all the possible computable results of a function and uses that subset to determine properties of the entire set. Attempting to avoid floating point computations altogether (which seems to be widely recommended advice), additional proposals include systems based upon rational arithmetic and symbolic computation. In addition, different floating point systems (implemented in hardware) vary in their performance with regard to roundoff error, and there has been work documenting those differences [Kaha88a, 85a]. (For a more comprehensive survey see [Hoff88b, Silv88a].)
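As a small illustration of why intervals widen, the following sketch (our own, not taken from [Moor66a] or any of the cited packages) implements interval addition and multiplication; outward rounding is simulated here with nextafter(), whereas a real package would change the hardware rounding mode.

    #include <math.h>
    #include <stdio.h>

    typedef struct { double lo, hi; } Interval;

    /* Widen the endpoints outward by one representable number each. */
    static Interval widen(double lo, double hi)
    {
        Interval r = { nextafter(lo, -INFINITY), nextafter(hi, INFINITY) };
        return r;
    }

    static Interval iadd(Interval x, Interval y)
    {
        return widen(x.lo + y.lo, x.hi + y.hi);
    }

    static Interval imul(Interval x, Interval y)
    {
        double p[4] = { x.lo*y.lo, x.lo*y.hi, x.hi*y.lo, x.hi*y.hi };
        double lo = p[0], hi = p[0];
        for (int i = 1; i < 4; i++) {
            if (p[i] < lo) lo = p[i];
            if (p[i] > hi) hi = p[i];
        }
        return widen(lo, hi);
    }

    int main(void)
    {
        Interval x = { 0.3333333333333333, 0.3333333333333334 };
        Interval y = x;
        for (int i = 0; i < 20; i++)   /* every operation widens the interval */
            y = imul(iadd(y, x), x);
        printf("interval width after 20 steps: %g\n", y.hi - y.lo);
        return 0;
    }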
Although useful in various situations, the aforementioned methods are of necessity flawed when applied to cascading intersection calculations. The geometric solutions are hard to implement and not directly applicable to this problem. The numerical solutions are equally fraught with difficulties. Rational arithmetic and symbolic computations are slow, and the numerators and denominators (represented by numbers or symbols) grow very large very fast. Condition numbers do not give an accurate description of the exact accumulation of errors as the iterations increase, in addition to being difficult to calculate if all the cascading iterations are taken into consideration. Interval arithmetic gives overly pessimistic results since the intervals grow much larger as the computations progress. While the geometric objects being operated on in cascading intersections generally become smaller, the intervals become bigger, causing in many instances the intervals to be larger than the objects being manipulated (techniques for narrowing the computed intervals must be used to obtain satisfactory results).
2.1. The Pentagon Problem
Before attempting to fully analyze a proposed solution, it is necessary to precisely formulate a particular problem as a testbed for an accurate assessment of a possible approach. However, in most geometric algorithms the results of a computation are not known in advance and can only be checked by more numerical computations (making the testing suspect) or by viewing the results (making the viewer suspect). In our work, we have used the pentagon problem for experimentation. Although it is not a very practical problem, it captures the essence of cascaded intersections while enabling accurate testing of final and intermediate results. Furthermore, the pentagon problem has a simple structure and so can be easily studied, yet displays the unstable behavior of related (but more complex) iterative algorithms, especially those that are geometric in nature.
The pentagon problem involves taking a pentagon stored as a set of five vertices (ten floating point numbers) and iterating in and out a certain number of times to get back to the original pentagon. The in iteration computes the intersection of the pentagon's diagonals, resulting in a smaller inverted pentagon. The operation can be repeated on the "new" pentagon to get an even smaller pentagon. The inverse operation, the out iteration, projects alternate sides of the pentagon and finds the intersection point which is just a vertex of the larger pentagon (see Figure 1). In each iteration following the first, the data used are those calculated by the previous iteration. An iteration in and then out is an identity function; therefore, after an equal amount of ins and outs the differences between the computed pentagon and the original pentagon can be determined. Owing to roundoff error in finite precision arithmetic, the computed vertices differ from the original vertices after a number of iterations in and out. Sometimes it is impossible to maintain any precision in the calculated data.
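One plausible reading of the in iteration, expressed as code: each diagonal (v[i], v[i+2]) of a convex pentagon is intersected with the next diagonal in cyclic order, and the five computed points become the vertices of the smaller, inverted pentagon. The sketch below reuses the hypothetical `intersect`, `Point`, and `Segment` definitions from the earlier sketch; the exact pairing of diagonals is our assumption, not spelled out in the text.

    /* One "in" iteration: intersect consecutive diagonals of a convex
       pentagon.  Each new vertex is a computed (rounded) point that will
       feed all subsequent iterations. */
    static void iterate_in(const Point v[5], Point out[5])
    {
        for (int i = 0; i < 5; i++) {
            Segment d1 = { v[i],           v[(i + 2) % 5] };
            Segment d2 = { v[(i + 1) % 5], v[(i + 3) % 5] };
            intersect(d1, d2, &out[i]);
        }
    }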
The key to the difficulty in the pentagon problem, as well as some other geometric algorithms, lies in the fact that the entire set of iterations must be considered one unit, although the exact series of ins and outs may not be known in advance. Namely, an accurate assessment of previously generated error must be tracked and worked into the calculations to determine the error accumulated at a particular level. Many of the mentioned methods aim towards the individual iterations and do not easily accommodate cascading calculations without being overly pessimistic. However, one method that does enable easy and accurate estimation of error generated during compounded calculations is the Permutation-Perturbation method of Vignes and La Porte [Vign74a]. In what follows, we describe their method in detail, and discuss its application to the problem of cascading intersections with emphasis on efficiency, accuracy, and their tradeoffs.

Figure 1: the in and out iterations
2.2. The Permutation-Perturbation Method
Because of perturbation and permutation, the results of a set of mathematical computations performed on a computer are not unique. Perturbation refers to the rounding of a computed value up or down when being assigned to a variable. Therefore, any arithmetic computer operation can have one of two valid answers (one by lack, the other by excess); if an algorithm has k operations, there are a possible 2^k values. Since computer operations are not associative, rearranging the arithmetic operations in an algorithm may generate different results; this is known as permutation. [1] Let P_n be the total number of different possible permutations of the operators in a particular algorithm. When permutation and perturbation are applied in all possible combinations, the total set, R, of different computable solutions to a particular function, can be derived. R is of size 2^k * P_n.
The number of significant digits of a computed value can be determined by [2]

    s = log10 | x_e / (x - x_e) |

where s is the number of significant digits, x is the computed value, and x_e is the "exact" result. This quantity is generally expressed with the absolute error (x - x_e) divided by the computed value x instead of x_e, which is valid when x is a reasonable approximation (see below).
Vignes and La Porte use this formula to determine the precision of a computed value. However, they attempt to estimate the error, since the exact error may not be known at computation time. The formula used replaces the exact result with R̄, the mean of the population R, and the error with σ, its standard deviation, giving s = log10 | R̄ / σ |. Both R̄ and σ can be evaluated probabilistically by drawing samples from R. (For more complete details see [Vign78a, Faye85a, Vign74a].)
[1] For example, there are four numbers which, when added left to right using four digit arithmetic, give the exact sum .6755x10^1, yet when added from right to left produce .7000x10^1. (This example also illustrates subtractive cancellation.) [Vand78a]
[2] This is the formula used for computing the relative error.

This analysis rests upon two hypotheses: 1) that R̄ equals r (the exact result) to within an error smaller than, or at worst comparable with, σ (see [Mail79a]), and 2) that R is better approximated by a continuous distribution than by a small discrete population. The first implication can be false on those rare occasions when the computation in question comes as close to a singularity as it can without actual collision. The second may cause problems when only a few among the many rounding errors contribute the bulk of the error in the final result (undersampling R) [Kaha88b]. However, for the majority of cases, this method is likely to give a fair indication of a computation's typical accuracy.
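A minimal sketch of how such an estimate can be obtained in practice, assuming the perturbation is simulated by randomly pushing each assigned value to one of its two neighbouring representable numbers; the function names and the sampling scheme below are ours, not the authors' code.

    #include <math.h>
    #include <stdlib.h>

    /* Randomly perturb a computed value to one of its two neighbouring
       representable numbers, simulating rounding "by lack" or "by excess". */
    static double perturb(double x)
    {
        return nextafter(x, (rand() & 1) ? INFINITY : -INFINITY);
    }

    /* Estimate the number of significant decimal digits of a computation
       from n perturbed/permuted samples r[0..n-1]:  s = log10(|mean|/sigma). */
    static double significant_digits(const double *r, int n)
    {
        double mean = 0.0, var = 0.0;
        for (int i = 0; i < n; i++) mean += r[i];
        mean /= n;
        for (int i = 0; i < n; i++) var += (r[i] - mean) * (r[i] - mean);
        double sigma = sqrt(var / (n - 1));
        if (sigma == 0.0) return 15.0;   /* all samples agree: full double precision */
        return log10(fabs(mean) / sigma);
    }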
3. Application
The application of the Permutation-Perturbation method to the pentagon problem was straightforward. On each iteration, all intersection points were calculated three to four times using different permutations/perturbations of the intersection code (care must be taken when performing permutations), and the number of significant digits of the average was computed with the formula of La Porte and Vignes (see above). This was done for all five pentagon vertices (ten values - five x and five y) and the average of the number of significant digits of the vertices was plotted against the iteration number; the resulting curve represented the decline (or increase) of significant digits in the vertices as the iterations progressed (see Figure 2).
(Note: A typical run of our hidden surface elimination program with 20,000 triangles involves over 200,000 iterations similar to those in the pentagon problem. At each iteration as many as four new triangles can be created. Some of the triangles go through many more than ten such iterations, so the in-10 out-10 scheme is possibly overly conservative.) Different combinations of iterations were performed: in ten then out ten; out-10 in-10; in-5 out-5 in-5 out-5; out-5 in-5; etc.
Normally, the best computed result of an iteration was the average of the three (four) calculated results of the different permutations of the intersection code [Mail79a]. However, this average value was not used as input data for all the intersection calculations of the next iteration. Instead, the three results of the previous iteration were stored and used in the next iteration (each different permutation used one of the results). This is equivalent to performing all the iterations as one computation but stopping it along the way for precision determination, thereby enabling the Permutation-Perturbation algorithm to artificially keep track of the calculations done thus far and use that "knowledge" in the significant digit calculation at any level.
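A sketch of that bookkeeping under our reading of the scheme: each permutation keeps its own running copy of the pentagon, and the spread across the copies feeds the significant-digit estimate at every level. `iterate_in_permuted` stands for a reordered variant of the intersection code and, like the other names, is hypothetical; `Point` and `significant_digits` are from the earlier sketches.

    #define NPERM 3   /* number of permutations/perturbations carried along */

    typedef struct { Point v[5]; } Pentagon;

    /* Hypothetical: variant k of the in iteration, with its arithmetic
       reordered/perturbed differently for each k. */
    void iterate_in_permuted(const Point in[5], Point out[5], int k);

    /* One monitored iteration: each permutation advances its own copy, so
       the spread across copies reflects all error accumulated so far.      */
    static double monitored_in_step(Pentagon copy[NPERM])
    {
        Pentagon next[NPERM];
        double digits = 0.0;
        for (int k = 0; k < NPERM; k++)
            iterate_in_permuted(copy[k].v, next[k].v, k);
        for (int i = 0; i < 5; i++) {
            double xs[NPERM];
            for (int k = 0; k < NPERM; k++) xs[k] = next[k].v[i].x;
            digits += significant_digits(xs, NPERM);  /* x coordinate only, for brevity */
        }
        for (int k = 0; k < NPERM; k++) copy[k] = next[k];
        return digits / 5.0;   /* average significant digits over the vertices */
    }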
After all the iterations were completed, the error incurred during the computations was calculated (since the original pentagon was given). The exact error was then compared with the computed predicted error to analyze the performance of the Permutation-Perturbation method. Fortunately, the Permutation-Perturbation method proved to be an accurate predictor of roundoff error buildup in the calculated results and was within one digit of the actual error (see Figure 2).
4. Analysis of Results (for the in-10 out-10 series)
It is clear from the plots of significant digits vs. iteration number that the pentagons displayed similarities, thus certain conclusions can be drawn. All the curves were downward sloping, i.e. the number of significant digits in the calculations decreased as more calculations were performed on the data. Seen from a different perspective, the error increased as more computations were executed (as expected). The initial decline of significant digits (or increase in error) began by slowly curving downwards and then, after a number of iterations (different for each pentagon), displayed loglinear (the significant digit is a log value) behavior (see the results of [Mara73a]). In general, the out iterations caused a steeper decline in the number of significant digits than the in iterations, mainly because the pentagon grows during the out iteration, causing any error in the input data to be magnified. Interestingly, when in iterations were performed after out iterations (for example, if the series was in-5 out-5 in-5 out-5) the plots exhibited a slight increase in the number of significant digits.
[3] It is not always possible to extend outwards starting with the initial pentagon, e.g. if two sides are parallel to each other. However, sometimes it may be possible but the result is not convex, e.g. if two semi-adjacent edges form angles of less than ninety degrees with the middle adjacent edge.
Figure 2: in-10 out-10 series (average significant digits of the vertices vs. iteration number)
Furthermore, the iterations are rotation invariant, namely, a pentagon and its rotated version resulted in similar plots. For the same reason, pentagons that were close to regular (equal angles) performed better than degenerate pentagons, implying that the shape of the pentagon was responsible for the curve as opposed to the actual coordinates of the vertices. Other series combinations (in-5 out-5 etc.) performed in the same manner as the in-10 out-10 series; the curves would decline during the out series and either stabilize or slightly improve initially during the in series (following out iterations) if the error generated previously was not overwhelming.
Based upon the experimentation, the Permutation-Perturbation method proved to be helpful in predicting the amount of error accumulated during cascading line intersection calculations. Although it has an associated cost of a factor of three or four times that of doing nothing, it has many significant advantages over the other methods. First, it is mathematically easy to understand and implement (the code for this method is less than 50 lines of C), which is no small achievement when dealing with numerical algorithms. It requires no special mathematical functions for the basic arithmetic operations and no special hardware (which may or may not exist). Unlike the geometric flavored methods, no normalization or object rearranging is required. It can be implemented with any algorithm without modification to the method or recalculation of the mathematics involved (unlike condition numbers), and the method's calculations do not get messier as the program's computations progress. There are no special cases (such as division by zero in interval arithmetic) since the Permutation-Perturbation method is not interested in the individual computations. It is also an "on-line" algorithm and can be used during the program's normal run, not only as an error estimator but also to set the "fuzz" values in a program. Finally, this method provides an accurate estimate of the errors accumulated during the computations without being overly pessimistic or optimistic.

[4] Intersecting perpendicular lines gives a more accurate result than intersecting those that are closer to parallel [Forr85a].
4.1. Multiple Precision
It is almost impossible to avoid increasing precision in order to boost accuracy. Assuming the cost of increased precision is somehow related to the amount of increase, one would like to avoid overkill, i.e. using much more precision than is actually necessary. If something is known about the accuracy of the data, and the degradation of precision likely to occur with the computations to be performed, then hopefully that knowledge can help determine the precision to use. The section that follows discusses the issues involved in attempting to increase precision in conjunction with an accuracy measure. The increase in precision can be accomplished with any multiple precision package. Unfortunately, most are implemented in software and are therefore slow and cumbersome to use.
4.2. Combining the two
The first problem that arises is merging the accuracy measure and the multiple precision package. The tools must be put together in an efficient manner to produce a viable and effective combination. The most obvious (and costly) route to a workable mix is the following:

1. Estimate an initial precision;
2. At each step of the computation, determine the number of significant digits remaining;
3. If the number in step 2 becomes too low, increase the initial precision estimate and start again.

This algorithm works but raises more questions than it answers, such as: What should the initial precision be? What is "too low"? And, how much precision is needed for the increase? There is also potential for gross inefficiency. If the "new" precision in step 3 is inadequate, the whole algorithm must be repeated (over and over) until the correct precision is attained. Many of these problems are dependent upon the particular set of calculations and the computing environment being used for the implementation. Even for simple iterative procedures, these questions remain.
Before proceeding, it is essential to further define two problem areas, namely, processes based on reusing computed data (i.e. cascading calculations) and cost functions. The simplest cascading process has three basic properties: the total number of iterations is known in advance; all the iterations accumulate error in a similar manner; and the sequence of calculations already performed is available at little or no cost. Thus, the time/space tradeoff for recording the computation history to ease rollbacks can be ignored. In what follows, we assume that a rollback to a previous computation is always free. Note that the pentagon problem has these three qualities and is therefore an ideal model for initial experimentation.
The next problem concerns the multiple precision package. [5] We would like to have a polynomial function f(p) such that computations of precision p require f(p) operations. In real environments, this is not necessarily valid. Computer hardware supports certain precisions (typically single and double, occasionally quad) for which computations are fast, while other calculations are done via software and are much slower. In this case f(p) grows quadratically for small p, and decreases towards O(p log p) as p increases (not including the extra time needed for communication between software processes). To simplify our development, f(p) is assumed to be either quadratic or linear.

[5] Code for a multiple precision program is given in [Knoe87]. See also [Schwa].
With these issues clarified, we are now able to proceed with the implementation. Here again, numerous problems arise, which we state along with some of our empirical observations:
4.3. Problem 1
How can the precision needed for future calculations be predicted? Some type of formula must be used to determine when, where and how to increase precision. The formula must account for the current precision, future computations, and future precision. If the rate of decline is stable and can be calculated during the computations, or if it is known in advance, then the number of significant digits of the final computed result can be estimated using the formula

    final_precision = a - i * m

where a is the number of significant digits currently (in the execution), i is the number of iterations left to be performed, and m is the rate of decline per iteration.
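As a sketch, with our own (hypothetical) names, this estimate is a one-line function that can be evaluated after every monitored iteration:

    /* Predicted significant digits at the end of the run:
       a = digits available now, iterations_left = i, decline_per_iter = m. */
    static double predict_final_precision(double a, int iterations_left,
                                          double decline_per_iter)
    {
        return a - iterations_left * decline_per_iter;
    }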
4.4. Problem 2
Assuming the diagnosis of Problem 1 is accurate, it is necessary to provide a remedy or cure for the cases where insufficient precision for future calculations is predicted. There are two possibilities: backtracking and performing previous calculations with higher precision; or increasing the precision of calculations done from that point henceforth. The problems with both of these solutions are apparent. Do they work? If so, which one is preferable? And, how much precision is needed for the increase?

Based upon some initial experimentation with the pentagon problem, increasing precision for future calculations without backtracking was ineffective in most instances in stabilizing the significant digit decline (although it did slow down the rate of decline). Therefore, rolling back iterations was required. But major questions still remain unanswered, such as, how far to backtrack, and how much additional precision is necessary for redoing the calculations. Making the wrong decision has a serious effect on the performance of the system, causing constant zigzagging or thrashing, i.e. repeated backtracking with increased precision until the correct amount is finally attained. An example of this is illustrated in Figure 3. The desired end precision for the execution (in-10 out-10) is 6 significant digits. Every time the program predicted that the current precision was not high enough to guarantee 6 digits after the complete twenty iterations, the precision was increased by two digits and backtracking was performed. As is evident from the graph, this is not the most efficient way to achieve the desired precision.
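The zigzagging arises from a control loop of roughly the following shape. This is a self-contained caricature with made-up constants and a stand-in for the real cascade, not the program we ran; it only shows why a small add-on precision forces many restarts.

    #include <stdio.h>

    #define DESIRED_DIGITS      6.0   /* desired end precision            */
    #define STARTING_PRECISION 10.0   /* initial working precision        */
    #define ADD_ON_DIGITS       2.0   /* precision added on each rollback */

    /* Stand-in for the real in-10 out-10 cascade: pretend a fixed number
       of significant digits is lost overall at any working precision.    */
    static double run_all_iterations(double precision)
    {
        return precision - 7.5;
    }

    int main(void)
    {
        double precision = STARTING_PRECISION;
        int rollbacks = 0;
        /* Restart the whole cascade at higher precision until the target
           is met.  With a small add-on this zigzags: several complete
           restarts before the correct precision is finally reached.      */
        while (run_all_iterations(precision) < DESIRED_DIGITS) {
            precision += ADD_ON_DIGITS;
            rollbacks++;
        }
        printf("needed precision %.0f after %d rollbacks\n", precision, rollbacks);
        return 0;
    }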
4.5. Problem 3
Ultimately, efficiency has to be a central component of the solution to this problem. The algorithm proposed for Problem 2 did not take efficiency into account; therefore, it was sufficient to add precision and backtrack as often as was necessary. Ideally, backtracking costs should be taken into account. To do so, a formula is needed which balances the cost of additional iterations at higher precision vs. the cost of overestimating the necessary precision. In our initial experiments, we observed no difference in comparative computational costs between cost functions which were linear and quadratic. Both suggested the same basic conclusion, namely, that backtracking should be avoided if possible, and thrashing should always be avoided.

Based upon the cost function, other questions arise. For example, to avoid zigzagging, it may be more beneficial to run the program in a "diagnostic" mode first and then run it again with the precision deemed necessary by the first run. This decision is heavily dependent on the cost function used. In particular, using a linear cost function for this two phase approach does not make sense, whereas for a squared cost function it becomes a more reasonable solution (if the precision to use for the second run can be determined). However, whether thrashing will occur with a particular set of input parameters (number of digits for the increase, the starting precision, etc.) cannot be known in advance.

Since the pentagon problem displays a linear degradation curve for the in-10 out-10 series, once the initial decline begins, the end precision can be calculated easily. Furthermore, if the precision calculated is deemed insufficient, a correct starting precision can be calculated and the program restarted.

There are a number of advantages to executing in this manner. Because the decline begins relatively soon after the first couple of iterations, only those need be repeated, thereby removing the additional expense of running the predictor method for the whole set of equations. In general, if the decline of precision can be caught early, the cost of fixing it will probably be less. Furthermore, by restarting the entire sequence of calculations, a log or history file is not required to keep track of which calculations were already performed (or need repeating).
Figure 3: zigzagging example (accuracy in significant digits vs. iteration number for the in-10 out-10 series, with the precision increases described above)
4.6. Problem 4
It is important to note that the method described here accomplishes two things. First, it produces the "best" answer, and second, it estimates the accuracy of that answer. There may be methods which perform the first function more efficiently (see [85a]); however, they do not perform the second function as well. Furthermore, Vignes' method is stochastic and is therefore prone to potential pitfalls which are inherent in any such approach. There is a tradeoff in numerical analytic techniques between reliability and accuracy. While condition numbers and interval arithmetic are reliable, they are worst case approximations and in many instances cannot be easily applied. This technique is accurate, although there may be unusual cases that slip through. However, it is very likely to give a fair indication of a computation's typical accuracy under normal circumstances [Kaha88b]. A further advantage is that it is less cumbersome to use than the other approaches. The possibility to do better with some other method is as yet an unexplored option.
4.7. Problem 5
Lastly, there is the issue of generalizing our results for problems which are more complex than the pentagon problem. Unless we know otherwise, we can only assume that each iteration does computations with the same tendency towards roundoff errors. If the exact cascading process is unknown, we may want to run preliminary tests to gain some insight into the expected number of iterations that will be applied to individual data during the cascading process. Otherwise, every time the program is executed with a different set of data, an initial run at low precision can be performed to estimate this amount (in diagnostic mode).
For the pentagon problem the cascading sequence determined the behavior and characteristics of the corresponding precision plot. Namely, the in-10 out-10 series displayed linear decline and so the correct precision was easy to calculate (see above). For the in-out-in-out... series the graph would slightly improve during an in iteration following an out iteration. Therefore, a formula for predicting the final precision would have to be adjusted for this increase. However, since the characteristics of the general behavior of the pentagons are known for all these cases (most of the iteration sequences), it is easy to account for the cases in a program which performs the pentagon problem. Any problem can be studied in this manner (the Permutation-Perturbation method accommodates this easily) and the different cases noted.
The final issue is the time/space tradeoff of storing the cascade if the intersection pattern is not known. At one extreme, we could store nothing and restart the process whenever precision gets too low. At the other extreme, the entire sequence of calculations can be recorded, simplifying backtracking, as in the pentagon problem (although for the pentagon problem each iteration was essentially the same as the previous one). Further study is necessary to fully resolve these questions.

We have begun to test this method with a hidden surface elimination routine (based on polygon intersections) and have found that the precision degrades slower than for the pentagon problem. The reason for this is that generally when a "new" line is formed only one of the vertices is "newly" computed, while the other is from the original line.
One of the main difficulties in implementing many numerical processes with floating point arithmetic is that various epsilon or fuzz values must be set to compensate for inexact computations. The epsilon values appear throughout a program to guide decision making functions. Unfortunately, most of these epsilon values are set arbitrarily at the start of program execution and do not account for changes in the precision of the data that result from performing repeated calculations (e.g. cascaded calculations). If an accurate estimate of the data precision is available, the epsilon values can be set based on that precision and can be "upgraded" to reflect the change in the accuracy of the data.
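For example, a comparison tolerance can be derived from the current estimate rather than being fixed at program start. In this sketch the scaling by operand magnitude is our own choice, and the names are hypothetical.

    #include <math.h>

    /* Fuzz value for comparing quantities known to carry only `digits`
       significant decimal digits; it grows as the estimate degrades. */
    static double fuzz(double magnitude, double digits)
    {
        return fabs(magnitude) * pow(10.0, -digits);
    }

    /* Treat a and b as equal if they agree to the digits we currently trust. */
    static int fuzzy_equal(double a, double b, double digits)
    {
        return fabs(a - b) <= fuzz(fabs(a) > fabs(b) ? a : b, digits);
    }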
Furthermore, this method is very useful as a debugging tool to track and identify numerical problems which cause program failures. Generally, one computation does not cause problems, but a series of calculations slowly leads to the buildup of catastrophic error. The Permutation-Perturbation method allows the tracking of this buildup, enabling more detailed analysis of the algorithm used.
5. Conclusions and Leftovers
As is the nature of experimental work in computer science, as opposed to research done in more theoretical areas, different results are observed and, because of that, various solutions are proposed. Our aim here has been to explore some of these differences on a basic problem of significant practical import and provide the groundwork for additional research in this area.

The initial results are promising, showing that more work needs to be done to precisely formulate the exact requirements of a system like this. Using an error estimator coupled with the right "guesswork" as to when to increase precision and by how much, leads to a robust solution for cascading line intersection, as well as for other more complex problems of computational geometry. The problem of completely removing the "guesswork" remains open. Unfortunately, the cost is high regardless of how it is measured, but this is only to be expected. As Demmel [Demm86a] observes, "In short, there is no free lunch when trying to write reliable code." However, our solution seems to be less expensive than many others.
6. Acknowledgements
We would like to thank Prof. William Kahan for his helpful comments and suggestions.
Figure 4: Cost Function Examples (accuracy in significant digits vs. iteration number; desired end precision for all graphs = 5; the panels use different starting and add-on precisions, e.g. starting precision 10 with add-on precision 2, starting precision 15 with add-on precision 10, and starting precision 10 with add-on precision 7 together with a lower-bound threshold of 4: roll back occurs if a value has fewer significant digits than the threshold, even though the average is above it)
References
85a. "IEEE Standard for Binary Floating-Point Arithmetic," ANSI/IEEE Std 754-1985, 1985.

Demm86a. Demmel, J., "On Error Analysis in Arithmetic With Varying Relative Precision," TR 251, NYU Dept. of Computer Science, October 1986.

Faye85a. Faye, J. P. and J. Vignes, "Stochastic Approach Of The Permutation-Perturbation Method For Round-Off Error Analysis," Applied Numerical Mathematics 1, pp. 349-362, 1985.

Forr85a. Forrest, A. R., "Computational Geometry in Practice," in Fundamental Algorithms for Computer Graphics, ed. R. A. Earnshaw, Springer-Verlag, 1985.

Forr87a. Forrest, A. R., "Geometric Computing Environments: Computational Geometry Meets Software Engineering," Proceedings of the NATO Advanced Study Institute on TFCG & CAD, Il Ciocco, Italy, July 1987.

Gree86a. Greene, D. and F. Yao, "Finite Resolution Computational Geometry," 27th Annual FOCS Conference Proceedings, pp. 143-152, Oct. 1986.

Hoff88b. Hoffmann, C., "The Problem of Accuracy and Robustness in Geometric Computation," Tech. Report CSD-TR-771 (CAPO Report CER-87-24), Computer Science Dept., Purdue University, April 1988.

Hoff88a. Hoffmann, C., J. Hopcroft, and M. Karasick, "Towards Implementing Robust Geometric Computations," Proceedings of the ACM Symposium on Computational Geometry, June 1988.

Kaha88a. Kahan, W., "A Computer Program with Almost No Significance," Work in progress, 1988.

Kaha88b. Kahan, W., Private Communication, October 1988.

Knot87a. Knott, G. and E. Jou, "A Program to Determine Whether Two Line Segments Intersect," Technical Report CAR-TR-306, CS-TR-1884, DCR-86-05557, Computer Science Dept., University of Maryland at College Park, August 1987.

Kuli81a. Kulisch, U. W. and W. L. Miranker, Computer Arithmetic in Theory and Practice, Academic Press, New York, NY, 1981.

Madu84a. Madur, S. and P. Koparkar, "Interval Methods for Processing Geometric Objects," IEEE Computer Graphics and Applications, pp. 7-17, February 1984.

Mail79a. Maille, M., "Methodes d'evaluation de la precision d'une mesure ou d'un calcul numerique," Rapport LITP, Universite P. et M. Curie, Paris, France, 1979.

Mair87a. Mairson, H. and J. Stolfi, "Reporting and Counting Intersections Between Two Sets of Line Segments," Proceedings of NATO ASI on TFCG & CAD, July 1987.

Mara73a. Marasa, J. and D. Matula, "A Simulative Study of Correlated Error Propagation in Various Finite-Precision Arithmetics," IEEE Transactions on Computers, vol. C-22, no. 6, pp. 587-597, June 1973.

Mile86a. Milenkovic, V., "Verifiable Implementations of Geometric Algorithms Using Finite Precision Arithmetic," 2nd Workshop on Geometric Reasoning, Oxford, England, 1986.

Mill80a. Miller, Webb and Celia Wrathall, Software For Roundoff Analysis of Matrix Algorithms, Academic Press, New York, 1980.

Moor66a. Moore, R. E., Interval Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1966.

Ottm87a. Ottmann, T., G. Thiemt, and C. Ullrich, "Numerical Stability of Geometric Algorithms," Proceedings of the ACM Symposium on Computational Geometry, pp. 119-125, June 1987.

Rams82a. Ramshaw, Lyle, "The Braiding of Floating Point Lines," CSL Notebook Entry, Xerox PARC, October 1982.

Schwa. Schwarz, J., "Guide to the C++ Real Library (Infinite Precision Floating Point)," Unpublished Manuscript, AT&T Bell Laboratories.

Sega85a. Segal, M. and C. Sequin, "Consistent Calculations for Solids Modeling," Proc. of the ACM Symposium on Computational Geometry, pp. 29-38, 1985.

Sega88a. Segal, M. and C. Sequin, "Partitioning Polyhedral Objects into Nonintersecting Parts," IEEE Computer Graphics & Applications, January 1988.

Silv88a. Silver, D., "Geometry, Graphics, & Numerical Analysis," Ph.D. Thesis (in preparation), Princeton University, 1988.

Vand78a. Vandergraft, J., Introduction to Numerical Computations, Academic Press, New York, 1978.

Vign78a. Vignes, J., "New Methods For Evaluating The Validity Of The Results Of Mathematical Computations," Mathematics and Computers in Simulation XX, pp. 227-249, 1978.

Vign74a. Vignes, J. and M. La Porte, "Error Analysis In Computing," Proceedings IFIP Congress, pp. 610-614, Stockholm, 1974.

Wilk63a. Wilkinson, J., Rounding Errors in Algebraic Processes, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1963.