
Survey and Recent Results:

Robust Geometric Computation

Chee Yap
Department of Computer Science
Courant Institute
New York University

Oct 18, 2001 Talk @ MIT 1


OVERVIEW

Part I: NonRobustness Survey

Part II: Exact Geometric Computation
 Core Library
 Constructive Root Bounds

Part III: New Directions
 Moore’s Law and NonRobustness
 Certification Paradigm

Conclusion

Oct 18, 2001 Talk @ MIT 2


Numerical
Nonrobustness
Phenomenon

Oct 18, 2001 Talk @ MIT 3


Part I: OVERVIEW

The Phenomenon

What is Geometric?

Taxonomy of Approaches

EGC and relatives

Oct 18, 2001 Talk @ MIT 4


Numerical Non-robustness

Non-robustness phenomenon
 crash, inconsistent state, intermittent

Round-off errors
 benign vs. catastrophic
 quantitative vs. qualitative
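To make the quantitative/qualitative distinction above concrete, here is a minimal C++ sketch (not from the talk). The rounding error itself is tiny and benign; the branch taken on it is what turns it into a qualitative error.

#include <cstdio>

int main() {
    double a = 0.1 + 0.2;   // rounds to 0.30000000000000004...
    double b = 0.3;         // rounds to 0.29999999999999998...
    // Quantitative error: the difference is about 5.5e-17 -- usually harmless.
    std::printf("a - b = %g\n", a - b);
    // Qualitative error: a program that branches on the comparison takes a
    // different path than the real-number computation would.
    if (a == b)
        std::printf("equal branch\n");
    else
        std::printf("unequal branch (this is what IEEE doubles give)\n");
    return 0;
}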

Oct 18, 2001 Talk @ MIT 5


Examples

Intersection of 2 lines
 check if intersection point is on line

Mesh Generation
 point classification error (dirty meshes)

Trimmed algebraic patches in CAD
 bounding curves are approximated leading
to topological inconsistencies

Front Tracking Physics Simulation
 front surface becomes self-intersecting

Oct 18, 2001 Talk @ MIT 6


Responses to
Non-robustness

“It is a rare event”

“Use more stable algorithms”

“Avoid ill-conditioned inputs”

“Epsilon-tweaking”

“There is no solution”

“Our competitors couldn’t do it, so
we don’t have to bother”

Oct 18, 2001 Talk @ MIT 7


Impact of Non-robustness

Acknowledged, seldom demanded

Economic/productivity Impact
 barrier to full automation
 scientist/programmer productivity
 mission-critical computations fail
 Patriot missile, Ariane rocket

E.g. Mesh generation
 a preliminary step for simulations
 1 failure/5 million cells [Aftosmis]
 tweak data if failure

Oct 18, 2001 Talk @ MIT 8


What is Special about
Geometry?

Oct 18, 2001 Talk @ MIT 9


Geometry is Harder

Geometry =
Combinatorics+Numerics

E.g. Voronoi Diagram

Oct 18, 2001 Talk @ MIT 10


Example: Convex Hulls
[Figure: three panels, Input / Convex Hull / Output, showing nine numbered points, their convex hull, and the computed output structure]

Oct 18, 2001 Talk @ MIT 11


Consistency

Geometric Object
 D = (G, L, P)
 G = graph, L = labeling of G

Consistency Relation P

E.g. D is a convex hull or a Voronoi
diagram

Qualitative error ⇒ inconsistency

Oct 18, 2001 Talk @ MIT 12


Examples/Nonexamples

Consistency is critical
 matrix multiplication
 shortest paths in graphs (e.g.
Dijkstra's algorithm)
 sorting and geometric sorting
 Euclidean shortest paths

Oct 18, 2001 Talk @ MIT 13


Taxonomy of Approaches

Oct 18, 2001 Talk @ MIT 14


Gold Standard

Must understand the dominant mode
of numerical computing

“F.P. Mode” :
 machine floating point
 fixed precision (single/double/quad)
 IEEE 754 Standard

What does the IEEE standard do for
nonrobustness? It reduces it but does not eliminate it.
Its main contribution is cross-platform predictability.

Historical Note

Oct 18, 2001 Talk @ MIT 15


Type I Approaches

Basic Philosophy: to make the fast
but imprecise (IEEE) arithmetic robust

Taxonomy
 arithmetic (FMA, scalar vector, sli, etc)
 finite resolution predicates (ε-tweaking, ε-predicates
[Guibas-Salesin-Stolfi'89])
 finite resolution geometry (e.g., grids)
 topology oriented approach [Sugihara-Iri’88]

Oct 18, 2001 Talk @ MIT 16


Example

Grid Geometry [Greene-Yao’86]


Finite Resolution Geometries

Oct 18, 2001 Talk @ MIT 17


Example

What is a Finite Resolution Line?
 A suitable set of pixels [graphics]
 A fat line [generalized intervals]
 A polyline [Yao-Greene, Milenkovic, etc]
 A rounded line [Sugihara]

fat line:
polyline:

rounded line:
aX + bY + c = 0;  (|a| < 2^L, |b| < 2^L, |c| < 2^(2L))

Oct 18, 2001 Talk @ MIT 18


Example

Topology Oriented Approach of
Sugihara-Iri: Voronoi diagram of 1 million
points

Priority of topological part over
numerical part

Identify relevant and maintainable
properties: e.g. planarity

Issue: which properties to choose?

Oct 18, 2001 Talk @ MIT 19


Exact Approach

Idea: avoid all errors

Big number packages (big integers, big
rationals, big floats, etc)

Only rational problems can be solved exactly this way

Even this is a problem [Yu’92, Karasick-
Lieber-Nackman’89]
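For purely rational predicates the exact approach is straightforward. A minimal sketch (not from the talk): the 2-D orientation predicate evaluated over exact integers; a big-integer type would remove the coordinate-size restriction noted in the comment.

#include <cstdint>
#include <cstdio>

// Sign of the 2x2 determinant | bx-ax  by-ay |
//                             | cx-ax  cy-ay |
// evaluated exactly, assuming coordinates fit in 30 bits so the products
// cannot overflow 64-bit arithmetic.  A BigInt type removes this limit.
int orientation(int64_t ax, int64_t ay, int64_t bx, int64_t by,
                int64_t cx, int64_t cy) {
    int64_t det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    return (det > 0) - (det < 0);   // +1 = left turn, 0 = collinear, -1 = right turn
}

int main() {
    // Collinear points are classified as collinear -- always, not just usually.
    std::printf("%d\n", orientation(0, 0, 1 << 20, 1 << 20, 1 << 29, 1 << 29));
    return 0;
}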

Oct 18, 2001 Talk @ MIT 20


Algebraic Computation

Algebraic number:
 e.g., α = √2, the root of
 P(x) = x² – 2 = 0 in [1, 2]

Representation: α ≅ (P(x), [1, 2])
 (defining polynomial + isolating interval)

Exact manipulation:
 comparison
 arithmetic operations, roots, etc.

Most problems in Computational
Geometry textbooks require only
+, –, ×, ÷, √
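As a small illustration of exact manipulation (a sketch, not library code): the sign of √x + √y – √z for nonnegative integers can be decided with integer arithmetic alone, by isolating and squaring the radicals.

#include <cstdio>

// Sign of sqrt(x) + sqrt(y) - sqrt(z) for integers 0 <= x, y, z <= 10^9
// (the bound keeps every intermediate product inside 64 bits).
int signSqrtSum(long long x, long long y, long long z) {
    long long s = x + y - z;
    if (s > 0) return +1;                          // (sqrt x + sqrt y)^2 >= x + y > z
    if (s == 0) return (x == 0 || y == 0) ? 0 : +1;
    // Now z - x - y > 0: compare 2*sqrt(x*y) with z - x - y by squaring once more.
    long long lhs = 4 * x * y;
    long long rhs = (z - x - y) * (z - x - y);
    return (lhs > rhs) - (lhs < rhs);
}

int main() {
    std::printf("%d\n", signSqrtSum(1, 4, 9));    // sqrt1 + sqrt4 = sqrt9  ->  0
    std::printf("%d\n", signSqrtSum(2, 3, 10));   // sqrt2 + sqrt3 < sqrt10 -> -1
    return 0;
}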

Oct 18, 2001 Talk @ MIT 21


Type II Approach

Basic Philosophy: to make slow but
error-free computation more efficient

Taxonomy
 Exact Arithmetic (naïve approach)
 Expression packages
 Compiler techniques
 Consistency approach
 Exact Geometric Computation (EGC)

Oct 18, 2001 Talk @ MIT 22


Consistency Approach

Goal: ensure that no decisions are
contradictory

Parsimonious Algorithms [Fortune’89] :
only make tests that are independent of
previous results

NP-hard but in PSPACE

Oct 18, 2001 Talk @ MIT 23


Consistency is Hard

[Hoffmann, Hopcroft, Karasick’88]

Geometric Object D = (G, L)

G is realizable if there exists L such that (G,
L) is consistent

Algorithm AL: I ↦ ( G(I), L(I) )

 AL is geometrically exact if G(I) is the
exact structure for input I
 AL is consistent if G(I) is realizable

Intersecting 3 convex polygons is hard
(geometry theorem proving)

Oct 18, 2001 Talk @ MIT 24


Exact Geometric
Computation

Oct 18, 2001 Talk @ MIT 25


Part II: OVERVIEW

Exact Geometric Computing (EGC)

The Core Library

Constructive Root Bounds

Oct 18, 2001 Talk @ MIT 26


How to Compute Exactly
in the Geometric Sense

Algorithm = sequence of steps
 construction steps
 conditional or branching steps

Branching based on the sign (–, 0, +) of a
predicate evaluation

Output combinatorial structure G in
D=(G,L) is determined by path

Ensure all branches are correct
 this guarantees that G is exact!

Oct 18, 2001 Talk @ MIT 27


Exact Geometric
Computation (EGC)

Exactness in the Geometry, NOT the
arithmetic (cf.geometric exactness)

Simple but profound implications
 We can now use approximate arithmetic [Dube-
Yap’94]
 EGC tells us exactly how much precision is
needed

No unusual geometries
 No need to invent new algorithms --
“standard” algorithms apply

General solution (algebraic case)
 Not algorithm-specific solutions!

Oct 18, 2001 Talk @ MIT 28


Constant Expressions

Ω = set of real algebraic operators.

E(Ω) = set of expressions over Ω.

E.g., if Ω = {+, –, ×} ∪ Z ∪ {x1, x2} then E(Ω) is the
set of integer polynomials over x1, x2.

Assume the operators in Ω are constant operators (no
variables like x1, x2).

An expression E ∈ E(Ω) is a DAG
 (common subexpressions appear as shared nodes)

Value function, Val: E(Ω) → R,
where Val(E) may be undefined
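A toy sketch (far simpler than a real expression package) of such a DAG for Ω = {+, –, ×, ÷} ∪ Z: nodes can be shared, and Val may be undefined (division by zero). Exactness is not the point here, only the expression/value distinction, so a double stands in for the real value.

#include <cstdio>
#include <memory>
#include <optional>

struct Node;
using NodePtr = std::shared_ptr<Node>;   // sharing turns the tree into a DAG

struct Node {
    char op;                 // 'k' = integer constant, or '+', '-', '*', '/'
    long long k = 0;         // value when op == 'k'
    NodePtr left, right;
};

NodePtr num(long long k) { return NodePtr(new Node{'k', k, nullptr, nullptr}); }
NodePtr bin(char op, NodePtr a, NodePtr b) { return NodePtr(new Node{op, 0, a, b}); }

// Val : expressions -> R, possibly undefined (empty optional).
std::optional<double> Val(const NodePtr& e) {
    if (e->op == 'k') return double(e->k);
    auto a = Val(e->left), b = Val(e->right);
    if (!a || !b) return std::nullopt;
    switch (e->op) {
        case '+': return *a + *b;
        case '-': return *a - *b;
        case '*': return *a * *b;
        case '/': if (*b == 0) return std::nullopt;
                  return *a / *b;
    }
    return std::nullopt;
}

int main() {
    NodePtr x = num(7);                                              // shared node
    NodePtr e = bin('*', bin('+', x, num(1)), bin('-', x, num(1)));  // (x+1)*(x-1)
    if (auto v = Val(e)) std::printf("Val(E) = %g\n", *v);           // 48
    return 0;
}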

Oct 18, 2001 Talk @ MIT 29


Fundamental Problem of EGC

Constant Zero Problem
 CZP: given E , is Val(E)=0?

Constant Sign Problem
 CSP: given E , compute the
sign of Val(E).

CSP is reducible to CZP

Potential exponential gap:
 sum of square roots
 CZP is polynomial time [Bloemer-Yap]

Oct 18, 2001 Talk @ MIT 30


Complexity of CSP

Hierarchy:
 Ω0 = {+, –, ×} ∪ Z
 Ω1 = Ω0 ∪ {÷}
 Ω2 = Ω1 ∪ {√}
 Ω3 = Ω2 ∪ { Root(P,i) : P(x) ∈ Z[x] }
 Ω4 = Ω0 ∪ { Sin(x), ... }

Theorem: CSP(Ω3) is decidable.

Theorem: CSP(Ω1) is in alternating
polynomial time.

Is CSP(Ω4) decidable?
Oct 18, 2001 Talk @ MIT 31
Root Bound

A root bound b > 0 for an
expression E is a value such that
 E ≠ 0 implies |E| ≥ b

E.g., Cauchy's bound gives
 |√3 – √2| ≥ 1/11
because √3 – √2 is a root of the polynomial
x⁴ – 10x² + 1.

The root bit-bound is defined as –log(b)

Oct 18, 2001 Talk @ MIT 32


How to use root bounds

Let b be a root bound for E.

Compute a numerical
approximation Ẽ of E with
 |Ẽ – E| < b/2.

If |Ẽ| ≥ b/2 then
 sign(Ẽ) = sign(E)

Else sign(E) = 0.

N.B. The root bound is not reached
unless the sign is really zero!
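A self-contained sketch of this recipe (not the Core Library) for the toy expression E = √a – √b with integers a, b ≤ 10^6: approximate E to absolute error below 2^-k for increasing k, stop as soon as the approximation certifies a nonzero sign, and declare E = 0 once 2^(1-k) drops below the crude but valid root bound 1/(2(1+max(a,b))).

#include <cmath>
#include <cstdio>

// floor(sqrt(n)) computed exactly.
unsigned long long isqrt(unsigned long long n) {
    unsigned long long r = (unsigned long long)std::sqrt((long double)n);
    while (r * r > n) --r;
    while ((r + 1) * (r + 1) <= n) ++r;
    return r;
}

// Sign of E = sqrt(a) - sqrt(b), for integers a, b <= 10^6.
// If a != b then |E| = |a-b|/(sqrt(a)+sqrt(b)) >= 1/(2(1+max(a,b))), the root bound.
int signSqrtDiff(unsigned long long a, unsigned long long b) {
    unsigned long long M = (a > b) ? a : b;
    for (int k = 1; ; ++k) {
        // approx / 2^k approximates E with absolute error < 2^-k.
        long long approx = (long long)isqrt(a << (2 * k)) - (long long)isqrt(b << (2 * k));
        if (approx >= 2) return +1;          // |E| >= approx/2^k - 2^-k > 0
        if (approx <= -2) return -1;
        // Here |E| < 2^(1-k); once that is below the root bound, E must be 0.
        if ((1ULL << k) >= 4 * (M + 1)) return 0;
    }
}

int main() {
    std::printf("%d %d %d\n", signSqrtDiff(999999, 1000000),   // -1
                              signSqrtDiff(123456, 123456),    //  0
                              signSqrtDiff(2, 1));             // +1
    return 0;
}

The Core Library plays the same game for arbitrary algebraic expressions, with BigFloat approximations in place of the scaled integers and the constructive root bounds surveyed below in place of the crude bound used here.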

Oct 18, 2001 Talk @ MIT 33


Nominally Exact Inputs

EGC Inputs are exact and
consistent

Why care about exactness if the
input is inexact? Because EGC is the
easiest method to ensure consistency.

Oct 18, 2001 Talk @ MIT 34


Core Library

Oct 18, 2001 Talk @ MIT 35


EGC Libraries

GOAL: any programmer may routinely
construct robust programs*

Current Libraries:
 Real/Expr [Yap-Dube’94]
 LEDA real [Burnikel et al’99]
 Core Library [Karamcheti et al’99]

Oct 18, 2001 Talk @ MIT 36


Core Library

An EGC Library
 C++, compact (200 KB)
 Focus on “Numerical Core” of EGC
 precision-sensitive mechanism
 automatically incorporates state-of-the-art
techniques

Key Design Goal: ease of use
 “Regular C++ Program” with preamble:
#include “CORE.h”
 easy to convert existing programs

Oct 18, 2001 Talk @ MIT 37


Core Accuracy API

Four Levels
(I) Machine Accuracy (IEEE standard)
(II) Arbitrary Accuracy (e.g. Maple)
(III) Guaranteed Accuracy (e.g.
Real/Expr)
(IV) Mixed Accuracy (for fine control)

Oct 18, 2001 Talk @ MIT 38


Delivery System

No change of programmer behavior
 At the flip of a switch!
 Benefits: code logic verification, fast debugging

#define Level N // N=1,2,3,4


#include "CORE.h"

/*
*****************************
* any C/C++ Program Here *
*****************************
*/
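A hypothetical complete program in this style (a sketch only; the macro and header spellings follow the slide and may differ across Core Library releases, and the sqrt overload for Expr is assumed). At Level 3 the doubles below are promoted to Expr, so the equality test is decided exactly; at Level 1 the same source compiles to ordinary floating point and the test typically fails.

#define Level 3
#include "CORE.h"
#include <iostream>

int main() {
    double x = 2.0;          // at Level 3 this is an Expr holding exactly 2
    double y = sqrt(x);      // exact sqrt(2) at Level 3, IEEE sqrt at Level 1
    if (y * y == x)
        std::cout << "sqrt(2) * sqrt(2) == 2" << std::endl;
    else
        std::cout << "round-off broke the identity" << std::endl;
    return 0;
}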

Oct 18, 2001 Talk @ MIT 39


What is in CORE levels?

Numerical types:
 int, long, float, double
 BigInt, BigFloat, Expr

Promotions (+Demotions):
 Level II: long → BigInt,
double → BigFloat
 Level III: long, double → Expr

Oct 18, 2001 Talk @ MIT 40


What is in Level III?

Fundamental gap between Levels
II and III

Need for iteration: consider
a = b + c;

Precision sensitive evaluation

Oct 18, 2001 Talk @ MIT 41


Relative and Absolute Precision


Let X̃ be a real approximation of X.

Composite precision bound [r, a]:
 |X̃ – X| ≤ max( 2^-r |X|, 2^-a )

If r = ∞, then we get absolute precision a.

If a = ∞, then we get relative precision r.

Interesting case: [r, a] = [1, ∞] means we
obtain the correct sign of X.
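In code the composite bound reads as follows (a tiny sketch using the reading above: the approximation must meet the relative tolerance or the absolute one, whichever is weaker).

#include <cmath>
#include <cstdio>

// True iff xApprox meets composite precision [r, a] with respect to x.
bool meetsPrecision(double x, double xApprox, double r, double a) {
    double tol = std::fmax(std::pow(2.0, -r) * std::fabs(x),   // relative part: 2^-r |x|
                           std::pow(2.0, -a));                 // absolute part: 2^-a
    return std::fabs(xApprox - x) <= tol;
}

int main() {
    // [r, a] = [1, infinity]: any value within |x|/2 of x, enough to fix the sign of x.
    std::printf("%d\n", meetsPrecision(8.0, 5.0, 1.0, INFINITY));   // prints 1
    return 0;
}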

Oct 18, 2001 Talk @ MIT 42


Precision-Driven Eval of
Expressions

Expr’s are DAGs

Each node stores: an approximate
BigFloat value; a precision; a root bound

Down-Up Algorithm:
 precision p is propagated down
 error e propagated upwards
 At each node, check e ≤ p
 Check passes automatically at leaves
 Iterate if check fails; use root bounds to
terminate

Oct 18, 2001 Talk @ MIT 43


Example

Line intersection (2-D):
 generate 2500 pairs of lines
 compute their intersections
 check if intersection lies on one line
 40% failure rate at Level I

In 3-D:
 classify pairs of lines as skew, parallel,
intersecting, or identical.
 At Level I, some pairs are classified as both
parallel and intersecting, etc.
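A sketch of the 2-D experiment (not the original test program): intersect random integer-coefficient lines in double precision, then check naively whether the computed point satisfies the first line equation. Most of the exact-equality checks fail; the slide reports a 40% failure rate at Level I, and running the same code at Level 3, with doubles promoted to Expr, removes the failures.

#include <cstdio>
#include <cstdlib>

int main() {
    std::srand(42);
    int failures = 0, trials = 2500;
    for (int t = 0; t < trials; ++t) {
        // Two random lines a*x + b*y = c with small integer coefficients.
        double a1 = std::rand() % 100 + 1, b1 = std::rand() % 100 + 1, c1 = std::rand() % 100;
        double a2 = std::rand() % 100 + 1, b2 = std::rand() % 100 + 1, c2 = std::rand() % 100;
        double det = a1 * b2 - a2 * b1;
        if (det == 0) continue;                  // skip parallel pairs
        double x = (c1 * b2 - c2 * b1) / det;    // Cramer's rule
        double y = (a1 * c2 - a2 * c1) / det;
        // "Is the intersection point on the first line?" -- tested naively.
        if (a1 * x + b1 * y != c1) ++failures;
    }
    std::printf("%d of %d checks failed at machine precision\n", failures, trials);
    return 0;
}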

Oct 18, 2001 Talk @ MIT 44


Example: Theorem
Proving Application

Kahan’s example (4/26/00)
 “To show that you need theorem proving, or
why significance arithmetic is doomed”
 F(z): if (z=0) return 1; else (exp(z)-1)/z;
 Q(y): |y-sqrt(y**2+1)| - 1/(y+sqrt(y**2+1));
 G(x): F(Q(x)**2).
 Compute G(x) for x=15 to 9999.

Theorem proving with Core Library [Tulone-
Yap-Li’00]
 Generalized Schwartz Lemma for radical
expressions
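A direct transcription of F, Q and G above into double precision (a sketch, not Kahan's original code). Since Q(y) is identically zero, G(x) should equal 1 for every x; in floating point Q(x)² is a tiny nonzero value z for which exp(z) rounds to 1, so the computed G(x) typically comes out 0 (or 1 on the rare inputs where Q evaluates to exactly 0).

#include <cmath>
#include <cstdio>

double F(double z) { return z == 0 ? 1.0 : (std::exp(z) - 1.0) / z; }
double Q(double y) { return std::fabs(y - std::sqrt(y * y + 1.0)) - 1.0 / (y + std::sqrt(y * y + 1.0)); }
double G(double x) { return F(Q(x) * Q(x)); }

int main() {
    for (int x = 15; x <= 20; ++x)            // the talk runs x = 15 .. 9999
        std::printf("G(%d) = %g   (true value: 1)\n", x, G(x));
    return 0;
}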

Oct 18, 2001 Talk @ MIT 45


Constructive Root Bounds

Oct 18, 2001 Talk @ MIT 46


Problem of Constructive
Root Bounds

Classical root bounds (e.g.
Cauchy’s) are not constructive

Wanted: recursive rules for a
family of expressions to maintain
parameters p1, p2, etc, for any
expression E so that B(p1, p2, ...) is
a root bound for E.

We will survey various bounds

Oct 18, 2001 Talk @ MIT 47


Illustration
α is a root of A(X) = Σ_{i=0..n} a_i X^i of degree n

Height of A(X) is ||A||_∞ = max_i |a_i|

Degree-height [Yap-Dube 95]
 maintain bounds on heights
 Cauchy's bound:  |α| ≥ 1 / (1 + ||A||_∞)  (for α ≠ 0)
 but this requires the maintenance of bounds
on degrees

Degree-length [Yap-Dube 95]
 Landau's bound:  |α| ≥ 1 / ||A||_2
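A small sketch (not from the talk) that evaluates both classical bounds above for a concrete integer polynomial; the root bit-bound is the negated logarithm of the bound.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Root bit-bounds for a nonzero root of A(X) = a0 + a1*X + ... + an*X^n,
// assuming integer coefficients with a0 != 0.
// Cauchy:  |alpha| >= 1 / (1 + ||A||_inf)     Landau:  |alpha| >= 1 / ||A||_2
int main() {
    std::vector<double> a = {1, 0, -10, 0, 1};   // x^4 - 10x^2 + 1, as in the root-bound example
    double height = 0, len2 = 0;
    for (double ai : a) {
        height = std::max(height, std::fabs(ai));
        len2 += ai * ai;
    }
    double cauchy = 1.0 / (1.0 + height);
    double landau = 1.0 / std::sqrt(len2);
    std::printf("Cauchy bit-bound: %g\n", std::ceil(-std::log2(cauchy)));
    std::printf("Landau bit-bound: %g\n", std::ceil(-std::log2(landau)));
    return 0;
}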

Oct 18, 2001 Talk @ MIT 48


Degree-Measure Bounds

[Mignotte 82, Burnikel et al 00]
 E ≠ 0 implies |E| ≥ 1 / m(E),
where m(E) is the measure of E.

It turns out that we also need to maintain
the degree bound

Oct 18, 2001 Talk @ MIT 49


BFMS Bound

[Burnikel-Fleischer-Mehlhorn-Schirra'99]
 For radical expressions
 Tight for division-free expressions
 For those with divisions, where E is
transformed to U(E)/L(E):
   E ≠ 0 implies |E| ≥ ( u(E)^(D(E)²–1) · l(E) )^(–1)
 Improvement ['01]

Oct 18, 2001 Talk @ MIT 50


New Constructive Bound

Applies to general Ω3-expressions.

The bounding function is given in terms of:

 μ(E): an upper bound on the absolute
value of the conjugates of E
 lc(E): an upper bound on the leading
coefficient of Irr(E)
 D(E): an upper bound on the degree of
Irr(E)

Oct 18, 2001 Talk @ MIT 51


Inductive Rules

Oct 18, 2001 Talk @ MIT 52


Comparative Study

Major open question: is there a root bit-
bound that depends linearly on degree?

No single constructive root bound is
always better than the others.
 BFMS, degree-measure and our bound
 Compare their behavior on interesting
classes of expressions:
 sum of square roots,
 continued fraction, etc.
 In Core Library, we compute all three
bounds and choose the best one.

Oct 18, 2001 Talk @ MIT 53


Experimental Results

A critical predicate in Fortune's sweepline algorithm
for Voronoi diagrams is an expression involving
square roots of the input quantities.

Input coordinates are L-bit long; the a's, b's and d's are
3L-, 6L- and 2L-bit integers, respectively.
 The BFMS bound: (79 L + 30) bits
 The degree-measure bound: (64 L + 12) bits
 Our new bound: (19 L + 9) bits
 Best possible [Sellen-Yap]: (15 L + O(1)) bits

Oct 18, 2001 Talk @ MIT 54


Timing on Synthetic Input
Timings for Fortune’s predicate on degenerate
inputs (in seconds)

L      10     20     50     100     200
NEW    0.01   0.03   0.12   0.69    3.90
BFMS   0.03   0.24   1.63   11.69   79.43
D-M    0.03   0.22   1.62   10.99   84.54
Tested on a Sun UltraSparc (440MHz, 512 MB)

Oct 18, 2001 Talk @ MIT 55


Timing for Degenerate
input
Timings on Fortune’s algorithm on degenerate
inputs on a uniform (32 x 32) grid with L-bit
coordinates
(in seconds)

L      10      20      30      50
NEW    35.2    41.7    47.5    112.3
BFMS   86.1    1014.1  1218.1  5892.2
D-M    418.5   1681.6  1874.4  > 2 hrs

Oct 18, 2001 Talk @ MIT 56


Moore’s Law and
Non-robustness Trends

Oct 18, 2001 Talk @ MIT 57


Part III: OVERVIEW

New Directions

Robustness as Resource
 (or, how to exploit Moore’s Law)

Certification Paradigm
 (or, how to use incorrect algorithms)

Oct 18, 2001 Talk @ MIT 58


Moore’s Law and Robustness

Computers are becoming faster
(Moore’s Law, 1965)

Software is becoming less robust
 crashes more often
 must solve larger/harder problems (e.g.
meshes, CAD)
 expectations are increasing

Inverse correlation
 is this inevitable?

Oct 18, 2001 Talk @ MIT 59


Reversing the Trend

Robustness: all-or-nothing ?

Computational Paradigm
 robustness as computational resource
 exchange some speed for robustness

Goal: a positive correlation between
Moore’s Law and robustness

Oct 18, 2001 Talk @ MIT 60


Robustness Trade-off
Curves

Each program P, for given input,
defines a family of speed-
robustness trade-off curves, one
curve for each CPU speed
[Chart: trade-off curves plotting speed (0 to 100) against
robustness (0 to 100), one curve each for a Pentium III 800MHz,
a Pentium III 500MHz, and an AMD-K6 600MHz]

•What is 100% robustness?

Oct 18, 2001 Talk @ MIT 61


Computing on the Curve

P only needs to be recompiled to
operate at any chosen point on a
curve.

With each new hardware upgrade,
we can automatically recompile
program to achieve the
“sweetspot” on the new curve

Oct 18, 2001 Talk @ MIT 62


Architecture

Automatically generated Makefiles
 a registry of programs that want
to join
 sample inputs (various sizes) for
each program
 robustness targets/parameters
 optimization function (e.g., min
acceptable speed)

Oct 18, 2001 Talk @ MIT 63


Hardware Change/Upgrade

For given hardware configuration, for
each program:
 compile different robust versions of the program
 run each on the test inputs to determine trade-
off curve
 optimize the parameters

Layer above the usual compiler

Application: libraries, CPU upgrades

Can be automatic and transparent (like
the effects of Moore’s law)

Oct 18, 2001 Talk @ MIT 64


Certification Paradigm

Oct 18, 2001 Talk @ MIT 65


Another Paradigm Shift

Motivation: programs based on machine
arithmetic are fast, but not always correct.

Floating point filter phenomenon

Need new framework for working
with ``useful but incorrect algorithms’’

Oct 18, 2001 Talk @ MIT 66


Verification vs. Checking

Program verification movement
(1970s)

Another reason why it failed:
 Algorithm: “always correct”
 H-algorithm: “always fast, often correct”

Checking paradigm [Blum,Kannan]
 use H-algorithms
 only check specific input/output pairs, not
programs.

Oct 18, 2001 Talk @ MIT 67


Checking vs. Certifying

Checker: given an input-output pair,
always accept or reject correctly

Certifier (or Filter): given an input-
output pair, either accept or “unsure”

Why certify?
 Certifiers are easier to design
 Nontrivial checkers may not be
known (e.g. sign of determinant)

Oct 18, 2001 Talk @ MIT 68


Filtered Algorithms

How to use Certifiers? Ingredients:
 Algorithm A
 H-algorithm H
 Filter or Certifier F

Basic Architecture
 run H, filter with F, run A if necessary
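A sketch of this architecture for the 2-D orientation predicate on integer coordinates (illustrative only): H is the plain double evaluation, F a semi-static certifier with a deliberately generous error constant, A the exact integer evaluation. Real filters (e.g. the FvW and BFS filters mentioned on the next slide) derive much tighter constants.

#include <cmath>
#include <cstdint>
#include <cstdio>

// Exact orientation (algorithm A): valid for |coordinates| < 2^30.
int orientExact(int64_t ax, int64_t ay, int64_t bx, int64_t by, int64_t cx, int64_t cy) {
    int64_t det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    return (det > 0) - (det < 0);
}

// Filtered orientation: fast double evaluation (H) + certifier (F) + exact fallback (A).
int orientFiltered(int32_t ax, int32_t ay, int32_t bx, int32_t by, int32_t cx, int32_t cy) {
    double p1 = double(bx - ax) * double(cy - ay);
    double p2 = double(by - ay) * double(cx - ax);
    double det = p1 - p2;
    // Semi-static filter: 1e-15 is a generous bound on the relative rounding
    // error accumulated in p1, p2 and their difference.
    double errBound = 1e-15 * (std::fabs(p1) + std::fabs(p2));
    if (std::fabs(det) > errBound)
        return (det > 0) - (det < 0);            // sign certified by the filter
    return orientExact(ax, ay, bx, by, cx, cy);  // rare slow path
}

int main() {
    // Exactly collinear points: det evaluates to 0, the filter cannot certify
    // a sign, and the exact path returns 0.
    std::printf("%d\n", orientFiltered(0, 0, 12345679, 98765431, 24691358, 197530862));
    return 0;
}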

Oct 18, 2001 Talk @ MIT 69


Some Floating Point Filters

FvW Filter, BFS Filter

Static, Semi-static and dynamic filters

Lemma:
 given expression E with k operations
and floating point inputs, can compute
an upper bound on E with 3k flops

Static case [FvW]: about 10 flops per
arithmetic operation

Oct 18, 2001 Talk @ MIT 70


Model for Filtering

Case: 2D Delaunay Triangulation
 Base line
 Top line
 sigma = top line/base line = 60
 phi = filtering performance = 20

efficacy = sigma/phi = 3

Synthetic vs. Realistic algorithms

beta factor
 = filterable fraction of work at top line

estimating beta [Mehlhorn et al]

Oct 18, 2001 Talk @ MIT 71


Extensions

Different levels of granularity
 Checking Geometric structures
[Mehlhorn, et al]
 Determinant filter [Pan]

Filter compiler [Schirra]

Cascaded bank of filters [Funke et al]
 skip A if necessary

Oct 18, 2001 Talk @ MIT 72


Conclusion

Oct 18, 2001 Talk @ MIT 73


REVIEW

Part I: NonRobustness Survey

Part II: Exact Geometric Computation
 Core Library
 Constructive Root Bounds

Part III: New Directions
 Robustness as Resource (or, how to
exploit Moore's Law)
 Certification Paradigm (or, how to use
incorrect algorithms)

Oct 18, 2001 Talk @ MIT 74


Summary

Non-robustness has major
economic/productivity impact

Scientifically sound and systematic
solution based on EGC is feasible

Main open problem:
 Is EGC possible for non-algebraic problems?
(CSP)

Program filtering goes beyond program
checking

Robustness as resource paradigm can be
exploited (with Moore’s Law)

Oct 18, 2001 Talk @ MIT 75


Download Software

Core Library Version 1.4

small and easy-to-use

Project Homepage:
http://cs.nyu.edu/exact/

Oct 18, 2001 Talk @ MIT 76


Last Slide

Oct 18, 2001 Talk @ MIT 77


Extras

Oct 18, 2001 Talk @ MIT 78


Algebraic Degree Bound

The expression E ∈ Q(r1, r2, ..., rk),
where the ri are the radical or root nodes
in the expression.

The degree bound is
 D(E) = ∏_{i=1..k} k_i ,
where k_i is either the index of a radical
node or the degree of a polynomial root
node.

Oct 18, 2001 Talk @ MIT 79


Leading Coefficients and
Conjugates Bound

Basic tool: resultant calculus

Case: given two algebraic numbers α and β with
minimal polynomials A(x) and B(x), a defining
polynomial for (e.g.) α + β is Res_y( A(y), B(x – y) )

 Represented in the Sylvester matrix form


 Leading coefficient:
 Constant term:
 Degree:

Oct 18, 2001 Talk @ MIT 80


(cont.)

Due to the admission of divisions,
we also need to bound:
 Tail coefficients tc(E )
 Lower bounds on conjugates v(E )


Measures M (E ) are involved in
bounding tail coefficients.

Oct 18, 2001 Talk @ MIT 81


End of Extra Slides

Oct 18, 2001 Talk @ MIT 82
