
21CSE255T - Computer Graphics and Animation

Unit IV
• Unit IV Visible-Surface Detection Methods 9 hrs
• Introduction - Classification of Visible-Surface Detection Algorithms - Back-Face Detection - Depth-Buffer Method - Buffer Methods - Scan-Line Method - Depth-Sorting Method - BSP-Tree Method - Area-Subdivision Method - Octree Methods - Ray-Casting Method - Curved Surfaces - Wireframe Methods - Visibility-Detection Functions - Illumination Models and Surface-Rendering Methods - Light Sources - Basic Illumination Models - Displaying Light Intensities - Halftone Patterns and Dithering Techniques - Polygon-Rendering Methods - Ray-Tracing Methods - Comparison of the Methods.
Depth Buffer Algorithm
• Depth-buffer method:
• Commonly used image-space approach to detecting visible surfaces
• Also known as the z-buffer method
• Compares surface depths at each pixel position on the projection plane
• Object depth typically measured from the view plane along the z-axis of a
viewing system
• Procedure:
• Each surface of a scene processed separately
• Processed one point at a time across the surface
• Applicability:
• Typically applied to scenes containing polygon surfaces
• Depth values can be computed quickly
• Easy to implement
• With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values.
• As implied by the name of this method, two buffer areas are required. A depth buffer stores the depth value for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity value for each position.
• Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity.
• Two buffers are used:
• Depth Buffer (z values)
• Refresh Buffer (intensity values)
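The two-buffer procedure above can be sketched in Python. This is a toy sketch with illustrative names; following the convention in these notes, the depth buffer starts at 0 and a larger z value means a point nearer the viewer:

```python
# Minimal z-buffer sketch (hypothetical helper names; assumes the
# convention above: depth buffer initialized to 0, larger z = closer).
WIDTH, HEIGHT = 4, 4
BACKGROUND = (0, 0, 0)

depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]           # z values
refresh_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]  # intensities

def plot(x, y, z, color):
    """Store the color only if this surface point is nearer than
    whatever has already been processed at this pixel."""
    if z > depth_buffer[y][x]:
        depth_buffer[y][x] = z
        refresh_buffer[y][x] = color

# Two overlapping surface points at the same pixel position:
plot(1, 1, 0.4, (255, 0, 0))   # farther surface, processed first
plot(1, 1, 0.7, (0, 255, 0))   # nearer surface wins the comparison
```

The first call stores the farther surface; the second overwrites it because its depth value is greater, regardless of processing order.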
A-Buffer (Accumulation Buffer) Method
• An extension of the ideas in the depth-buffer method is the A-buffer
method (at the other end of the alphabet from "z-buffer", where z
represents depth).
• The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method developed by Lucasfilm for implementation in the surface-rendering system called REYES (an acronym for "Renders Everything You Ever Saw").
• A drawback of the depth-buffer method is that it can find only one visible surface at each pixel position. In other words, it deals only with opaque surfaces and cannot accumulate intensity values for more than one surface, as is necessary if transparent surfaces are to be displayed (Fig. 13-8).
• The A-buffer method expands the depth buffer so that each position in the
buffer can reference a linked list of surfaces.
• Thus, more than one surface intensity can be taken into consideration at each
pixel position, and object edges can be antialiased.
• Each position in the A-buffer has two fields:
• depth field - stores a positive or negative real number
• intensity field - stores surface-intensity information or a pointer value
• If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage.
• If the depth field is negative, this indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data.
• Data for each surface in the linked list includes:
• RGB intensity components
• opacity parameter (percent of transparency)
• depth
• percent of area coverage
• surface identifier
• other surface-rendering parameters
• pointer to the next surface
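The two-field cell described above can be sketched as follows. This is a hypothetical Python representation; a plain list stands in for the linked list of surface data:

```python
# Sketch of one A-buffer cell: the sign of the depth field selects the
# interpretation of the intensity field (illustrative field names).
class SurfaceData:
    def __init__(self, rgb, opacity, depth, coverage):
        self.rgb = rgb            # RGB intensity components
        self.opacity = opacity    # opacity parameter
        self.depth = depth
        self.coverage = coverage  # percent of area coverage

def make_cell(surfaces):
    """Return (depth_field, intensity_field) for one pixel position."""
    if len(surfaces) == 1:
        s = surfaces[0]
        # positive depth: a single surface overlaps this pixel area
        return (s.depth, (s.rgb, s.coverage))
    # negative depth flags multiple contributions; the intensity field
    # then holds the (linked list of) surface records
    return (-1.0, list(surfaces))

cell = make_cell([SurfaceData((255, 0, 0), 0.5, 2.0, 1.0),
                  SurfaceData((0, 0, 255), 1.0, 3.5, 1.0)])
# cell[0] is negative, so cell[1] is the list of surface records
```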
Scan Line Method
• This image space method for removing hidden surface is an extension
of the scan-line algorithm for filling polygon interiors.
• Instead of filling just one surface, we now deal with multiple surfaces
• As each scan line is processed, all polygon surfaces intersecting that
line are examined to determine which are visible.
• Across each scan line, depth calculations are made for each
overlapping surface to determine which is nearest to the view plane.
When the visible surface has been determined, the intensity value for
that position is entered into the refresh buffer.
• It is an extension of the 2D polygon-filling algorithm.
• It performs its operations with three tables:
• Edge Table (X, Ymax, Δx, polygon ID)
• Active Edge Table (the edges intersected by the current scan line)
• Polygon Table (polygon ID, plane coefficients, shading info, IN/OUT flag)
• Example of an Active Edge Table:

Scan Line | Entries
L1 | AB, BC, EH, EF
L2 | AD, EH, BC, FG

• Edge-to-surface flags along these scan lines:
• L1: AB - S1, BC - S1, EH - S1&S2, EF - S2
• L2: AD - S2, EH - S2, BC - S1&S2, FG - S1&S2
• If the scan line passes through two overlapping surfaces, we must find their depths: the z values of the surfaces along the line are compared. If z(S1) < z(S2), then S1 is nearer and is highlighted.
• For example, from AD to BC, S1 is highlighted; from BC to FG, S2 is highlighted.
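The depth comparison across one scan line can be sketched as follows. This is a toy Python version that assumes each surface has a single representative depth and that crossing an edge simply toggles that surface's IN/OUT flag:

```python
# Toy scan-line visibility: between edge crossings we track which
# surfaces the scan line is "inside"; where more than one is active,
# the surface nearest the view plane (smallest z here) is highlighted.
def visible_spans(crossings, depths):
    """crossings: list of (x, surface_id) edge toggles sorted by x."""
    active = set()
    spans = []
    for i, (x, sid) in enumerate(crossings):
        active ^= {sid}                      # toggle the IN/OUT flag
        if active and i + 1 < len(crossings):
            nearest = min(active, key=lambda s: depths[s])
            spans.append((x, crossings[i + 1][0], nearest))
    return spans

# S1 spans x = 2..8, S2 spans x = 5..12; S1 is nearer (smaller depth)
spans = visible_spans([(2, 'S1'), (5, 'S2'), (8, 'S1'), (12, 'S2')],
                      {'S1': 1.0, 'S2': 2.0})
# spans: [(2, 5, 'S1'), (5, 8, 'S1'), (8, 12, 'S2')]
```

Where the two surfaces overlap (x = 5..8), S1 wins the depth comparison, mirroring the AD-to-BC example above.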
DEPTH-SORTING METHOD
• Using both image-space and object-space operations, the depth-sorting
method performs the following basic functions:
• 1. Surfaces are sorted in order of decreasing depth.
• 2. Surfaces are scan converted in order, starting with the surface of greatest depth.
• This method for solving the hidden-surface problem is often referred
to as the painter's algorithm.
• In creating an oil painting, an artist first paints the background colors.
Next, the most distant objects are added, then the nearer objects, and
so forth. At the final step, the foreground objects are painted on the
canvas over the background and other objects that have been painted
on the canvas.
• Each layer of paint covers up the previous layer. Using a similar technique,
we first sort surfaces according to their distance from the view plane.
• The intensity values for the farthest surface are then entered into the refresh
buffer. Taking each succeeding surface in turn (in decreasing depth order), we
"paint" the surface intensities onto the frame buffer over the intensities of the
previously processed surfaces.
• Painting polygon surfaces onto the frame buffer according to depth is carried
out in several steps.
• Assuming we are viewing along the -z direction, surfaces are ordered on the first pass according to the smallest z value on each surface.
• Surface S with the greatest depth is then compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlaps occur, S is scan converted.
• We make the following tests for each surface that overlaps with S. If any one of these tests is true, no reordering is necessary for that surface. The tests are listed in order of increasing difficulty:
• 1. The bounding rectangles in the xy plane for the two surfaces do not overlap.
• 2. Surface S is completely behind the overlapping surface relative to the viewing position.
• 3. The overlapping surface is completely in front of S relative to the viewing position.
• 4. The projections of the two surfaces onto the view plane do not overlap.
• If all four tests fail, the surfaces should be reordered; that is, S' is scan converted first, then S.
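The depth-overlap check and test 1 can be sketched as follows. This uses a hypothetical surface representation (an xy bounding rectangle plus a z range); the remaining tests require the actual plane equations and projections:

```python
# Sketch of the painter's-algorithm overlap checks (toy representation:
# each surface is (xy_rect, z_min, z_max) with axis-aligned bounds).
def rects_overlap(a, b):
    """Test 1: do the xy bounding rectangles overlap?"""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def needs_comparison(s, other):
    """Further tests are needed only when both the z ranges and the
    xy bounding rectangles overlap; otherwise no reordering occurs."""
    (s_rect, s_zmin, s_zmax), (o_rect, o_zmin, o_zmax) = s, other
    depth_overlap = s_zmin < o_zmax and o_zmin < s_zmax
    return depth_overlap and rects_overlap(s_rect, o_rect)

S  = ((0, 0, 4, 4), 1.0, 3.0)
S2 = ((2, 2, 6, 6), 2.0, 5.0)      # overlaps S in depth and in xy
S3 = ((10, 10, 12, 12), 2.0, 5.0)  # xy rectangles disjoint: test 1 passes
```

For S and S3, test 1 succeeds immediately and S can be scan converted without reordering; S and S2 would go on to the harder geometric tests.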
BSP-TREE METHOD
• A binary space-partitioning (BSP) tree is an efficient method for determining
object visibility by painting surfaces onto the screen from back to front, as in
the painter's algorithm.
• The BSP tree is particularly useful when the view reference point changes,
but the objects in a scene are at fixed positions.
• Applying a BSP tree to visibility testing involves identifying surfaces that
are "inside" and "outside" the partitioning plane at each step of the space
subdivision, relative to the viewing direction.
• The figure below illustrates the basic concept of this algorithm. With plane P1, we first partition the space into two sets of objects.
• One set of objects is behind, or in back of, plane P1 relative to the viewing direction, and the other set is in front of P1.
• Since one object is intersected by plane P1, we divide that object into two separate objects, labeled A and B. Objects A and C are in front of P1, and objects B and D are behind P1.
• We next partition the space again with plane P2 and construct the binary tree representation shown in the figure. In this tree, the objects are represented as terminal nodes, with front objects as left branches and back objects as right branches.
• For objects described with polygon facets, we choose the partitioning planes to coincide with the polygon planes.
• The polygon equations are then used to identify "inside" and "outside"
polygons, and the tree is constructed with one partitioning plane for each
polygon face.
• Any polygon intersected by a partitioning plane is split into two parts. When
the BSP tree is complete, we process the tree by selecting the surfaces for
display in the order back to front, so that foreground objects are painted over
the background objects.
• Fast hardware implementations for constructing and processing BSP trees are used in some systems.
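The back-to-front traversal can be sketched as follows. This is a toy Python version with axis-aligned partitioning planes of the form z = c; the object names stand in for the leaves of the tree in the figure:

```python
# Minimal BSP back-to-front traversal (hypothetical node layout: an
# internal node holds a plane plus front/back subtrees; leaves are
# object names).
class Node:
    def __init__(self, plane, front, back):
        self.plane, self.front, self.back = plane, front, back

def side_of(plane, eye):
    """+1 if the eye is on the front side of the plane z = plane."""
    return 1 if eye > plane else -1

def back_to_front(node, eye, out):
    """Visit the far subtree first, then the near one (painter's order)."""
    if isinstance(node, str):         # leaf: an object to paint
        out.append(node)
        return
    if side_of(node.plane, eye) > 0:  # eye in front: back subtree is farther
        back_to_front(node.back, eye, out)
        back_to_front(node.front, eye, out)
    else:
        back_to_front(node.front, eye, out)
        back_to_front(node.back, eye, out)

# Space split at z = 0; 'A' in front, 'B' behind; eye at z = +5
order = []
back_to_front(Node(0.0, 'A', 'B'), 5.0, order)
# order: ['B', 'A'] -- the farther object is painted first
```

Moving the eye to z = -5 reverses the order, which is why the BSP tree is useful when the view reference point changes but the objects are fixed.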
AREA-SUBDIVISION METHOD
• This technique for hidden-surface removal is essentially an image-space
method, but object-space operations can be used to accomplish depth ordering
of surfaces.
• We apply this method by successively dividing the total viewing area into smaller and smaller rectangles until each small area is the projection of part of a single visible surface or no surface at all.
• To implement this method, we need to establish tests that can quickly identify
the area as part of a single surface or tell us that the area is too complex to
analyze easily.
• Starting with the total view, we apply the tests to determine whether we should subdivide the total area into smaller rectangles. We then apply the tests to each of the smaller areas, subdividing these if the tests indicate that the visibility of a single surface is still uncertain.
• We continue this process until the subdivisions are easily analyzed as belonging to a single surface or until they are reduced to the size of a single pixel.
• An easy way to do this is to successively divide the area into four
equal parts at each step.
• Tests to determine the visibility of a single surface within a specified
area are made by comparing surfaces to the boundary of the area.
There are four possible relationships that a surface can have with a
specified area boundary. We can describe these relative surface
characteristics in the following way:
• Surrounding surface-One that completely encloses the area.
• Overlapping surface-One that is partly inside and partly outside the area.
• Inside surface-One that is completely inside the area
• Outside surface-One that is completely outside the area.

The tests for determining surface visibility within an area can be stated in
terms of these four classifications. No further subdivisions of a specified area are
needed if one of the following conditions is true:

1. All surfaces are outside surfaces with respect to the area.


2. Only one inside, overlapping, or surrounding surface is in the area.
3. A surrounding surface obscures all other surfaces within the area boundaries.
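A toy sketch of the four-way split and the classification above, with surfaces and areas simplified to axis-aligned rectangles:

```python
# Toy area subdivision: classify each surface against the area using
# the four categories above, and split the area into quarters until at
# most one relevant surface remains (or the area reaches pixel size).
def classify(surface, area):
    sx0, sy0, sx1, sy1 = surface
    ax0, ay0, ax1, ay1 = area
    if sx1 <= ax0 or sx0 >= ax1 or sy1 <= ay0 or sy0 >= ay1:
        return 'outside'
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return 'surrounding'
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return 'inside'
    return 'overlapping'

def subdivide(area, surfaces, min_size=1):
    """Stop when at most one relevant surface remains or area is tiny."""
    relevant = [s for s in surfaces if classify(s, area) != 'outside']
    ax0, ay0, ax1, ay1 = area
    if len(relevant) <= 1 or ax1 - ax0 <= min_size:
        return [area]
    mx, my = (ax0 + ax1) / 2, (ay0 + ay1) / 2
    quads = [(ax0, ay0, mx, my), (mx, ay0, ax1, my),
             (ax0, my, mx, ay1), (mx, my, ax1, ay1)]
    return [a for q in quads for a in subdivide(q, relevant, min_size)]

# Two disjoint surfaces in opposite corners: one split suffices,
# since each quadrant then holds at most one relevant surface.
areas = subdivide((0, 0, 8, 8), [(0, 0, 3, 3), (5, 5, 8, 8)])
```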
OCTREE METHODS
• When an octree representation is used for the viewing volume, hidden-surface
elimination is accomplished by projecting octree nodes onto the viewing
surface in a front-to-back order. In Fig. 13-24, the front face of a region of
space (the side toward the viewer) is formed with octants 0, 1, 2, and 3.
Surfaces in the front of these octants are visible to the viewer. Any surfaces
toward the rear of the front octants or in the back octants (4, 5, 6, and 7) may be
hidden by the front surfaces.
• Back surfaces are eliminated, for the viewing direction given in Fig. 13-24, by processing data elements in the octree nodes in the order 0, 1, 2, 3, 4, 5, 6, 7.
• This results in a depth-first traversal of the octree, so that nodes representing octants 0, 1, 2, and 3 for the entire region are visited before the nodes representing octants 4, 5, 6, and 7. Similarly, the nodes for the front four suboctants of octant 0 are visited before the nodes for the four back suboctants.
• When a color value is encountered in an octree node, the pixel area in the
frame buffer corresponding to this node is assigned that color value only if
no values have previously been stored in this area.
• A method for displaying an octree is first to map the octree onto a quadtree of visible areas by traversing the octree nodes from front to back in a recursive procedure.
• Then the quadtree representation for the visible surfaces is loaded into the
frame buffer. Figure 13-25 depicts the octants in a region of space and the
corresponding quadrants on the view plane. Contributions to quadrant 0
come from octants 0 and 4. Color values in quadrant 1 are obtained from
surfaces in octants 1 and 5, and values in each of the other two quadrants
are generated from the pair of octants aligned with each of these quadrants.
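The first-write-wins rule described above can be sketched as follows. This is a toy Python version; the mapping of front/back octant pairs onto shared view-plane quadrants is deliberately simplified:

```python
# Toy front-to-back octree write: a leaf is a color (or None for empty);
# an internal node is a list of eight children already ordered 0..7
# front-to-back for the current view (hypothetical node layout).
def subregions(region):
    # octants 0..3 (front) and 4..7 (back) project onto the same four
    # view-plane quadrants, so each front/back pair shares a pixel set
    quads = [region[:1], region[1:2], region[2:3], region[3:4]]
    return quads + quads

def write_front_to_back(node, region, buffer):
    """Assign a color to a pixel only if nothing is stored there yet."""
    if node is None:
        return
    if not isinstance(node, list):        # leaf holding a color value
        for xy in region:
            if xy not in buffer:          # first (nearest) write wins
                buffer[xy] = node
        return
    for child, sub in zip(node, subregions(region)):
        write_front_to_back(child, sub, buffer)

buf = {}
# front octant 0 is red; back octant 4 (same quadrant) is blue
write_front_to_back(['red', None, None, None, 'blue', None, None, None],
                    [(0, 0), (1, 0), (0, 1), (1, 1)], buf)
# quadrant (0, 0) keeps 'red'; 'blue' is hidden behind it
```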
Ray Casting Method
• If we consider the line of sight from a pixel position on the view plane through a scene, as in Fig. 13-26, we can determine which objects in the scene (if any) intersect this line.
• After calculating all ray surface intersections,
we identify the visible surface as the one whose
intersection point is closest to the pixel.
• This visibility detection scheme uses
ray-casting procedures.
• Ray casting, as a visibility detection tool, is
based on geometric optics methods, which trace
the paths of light rays.
• Since there are an infinite number of light rays
in a scene and we are interested only in those
rays that pass through pixel positions, we can
trace the light-ray paths backward from the
pixels through the scene.
• The ray-casting approach is an effective visibility-detection method for scenes
with curved surfaces, particularly spheres.
• We can think of ray casting as a variation on the depth-buffer method.
• In the depth-buffer algorithm, we process surfaces one at a time and calculate
depth values for all projection points over the surface.
• The calculated surface depths are then compared to previously stored depths to
determine visible surfaces at each pixel.
• In ray-casting, we process pixels one at a time and calculate depths for all
surfaces along the projection path to that pixel.
• Ray casting is a special case of ray tracing, which traces multiple ray paths to pick up global reflection and refraction contributions from multiple objects in a scene.
• With ray casting, we only follow a ray out from each pixel to the nearest object. Efficient ray-surface intersection calculations have been developed for common objects, particularly spheres.
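A ray-sphere intersection sketch in Python (the standard quadratic solution; the smallest non-negative root gives the visible intersection for that pixel ray):

```python
import math

# Ray-sphere intersection: solve |origin + t*direction - center|^2 = r^2
# for t, and return the nearest hit in front of the ray origin.
def ray_sphere(origin, direction, center, radius):
    """Return the smallest t >= 0 where the ray meets the sphere,
    or None if the ray misses it."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t >= 0 else None

# Pixel ray along +z from the view plane toward a sphere at z = 5, r = 1
t = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
# t == 4.0: the first hit is the front of the sphere
```

Among all objects hit by the ray, the one with the smallest such t is the visible surface for that pixel.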
CURVED SURFACES
• Effective methods for determining visibility for objects with curved surfaces
include ray-casting and octree methods.
• With ray casting, we calculate ray surface intersections and locate the smallest
intersection distance along the pixel ray.
• With octrees, once the representation has been established from the input
definition of the objects, all visible surfaces are identified with the same
processing procedures.
• No special considerations need be given to different kinds of curved surfaces.
Curved-Surface Representations
• We can represent a surface with an implicit equation of the form f(x, y, z) =
0 or with a parametric representation.
• Spline surfaces, for instance, are normally described with parametric
equations.
• In some cases, it is useful to obtain an explicit surface equation, as, for example, a height function over an xy ground plane: z = f(x, y).
• Many objects of interest, such as spheres, ellipsoids, cylinders, and cones, have quadratic representations.
• These surfaces are commonly used to model molecular structures, roller bearings, rings, and shafts.
Surface Contour Plots
• For many applications in mathematics, physical sciences, engineering and other fields, it is
useful to display a surface function with a set of contour lines that show the surface shape.
• The surface may be described with an equation or with data tables, such as topographic data on
elevations or population density.
• With an explicit functional representation, we can plot the visible surface contour lines and
eliminate those contour sections that are hidden by the visible parts of the surface.
• To obtain an xy plot of a functional surface, we write the surface representation in the form y = f(x, z).
• A curve in the xy plane can then be plotted for values of z within some selected range, using a specified interval Δz. Starting with the largest value of z, we plot the curves from "front" to "back" and eliminate hidden sections.
• We draw the curve sections on the screen by mapping an xy range for the function into an xy
pixel screen range.
• Then, unit steps are taken in x and the corresponding y value for each x value is determined from
Eq. 13-8 for a given value of z.
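The front-to-back elimination step can be sketched as follows. This is a toy one-sided "floating horizon"; real contour plotters track both an upper and a lower horizon:

```python
# Toy front-to-back contour plotting: a sample of a farther curve is
# drawn only where it rises above every nearer curve already drawn at
# that x position.
def plot_contours(f, xs, zs_front_to_back):
    horizon = {x: float('-inf') for x in xs}
    visible = []
    for z in zs_front_to_back:          # largest ("front") z first
        for x in xs:
            y = f(x, z)
            if y > horizon[x]:          # above the horizon: visible
                visible.append((x, y, z))
                horizon[x] = y
    return visible

# y = f(x, z) = z: the front curve (z = 2) hides the whole z = 1 curve
pts = plot_contours(lambda x, z: z, [0, 1, 2], [2, 1])
```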
WIREFRAME METHODS
• When only the outline of an object is to be displayed, visibility tests are
applied to surface edges.
• Visible edge sections are displayed, and hidden edge sections can either be
eliminated or displayed differently from the visible edges.
• For example, hidden edges could be drawn as dashed lines, or we could use
depth cueing to decrease the intensity of the lines as a linear function of
distance from the view plane.
• Procedures for determining the visibility of object edges are referred to as wireframe-visibility methods. They are also called visible-line detection methods or hidden-line detection methods.
VISIBILITY-DETECTION FUNCTIONS
• A table of available methods is listed at each installation, and a particular visibility-detection method is selected with the hidden-line-hidden-surface-removal (HLHSR) function.
• A particular function can then be invoked with its procedure name, such as backFace or depthBuffer.
Specular Reflection
• When we look at an illuminated shiny surface, such as polished metal, an
apple, or a person's forehead, we see a highlight, or bright spot, at certain
viewing directions.
• This phenomenon, called specular reflection, is the result of total, or near
total, reflection of the incident light in a concentrated region around the
specular reflection angle.
POLYGON-RENDERING METHODS
• Illumination - How to color single point
• Shading – How to color whole object
• Constant-Intensity Shading or Flat Shading - each entire polygon is drawn with the same color; one light calculation per polygon
• Color is computed once for each polygon
• Drawback: Mach band effect (intensity changes at polygon boundaries appear exaggerated)
• A fast and simple method for rendering an object with polygon surfaces is constant-intensity shading, also called flat shading. In this method, a single intensity is calculated for each polygon. All points over the surface of the polygon are then displayed with the same intensity value.
• Gouraud Shading (color interpolation technique)
• Color is computed once per vertex using the vertex normal
• Colors are interpolated across the polygon
• Drawback: slower than flat shading, and it can miss highlights that occur in the middle of a polygon
• Phong Shading (normal interpolation shading; normals are interpolated across the polygon)
• Fast Phong Shading
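The difference between flat and Gouraud shading along one scan-line span can be sketched as follows (a toy 1-D Python version; colors are RGB tuples, and the per-vertex colors are assumed to have been computed already from the vertex normals):

```python
# Flat shading: one color for the whole span (one light calculation
# per polygon). Gouraud shading: vertex colors linearly interpolated
# across the span.
def flat_span(color, n):
    return [color] * n

def gouraud_span(c0, c1, n):
    """Linearly interpolate vertex colors c0 -> c1 across n pixels."""
    return [tuple(round(a + (b - a) * i / (n - 1)) for a, b in zip(c0, c1))
            for i in range(n)]

flat = flat_span((100, 100, 100), 5)
grad = gouraud_span((0, 0, 0), (200, 100, 0), 5)
# grad: [(0, 0, 0), (50, 25, 0), (100, 50, 0), (150, 75, 0), (200, 100, 0)]
```

The flat span shows why Mach bands appear: adjacent polygons each get one constant color, so the boundary between spans is a visible step; the Gouraud span smooths that step but can still miss a highlight that falls between the vertices.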
