
UNIT III

THREE DIMENSIONAL GRAPHICS


Three dimensional concepts; Three dimensional object representations – Polygon surfaces – Polygon tables – Plane equations – Polygon meshes; Curved lines and surfaces; Quadric surfaces; Blobby objects; Spline representations – Bezier curves and surfaces – B-spline curves and surfaces. TRANSFORMATION AND VIEWING: Three dimensional geometric and modeling transformations – Translation, Rotation, Scaling, Composite transformations; Three dimensional viewing – Viewing pipeline, Viewing coordinates, Projections, Clipping; Visible surface detection methods.

THREE DIMENSIONAL CONCEPTS

Three Dimensional Display Methods

1. To obtain a display of a three dimensional scene that has been modeled in world coordinates, we must set up a coordinate reference for the 'camera'.
2. This coordinate reference defines the position and orientation for the plane of
the camera film which is the plane we want to use to display a view of the
objects in the scene.
3. Object descriptions are then transferred to the camera reference coordinates
and projected onto the selected display plane.
4. The objects can be displayed in wire frame form, or we can apply lighting and
surface rendering techniques to shade the visible surfaces.

Parallel Projection:

1. Parallel projection is a method for generating a view of a solid object by projecting points on the object surface along parallel lines onto the display plane.
2. In parallel projection, parallel lines in the world coordinate scene project into parallel lines on the two dimensional display plane.
3. This technique is used in engineering and architectural drawings to represent
an object with a set of views that maintain relative proportions of the object.
4. The appearance of the solid object can be reconstructed from the major views.
Figure: Three parallel projection views of an object, showing relative proportions from different viewing positions.

Perspective Projection:

1. Perspective projection is a method for generating a view of a three dimensional scene by projecting points to the display plane along converging paths.
2. This makes objects further from the viewing position be displayed smaller than objects of the
same size that are nearer to the viewing position.
3. In a perspective projection, parallel lines in a scene that are not parallel to the display plane
are projected into converging lines.
4. Scenes displayed using perspective projections appear more realistic, since this is the way
that our eyes and a camera lens form images.
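As a rough illustration of these converging projection paths (a sketch, not part of the text), the following Python fragment assumes the projection reference point is at the viewing origin and the view plane lies at z = d, and applies the standard perspective division:

# Minimal perspective projection sketch (assumes the projection reference point
# is at the origin and the view plane is at z = d along the viewing axis).
def perspective_project(point, d=1.0):
    x, y, z = point
    if z == 0:
        raise ValueError("point lies at the projection reference point depth")
    # Points with larger z are divided by a larger factor, so distant objects
    # of the same size appear smaller than nearby ones.
    return (x * d / z, y * d / z)

near = perspective_project((2.0, 1.0, 2.0))   # -> (1.0, 0.5)
far = perspective_project((2.0, 1.0, 4.0))    # -> (0.5, 0.25): smaller on screen
print(near, far)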

Depth Cueing:

1. Depth information is important for identifying the viewing direction, that is, which is the front and which is the back of a displayed object.
2. Depth cueing is a method for indicating depth in wireframe displays by varying the intensity of objects according to their distance from the viewing position.
3. Depth cueing is applied by choosing maximum and minimum intensity (or color) values and a range of distances over which the intensities are to vary.
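A minimal sketch of such a depth-cueing function (illustrative values, not from the text): the intensity factor is interpolated linearly between the chosen maximum at the minimum distance and the chosen minimum at the maximum distance.

# Linear depth cueing: intensity falls from i_max at d_min to i_min at d_max.
def depth_cue(intensity, depth, d_min, d_max, i_min=0.2, i_max=1.0):
    depth = max(d_min, min(d_max, depth))        # clamp to the cueing range
    t = (d_max - depth) / (d_max - d_min)        # 1.0 for nearest, 0.0 for farthest
    return intensity * (i_min + t * (i_max - i_min))

print(depth_cue(1.0, depth=1.0, d_min=1.0, d_max=10.0))    # near: full intensity
print(depth_cue(1.0, depth=10.0, d_min=1.0, d_max=10.0))   # far: dimmed to i_min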

Visible line and surface identification:

1. The simplest way to identify visible lines is to highlight them or to display them in a different color.
2. Another method is to display the non-visible lines as dashed lines.

Surface Rendering:

1. Surface rendering methods are used to generate a degree of realism in a displayed scene.

2. Realism is attained in displays by setting the surface intensity of objects according to the
lighting conditions in the scene and surface characteristics.

3. Lighting conditions include the intensity and positions of light sources and the background
illumination.
4. Surface characteristics include degree of transparency and how rough or smooth the surfaces
are to be.

Exploded and Cutaway views:

1. Exploded and cutaway views of objects can be used to show the internal structure and relationships of the objects' parts.
2. An alternative to exploding an object into its component parts is the cut away view which
removes part of the visible surfaces to show internal structure.

Three-dimensional and Stereoscopic Views:

1. Three dimensional views can be obtained by reflecting a raster image from a vibrating flexible mirror.
2. The vibrations of the mirror are synchronized with the display of the scene on the CRT.
3. As the mirror vibrates, the focal length varies so that each point in the scene is projected to a
position corresponding to its depth.
4. Stereoscopic devices present two views of a scene; one for the left eye and the other for the
right eye.
5. The two views are generated by selecting viewing positions that correspond to the two eye positions of a single viewer.
6. These two views can be displayed on alternate refresh cycles of a raster monitor, and viewed
through glasses that alternately darken first one lens then the other in synchronization with
the monitor refresh cycles.

THREE DIMENSIONAL OBJECT REPRESENTATIONS

Representation schemes for solid objects are divided into two categories as follows:

1. Boundary Representation (B-reps)

It describes a three dimensional object as a set of surfaces that separate the object interior from
the environment. Examples are polygon facets and spline patches.

2. Space partitioning representation

It describes the interior properties, by partitioning the spatial region containing an object into a
set of small, non-overlapping, contiguous solids (usually cubes). Eg: Octree Representation

Polygon Surfaces
The most common boundary representation for a 3D graphics object is a set of surface polygons that enclose the object interior.
Polygon Tables
1. The polygon surface is specified with a set of vertex coordinates and associated attribute
parameters.
2. For each polygon input, the data are placed into tables that are to be used in the subsequent
processing.

3. Polygon data tables can be organized into two groups: Geometric tables and attribute tables.

Geometric Tables: Contain vertex coordinates and parameters to identify the spatial orientation
of the polygon surfaces.
Attribute tables: Contain attribute information for an object such as parameters specifying the
degree of transparency of the object and its surface reflectivity and texture characteristics. A
convenient organization for storing geometric data is to create three lists:
1. The Vertex Table: Coordinate values for each vertex in the object are
stored in this table.
2. The Edge Table: It contains pointers back into the vertex table to identify the vertices for each polygon edge.
3. The Polygon Table: It contains pointers back into the edge table to identify the edges for each polygon.
This is shown in the figure below.

Vertex Table           Edge Table          Polygon Surface Table

V1 : X1, Y1, Z1        E1 : V1, V2         S1 : E1, E2, E3
V2 : X2, Y2, Z2        E2 : V2, V3         S2 : E3, E4, E5, E6
V3 : X3, Y3, Z3        E3 : V3, V1
V4 : X4, Y4, Z4        E4 : V3, V4
V5 : X5, Y5, Z5        E5 : V4, V5
                       E6 : V5, V1

1. Listing the geometric data in three tables provides a convenient reference to the individual
components (vertices, edges and polygons) of each object.
2. The object can be displayed efficiently by using data from the edge table to draw the
component lines.

3. Extra information can be added to the data tables for faster information extraction. For instance, the edge table can be expanded to include forward pointers into the polygon table so that common edges between polygons can be identified more rapidly.

E1: V1, V2, S1


E2: V2, V3, S1
E3 : V3, V1, S1, S2
E4 : V3, V4, S2
E5 : V4, V5, S2
E6 : V5, V1, S2
4. This is useful for the rendering procedure that must vary surface shading smoothly across the
edges from one polygon to the next. Similarly, the vertex table can be expanded so that
vertices are cross-referenced to corresponding edges.
5. Additional geometric information that is stored in the data tables includes the slope of each edge and the coordinate extents of each polygon. As vertices are input, we can calculate edge slopes, and we can scan the coordinate values to identify the minimum and maximum x, y and z values for individual polygons.
6. The more information included in the data tables, the easier it is to check for errors.
7. Some of the tests that could be performed by a graphics package are:
   - that every vertex is listed as an endpoint for at least two edges,
   - that every edge is part of at least one polygon,
   - that every polygon is closed,
   - that each polygon has at least one shared edge, and
   - that, if the edge table contains pointers to polygons, every edge referenced by a polygon pointer has a reciprocal pointer back to the polygon.
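The tables above can be sketched directly as simple data structures. The following Python fragment is an illustrative layout (the coordinate values are made up) that also carries the expanded forward pointers from edges into the polygon table:

# Geometric tables for the two-polygon example: S1 is a triangle and S2 a
# quadrilateral sharing edge E3 with it. Coordinates are placeholder values.
vertex_table = {
    "V1": (0.0, 0.0, 0.0), "V2": (1.0, 0.0, 0.0), "V3": (1.0, 1.0, 0.0),
    "V4": (1.0, 2.0, 0.0), "V5": (0.0, 2.0, 0.0),
}
# Each edge points back into the vertex table; the list holds forward pointers
# into the polygon table, so shared edges such as E3 list both surfaces.
edge_table = {
    "E1": ("V1", "V2", ["S1"]),        "E2": ("V2", "V3", ["S1"]),
    "E3": ("V3", "V1", ["S1", "S2"]),  "E4": ("V3", "V4", ["S2"]),
    "E5": ("V4", "V5", ["S2"]),        "E6": ("V5", "V1", ["S2"]),
}
# Each polygon points back into the edge table.
polygon_table = {"S1": ["E1", "E2", "E3"], "S2": ["E3", "E4", "E5", "E6"]}

# One of the consistency tests listed above: every edge is part of at least one polygon.
for name, (_, _, surfaces) in edge_table.items():
    assert surfaces, f"edge {name} is not part of any polygon"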
Plane Equations

1. To produce a display of a 3D object, the spatial orientation of each polygon surface must be known. This orientation is described by the plane equation
   Ax + By + Cz + D = 0
   where (x, y, z) is any point on the plane and A, B, C, D are constants describing the spatial properties of the plane.
2. The coefficients A, B and C are the components of the normal vector N = (A, B, C) to the plane. They can be computed from the coordinates of three noncollinear vertices of the polygon, and D then follows from any one vertex.
3. The plane equation is also used to identify the position of an arbitrary point relative to the surface: if Ax + By + Cz + D < 0 the point (x, y, z) is inside (behind) the surface, and if Ax + By + Cz + D > 0 it is outside (in front of) the surface. These tests are used later in clipping and in visible surface detection.
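As a sketch of how the plane coefficients might be computed (a standard cross-product construction; the function name and sample data are illustrative), A, B and C are taken from the normal formed by two edge vectors of the polygon, and D follows from any vertex on the plane:

# Compute plane coefficients A, B, C, D from three noncollinear vertices.
def plane_coefficients(p1, p2, p3):
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    x3, y3, z3 = p3
    # Normal N = (p2 - p1) x (p3 - p1); its components are A, B, C.
    a = (y2 - y1) * (z3 - z1) - (z2 - z1) * (y3 - y1)
    b = (z2 - z1) * (x3 - x1) - (x2 - x1) * (z3 - z1)
    c = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)
    d = -(a * x1 + b * y1 + c * z1)   # so that A*x + B*y + C*z + D = 0 on the plane
    return a, b, c, d

A, B, C, D = plane_coefficients((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(A, B, C, D)                     # the plane z = 0 gives (0, 0, 1, 0)
print(A * 0 + B * 0 + C * 5 + D)      # positive: the point (0, 0, 5) is outside (in front)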
Polygon Meshes

1. A single plane surface can be specified with a function such as fillArea. But when object
surfaces are to be tiled, it is more convenient to specify the surface facets with a mesh
function.
2. One type of polygon mesh is the triangle strip. A triangle strip formed with 11 triangles
connecting 13 vertices.

3. This function produces n-2 connected triangles given the coordinates for n vertices.
4. Another similar function is the quadrilateral mesh, which generates a mesh of (n-1) by (m-1) quadrilaterals, given the coordinates for an n by m array of vertices. The figure shows 20 vertices forming a mesh of 12 quadrilaterals.
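A brief sketch of how a triangle-strip style function can derive its n - 2 triangles from an ordered list of n vertices (illustrative only; real mesh functions also carry the vertex coordinates):

# Build the n-2 triangles of a triangle strip from n ordered vertex indices.
def triangle_strip(n):
    triangles = []
    for k in range(n - 2):
        # Triangle k uses vertices k, k+1, k+2; every other triangle is flipped
        # so that all triangles keep a consistent winding order.
        if k % 2 == 0:
            triangles.append((k, k + 1, k + 2))
        else:
            triangles.append((k + 1, k, k + 2))
    return triangles

print(len(triangle_strip(13)))   # 11 triangles from 13 vertices, as in the figure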

Curved Lines and Surfaces


1. Displays of three dimensional curved lines and surface can be generated from an input set of
mathematical functions defining the objects or from a set of user specified data points.
2. When functions are specified, a package can project the defining equations for a curve to the
display plane and plot pixel positions along the path of the projected function.
3. For surfaces, a functional description is typically tessellated to produce a polygon-mesh approximation to the surface.
Quadric Surfaces
1. Sphere
2. Ellipsoid
3. Torus

Sphere
1. In Cartesian coordinates, a spherical surface with radius r centered on the coordinate origin is defined as the set of points (x, y, z) that satisfy the equation
   x^2 + y^2 + z^2 = r^2    (1)
2. The spherical surface can also be represented in parametric form, using latitude and longitude angles φ and θ:
   x = r cos φ cos θ,    -π/2 <= φ <= π/2
   y = r cos φ sin θ,    -π <= θ <= π
   z = r sin φ    (2)
3. The parametric representation in eqn (2) provides a symmetric range for the angular parameters θ and φ.
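A short sketch that samples surface points from the parametric form in eqn (2); the numbers of latitude and longitude steps are arbitrary choices:

import math

def sphere_points(r, n_lat=8, n_lon=16):
    # Sample points on a sphere of radius r using latitude (phi) and longitude (theta).
    points = []
    for i in range(n_lat + 1):
        phi = -math.pi / 2 + math.pi * i / n_lat          # -pi/2 .. pi/2
        for j in range(n_lon):
            theta = -math.pi + 2 * math.pi * j / n_lon    # -pi .. pi
            x = r * math.cos(phi) * math.cos(theta)
            y = r * math.cos(phi) * math.sin(theta)
            z = r * math.sin(phi)
            points.append((x, y, z))
    return points

pts = sphere_points(1.0)
# Every sampled point also satisfies eqn (1): x^2 + y^2 + z^2 = r^2.
assert all(abs(x * x + y * y + z * z - 1.0) < 1e-9 for x, y, z in pts)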

Ellipsoid
An ellipsoid surface is an extension of a spherical surface in which the radii in three mutually perpendicular directions can have different values. In Cartesian coordinates it is described by
   (x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1

Torus
1. Torus is a doughnut shaped object.
2. It can be generated by rotating a circle or other conic about a specified axis.

Blobby Objects- A Collection of Density Functions
1. By a blobby object we mean a nonrigid object, that is, things like cloth, rubber, liquids, water droplets, etc.
2. These objects tend to exhibit a degree of fluidity.
3. For example, in a chemical compound electron density clouds tend to be distorted by the
presence of other atoms/molecules.
4. Several models have been developed to handle these kinds of objects.

5. One technique is to use a combination of Gaussian density functions (Gaussian bumps).

6. Another technique, called the meta-ball technique, is to describe the object as being made of density functions much like balls. The advantage here is that the density function falls off to zero in a finite interval.
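A rough sketch of the meta-ball idea (the falloff function and constants are illustrative, not a specific published model): each ball contributes a density that drops to zero within a finite radius, the contributions are summed, and the object surface is taken where the total density reaches a threshold.

def ball_density(point, centre, radius):
    # Smooth bump that is 1 at the centre and falls off to 0 at the given radius.
    d2 = sum((p - c) ** 2 for p, c in zip(point, centre))
    r2 = radius * radius
    if d2 >= r2:
        return 0.0                      # no contribution outside the finite interval
    return (1.0 - d2 / r2) ** 2

def inside_blobby(point, balls, threshold=0.5):
    # The blobby object is the region where the summed density exceeds the threshold.
    return sum(ball_density(point, c, r) for c, r in balls) >= threshold

balls = [((0.0, 0.0, 0.0), 1.0), ((1.2, 0.0, 0.0), 1.0)]   # two overlapping blobs
print(inside_blobby((0.6, 0.0, 0.0), balls))                # True: the blobs merge here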

SPLINE REPRESENTATIONS

1. A Spline is a flexible strip used to produce a smooth curve through a designated set of points.
2. Several small weights are distributed along the length of the strip to hold it in position on the
drafting table as the curve is drawn.
3. The term spline curve refers to any composite curve formed with polynomial sections satisfying specified continuity conditions at the boundaries of the pieces.
4. A spline surface can be described with two sets of orthogonal spline curves.
5. Splines are used in graphics applications to design curve and surface shapes, to digitize drawings for computer storage, and to specify animation paths for the objects or the camera in a scene. CAD applications for splines include the design of automobile bodies, aircraft and spacecraft surfaces, and ship hulls.

Interpolation and Approximation Splines


1. A spline curve is specified by a set of coordinate positions, called control points, which indicate the general shape of the curve.
2. These control points are then fitted with piecewise continuous parametric polynomial functions in one of two ways.
3. When polynomial sections are fitted so that the curve passes through each control point the
resulting curve is said to interpolate the set of control points.
A set of six control points interpolated with piecewise continuous
polynomial sections

4. When the polynomials are fitted to the general control point path without necessarily passing
through any control points, the resulting curve is said to approximate the set of control
points.

A set of six control points approximated with piecewise continuous polynomial sections

5. Interpolation curves are used to digitize drawings or to specify animation paths.


6. Approximation curves are used as design tools to structure object surfaces.
7. A spline curve is designed, modified and manipulated with operations on the control points.
The curve can be translated, rotated or scaled with transformation applied to the control
points.
8. The convex polygon boundary that encloses a set of control points is called the convex hull.
9. One way to envision the shape of the convex hull is to imagine a rubber band stretched around the positions of the control points, so that each control point is either on the perimeter of the hull or inside it.

Parametric Continuity Conditions


1. For a smooth transition from one section of a piecewise parametric curve to the next, various continuity conditions are needed at the connection points.
2. If each section of a spline is described with a set of parametric coordinate functions of the form
   x = x(u), y = y(u), z = z(u),    u1 <= u <= u2    (a)
3. We set parametric continuity by matching the parametric derivatives of adjoining curve
sections at their common boundary.

4. Zero order parametric continuity, referred to as C0 continuity, means that the curves meet; that is, the values of x, y, and z evaluated at u2 for the first curve section are equal, respectively, to the values of x, y, and z evaluated at u1 for the next curve section.
5. First order parametric continuity referred to as C1 continuity means that the first
parametric derivatives of the coordinate functions in equation (a) for two successive curve
sections are equal at their joining point.
6. Second order parametric continuity, or C2 continuity means that both the first and second
parametric derivatives of the two curve sections are equal at their intersection.
7. Higher order parametric continuity conditions are defined similarly.

Geometric Continuity Conditions


1. An alternate method for joining two successive curve sections is to specify conditions for geometric continuity.
2. The parametric derivatives of the two sections should be proportional to each other at their
common boundary instead of equal to each other.
3. Zero order geometric continuity, referred to as G0 continuity, means that the two curve sections must have the same coordinate position at the boundary point.
4. First order geometric continuity, referred to as G1 continuity, means that the parametric first derivatives are proportional at the intersection of two successive sections.

5. Second order geometric continuity, referred to as G2 continuity, means that both the first and second parametric derivatives of the two curve sections are proportional at their boundary. Here the curvatures of the two sections match at the joining position.

Bezier Curves

1. A Bezier curve section can be fitted to any number of control points. Given n + 1 control point positions pk, with k = 0 to n, the control points are blended to produce the position vector P(u), which describes the path of the curve between p0 and pn:
   P(u) = Σ (k = 0 to n)  pk BEZ k,n(u),    0 <= u <= 1
   where the Bezier blending functions BEZ k,n(u) are the Bernstein polynomials
   BEZ k,n(u) = C(n, k) u^k (1 - u)^(n - k)
   and C(n, k) are the binomial coefficients.
2. The degree of the polynomial is one less than the number of control points: three points generate a parabola, four points a cubic curve, and so on.
3. A Bezier curve always passes through the first and last control points, and it lies within the convex hull of its control points.
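A compact sketch evaluating curve points directly from the Bernstein form above (2D control points for brevity; a 3D curve applies the same blending to z as well):

from math import comb

def bezier_point(control_points, u):
    # P(u) = sum over k of p_k * C(n, k) * u^k * (1 - u)^(n - k)
    n = len(control_points) - 1
    x = y = 0.0
    for k, (px, py) in enumerate(control_points):
        blend = comb(n, k) * (u ** k) * ((1 - u) ** (n - k))   # Bernstein polynomial
        x += px * blend
        y += py * blend
    return (x, y)

cubic = [(0, 0), (1, 2), (3, 2), (4, 0)]   # four control points -> cubic curve
print(bezier_point(cubic, 0.0))             # (0.0, 0.0): passes through p0
print(bezier_point(cubic, 1.0))             # (4.0, 0.0): passes through p3
print(bezier_point(cubic, 0.5))             # an interior point inside the convex hull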
Bezier Surfaces
1. Two sets of orthogonal Bézier curves are used.
2. The surface is formed as the Cartesian product of Bezier blending functions:
   P(u, v) = Σ (j = 0 to m) Σ (k = 0 to n)  p j,k BEZ j,m(v) BEZ k,n(u)
   with p j,k specifying the location of the (m + 1) by (n + 1) control points.
Bezier Patches
1. A common form is to approximate larger surfaces by tiling them with cubic Bezier patches (m = n = 3).
2. Each patch is then defined by a 4 by 4 array of 16 control points.

B-Spline Curves
1. B-splines are another class of polynomial curves for modelling curves and surfaces.
2. A B-spline consists of curve segments whose polynomial coefficients depend on just a few control points, which gives local control over the curve shape.
3. The segments are joined at points called knots.
4. The curve does not necessarily pass through the control points.
5. The shape is constrained to the convex hull made by the control points.
6. Uniform cubic B-splines have C2 continuity, which is higher than that of Hermite or Bezier curves.

Basis Functions

1. We can create a long curve using many knots and B-splines
2. The unweighted cubic B-Splines have been shown for clarity.
3. These are weighted and summed to produce a curve of the desired shape
Generating a curve

The basic one: Uniform Cubic B-Splines

Cubic B-splines with a uniform knot vector are the most commonly used form of B-splines.

x(t) = t^T M Q(i),    for t[i] <= t < t[i+1]

where Q(i) = ( x[i-3], x[i-2], x[i-1], x[i] ) holds the four control values for the segment,

              | -1   3  -3   1 |
   M = (1/6)  |  3  -6   3   0 |
              | -3   0   3   0 |
              |  1   4   1   0 |

   t^T = ( (t - t[i])^3, (t - t[i])^2, (t - t[i]), 1 )

   t[i] : knots,  3 <= i
THREE DIMENSIONAL GEOMETRIC AND MODELING TRANSFORMATIONS

Translation:

1. A point P = (x, y, z) is translated to a new position P' = (x', y', z') by adding the translation distances tx, ty, tz:
   x' = x + tx,   y' = y + ty,   z' = z + tz
2. In homogeneous coordinates this is written P' = T · P, with the 4 by 4 translation matrix

       | 1   0   0   tx |
   T = | 0   1   0   ty |
       | 0   0   1   tz |
       | 0   0   0   1  |

Rotation:

1. A three dimensional rotation is specified by a rotation axis and a rotation angle θ. For rotation about the z axis,
   x' = x cos θ - y sin θ,   y' = x sin θ + y cos θ,   z' = z
   with the rotation matrix

        | cos θ   -sin θ   0   0 |
   Rz = | sin θ    cos θ   0   0 |
        |   0        0     1   0 |
        |   0        0     0   1 |

2. Rotations about the x axis and the y axis are obtained by cyclically permuting the coordinate parameters x, y and z in these equations; rotation about an arbitrary axis is built as a composite of translations and coordinate-axis rotations.

Scaling:

1. Scaling with respect to the coordinate origin multiplies each coordinate by a scaling parameter:
   x' = x · sx,   y' = y · sy,   z' = z · sz
   with the scaling matrix

       | sx   0    0   0 |
   S = | 0    sy   0   0 |
       | 0    0    sz  0 |
       | 0    0    0   1 |

2. Scaling with respect to a selected fixed point (xf, yf, zf) is performed by translating the fixed point to the origin, scaling, and then translating the fixed point back to its original position.
Reflections:
1. The matrix expression for the reflection transformation of a position P = (x, y, z) relative to the xy plane can be written as:

        | 1   0   0   0 |
   P' = | 0   1   0   0 | · P
        | 0   0  -1   0 |
        | 0   0   0   1 |

   This transformation keeps the x and y values unchanged and inverts the z values, converting between right-handed and left-handed coordinate descriptions.
2. Transformation matrices for inverting x and y values are defined similarly, as reflections relative to the yz plane and xz plane, respectively.
Shears:
1. The matrix expression for the shearing transformation of a position P = (x, y, z), to produce a z-axis shear, can be written as:

         | 1   0   a   0 |
   SHz = | 0   1   b   0 |
         | 0   0   1   0 |
         | 0   0   0   1 |

2. Parameters a and b can be assigned any real values. The effect of this transformation is to alter the x and y coordinate values by an amount that is proportional to the z value (x' = x + a z, y' = y + b z), while leaving the z coordinate unchanged.
3. Shearing transformations for the x axis and y axis are defined similarly.
Composite Transformation
1. Composite three dimensional transformations can be formed by multiplying the matrix
representation for the individual operations in the transformation sequence.
2. This concatenation is carried out from right to left, where the rightmost matrix is the first transformation to be applied to an object and the leftmost matrix is the last transformation.
3. A sequence of basic, three-dimensional geometric transformations is combined to produce a
single composite transformation which can be applied to the coordinate definition of an
object.
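A compact sketch (using NumPy; purely illustrative) of building the basic 4 by 4 matrices and concatenating them right to left, so that the rightmost matrix is the first transformation applied:

import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[0, 0], m[0, 1] = c, -s
    m[1, 0], m[1, 1] = s, c
    return m

# Composite: scale first, then rotate, then translate (applied right to left).
composite = translate(1, 2, 3) @ rotate_z(np.pi / 2) @ scale(2, 2, 2)

p = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous column vector
print(composite @ p)                  # approximately (1, 4, 3, 1)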
Three Dimensional Transformation Functions
1. Some of the basic 3D transformation functions are:
   translate (translateVector, matrixTranslate)
   rotateX (thetaX, xMatrixRotate)
   rotateY (thetaY, yMatrixRotate)
   rotateZ (thetaZ, zMatrixRotate)
   scale3 (scaleVector, matrixScale)
2. Each of these functions produces a 4 by 4 transformation matrix that can be used to transform
coordinate positions expressed as homogeneous column vectors.
3. Parameter translateVector is a pointer to a list of translation distances tx, ty, and tz.
4. Parameter scale vector specifies the three scaling parameters sx, sy and sz.
5. Rotate and scale matrices transform objects with respect to the coordinate origin.
6. Composite transformation can be constructed with the following functions:
1. composeMatrix3
2. buildTransformationMatrix3

3. composeTransformationMatrix3
7. The order of the transformation sequence for the buildTransformationMatrix3 and composeTransformationMatrix3 functions is the same as in two dimensions:
1. scale
2. rotate
3. translate
8. Once a transformation matrix is specified, the matrix can be applied to specified points with
transformPoint3 (inPoint, matrix, outpoint)
9. The transformations for hierarchical construction can be set using structures with the function
setLocalTransformation3 (matrix, type)
where parameter matrix specifies the elements of a 4 by 4 transformation matrix and
parameter type can be assigned one of the values of:
Preconcatenate,
Postconcatenate, or replace.

Modeling and Coordinate Transformations

1. In modeling, objects are described in a local (modeling) coordinate reference frame, and then
the objects are repositioned into a world coordinate scene.
2. For instance, tables, chairs and other furniture, each defined in a local coordinate system, can
be placed into the description of a room defined in another reference frame, by transforming
the furniture coordinates to room coordinates. Then the room might be transformed into a
larger scene constructed in world coordinate.
3. Three dimensional objects and scenes are constructed using structure operations.
4. Object description is transformed from modeling coordinate to world coordinate or to another
system in the hierarchy.
5. Coordinate descriptions of objects are transferred from one system to another system with the
same procedures used to obtain two dimensional coordinate transformations.
6. Transformation matrix has to be set up to bring the two coordinate systems into alignment:
- First, a translation is set up to bring the new coordinate origin to the position of the
other coordinate origin.
- Then a sequence of rotations is made to the corresponding coordinate axes.
- If different scales are used in the two coordinate systems, a scaling transformation
may also be necessary to compensate for the differences in coordinate intervals.
7. If a second coordinate system is defined with origin (x0, y0, z0) and axis vectors as shown in the figure, relative to an existing Cartesian reference frame, we first construct the translation matrix T(-x0, -y0, -z0) and then use the unit axis vectors to form the coordinate rotation matrix R, whose rows are formed from these unit vectors.
8. The complete coordinate-transformation sequence is given by the composite matrix R .T.
9. This matrix correctly transforms coordinate descriptions from one Cartesian system to
another even if one system is left-handed and the other is right handed.
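A small sketch (NumPy; illustrative) of this R·T construction: T moves the new origin (x0, y0, z0) to the old origin, and the rows of R are the unit axis vectors of the second system expressed in the original frame.

import numpy as np

def coordinate_transform(origin, ux, uy, uz):
    T = np.eye(4)
    T[:3, 3] = -np.asarray(origin, dtype=float)   # translate new origin to old origin
    R = np.eye(4)
    R[0, :3], R[1, :3], R[2, :3] = ux, uy, uz     # rows are the unit axis vectors
    return R @ T                                   # complete sequence R . T

# Second system: origin at (1, 2, 0), axes rotated 90 degrees about z.
M = coordinate_transform((1, 2, 0), ux=(0, 1, 0), uy=(-1, 0, 0), uz=(0, 0, 1))
p_world = np.array([1.0, 3.0, 0.0, 1.0])
print(M @ p_world)   # (1, 0, 0, 1): the point expressed in the second system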

THREE DIMENSIONAL VIEWING
In three dimensional graphics applications,
1. we can view an object from any spatial position, from the front, from above or from the back.
2. We could generate a view of what we could see if we were standing in the middle of a group of objects or inside a single object, such as a building.
Viewing Pipeline:
To obtain a view of a three dimensional scene, as when taking a snapshot, we need to do the following steps.
1. Positioning the camera at a particular point in space.
2. Deciding the camera orientation, i.e., pointing the camera and rotating it around the line of sight to set up the direction for the picture.
3. When we snap the shutter, the scene is cropped to the size of the 'window' of the camera and light from the visible surfaces is projected onto the camera film.
In this way, the figure below shows the general three dimensional transformation pipeline, from modeling coordinates to final device coordinates.

Processing Steps
1. Once the scene has been modeled, world coordinate positions are converted to viewing coordinates.
2. The viewing coordinates system is used in graphics packages as a reference for specifying
the observer viewing position and the position of the projection plane.
3. Projection operations are performed to convert the viewing coordinate description of the
scene to coordinate positions on the projection plane, which will then be mapped to the
output device.
4. Objects outside the viewing limits are clipped from further consideration, and the remaining
objects are processed through visible surface identification and surface rendering procedures
to produce the display within the device viewport.
Viewing Coordinates
Specifying the view plane
1. The view for a scene is chosen by establishing the viewing coordinate system, also called the
view reference coordinate system.
2. A view plane or projection plane is set-up perpendicular to the viewing Zv axis.
3. World coordinate positions in the scene are transformed to viewing coordinates, then viewing
coordinates are projected to the view plane.
4. The view reference point is a world coordinate position, which is the origin of the viewing
coordinate system. It is chosen to be close to or on the surface of some object in a scene.
5. Then we select the positive direction for the viewing Zv axis, and the orientation of the view
plane by specifying the view plane normal vector, N. Here the world coordinate position
establishes the direction for N relative either to the world origin or to the viewing coordinate
origin.

Projections
Once world coordinate descriptions of the objects are converted to viewing coordinates, we can
project the 3 dimensional objects onto the two dimensional view planes. There are two basic
types of projection.
1. Parallel Projection: Here the coordinate positions are transformed to the view plane along parallel lines.
Figure: Parallel projection of an object to the view plane.
Parallel projections are specified with a projection vector that defines the direction for the
projection lines.
When the projection is perpendicular to the view plane, it is said to be an orthographic parallel projection; otherwise it is said to be an oblique parallel projection.

2. Perspective Projection– Here, object positions are transformed to the view plane along lines
that converge to a point called the projection reference point.

3. Orthographic Projection
1. Orthographic projections are used to produce the front, side and top views of an object.
2. Front, side and rear orthographic projections of an object are called elevations.
3. A top orthographic projection is called a plan view.
4. This projection gives the measurement of lengths and angles accurately.
5. The orthographic projection that displays more than one face of an object is called
axonometric orthographic projections.
6. The most commonly used axonometric projection is the isometric projection.
7. It can be generated by aligning the projection plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin.

4. Oblique Projection
1. An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane.
2. In the figure below, α and φ are two angles.

3. Point (x,y,z) is projected to position (xp,yp) on the view plane.


4. The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with the line on the projection plane that joins (xp, yp) and (x, y).
5. This line, of length L, is at an angle φ with the horizontal direction in the projection plane.
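As a sketch of the projection equations implied by this figure (assuming the common convention L = z / tan α, so that α = 90° reduces to an orthographic projection):

import math

def oblique_project(point, alpha_deg=45.0, phi_deg=30.0):
    # Oblique parallel projection of (x, y, z) onto the view plane at z = 0.
    x, y, z = point
    L1 = 1.0 / math.tan(math.radians(alpha_deg))   # shift per unit of depth
    L = z * L1
    xp = x + L * math.cos(math.radians(phi_deg))
    yp = y + L * math.sin(math.radians(phi_deg))
    return (xp, yp)

print(oblique_project((1.0, 1.0, 0.0)))   # points on the view plane are unchanged
print(oblique_project((1.0, 1.0, 1.0)))   # deeper points are shifted along angle phi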
CLIPPING

1. An algorithm for three-dimensional clipping identifies and saves all surface segments within
the view volume for display on the output device. All parts of objects that are outside the
view volume are discarded.
2. Instead of clipping against straight-line window boundaries, we now clip objects against the
boundary planes of the view volume.
3. To clip a line segment against the view volume, we would need to test the relative position of
the line using the view volume's boundary plane equations. By substituting the line endpoint
coordinates into the plane equation of each boundary in turn, we could determine whether the
endpoint is inside or outside that boundary.
4. An endpoint (x, y, z) of a line segment is outside a boundary plane if Ax + By + Cz+ D >0,
where A, B ,C, and D are the plane parameters for that boundary.
5. Similarly, the point is inside the boundary if Ax + By+ Cz+D <0. Lines with both endpoints
outside a boundary plane are discarded, and those with both endpoints inside all boundary
planes are saved.
6. The intersection of a line with a boundary is found using the line equations along with the
plane equation.
7. Intersection coordinates (x1, y1, z1) are values that are on the line and that satisfy the plane equation Ax1 + By1 + Cz1 + D = 0.
8. To clip a polygon surface, we can clip the individual polygon edges. First, we could test the
coordinate extents against each boundary of the view volume to determine whether the object

is completely inside or completely outside that boundary. If the coordinate extents of the
object are inside all boundaries, we save it. If the coordinate extents are outside all
boundaries, we discard it. Other-wise, we need to apply the intersection calculations.
Viewport Clipping
1. Lines and polygon surfaces in a scene can be clipped against the viewport boundaries with
procedures similar to those used for two dimensions, except that objects are now processed
against clipping planes instead of clipping edges.
2. The two-dimensional concept of region codes can be extended to three dimensions by
considering positions in front and in back of the three-dimensional viewport, as well as
positions that are left, right, below, or above the volume. For three dimensional points, we
need to expand the region code to six bits. Each point in the description of a scene is then
assigned a six-bit region code that identifies the relative position of the point with respect to
the viewport.
3. For a line endpoint at position (x, y, z), we assign the bit positions in the region code from right to left as bit 1 (left), bit 2 (right), bit 3 (below), bit 4 (above), bit 5 (front), and bit 6 (back); a bit is set to 1 when the point lies outside the corresponding boundary of the viewport volume.
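A sketch of such a code (the specific bit-to-boundary assignment here is an assumption for illustration, matching the order listed above):

# Six-bit region code for a 3D point against the viewport volume limits.
LEFT, RIGHT, BELOW, ABOVE, FRONT, BACK = 1, 2, 4, 8, 16, 32

def region_code(x, y, z, xmin, xmax, ymin, ymax, zmin, zmax):
    code = 0
    if x < xmin: code |= LEFT      # bit 1
    if x > xmax: code |= RIGHT     # bit 2
    if y < ymin: code |= BELOW     # bit 3
    if y > ymax: code |= ABOVE     # bit 4
    if z < zmin: code |= FRONT     # bit 5 (assumed mapping)
    if z > zmax: code |= BACK      # bit 6 (assumed mapping)
    return code

# A line is trivially accepted if both endpoint codes are 0, and trivially
# rejected if the bitwise AND of the two endpoint codes is nonzero.
c1 = region_code(0.5, 0.5, 0.5, 0, 1, 0, 1, 0, 1)    # 0: inside the volume
c2 = region_code(2.0, -1.0, 0.5, 0, 1, 0, 1, 0, 1)   # RIGHT | BELOW
print(c1, c2, c1 & c2)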

Three Dimensional Viewing Functions


1. With parameters specified in world coordinates, elements of the matrix for transforming
world coordinate descriptions to the viewing reference frame are calculated using the
function.
EvaluateViewOrientationMatrix3(x0,y0,z0,xN,yN,zN,xV,yV,zV,error,viewMatrix)
- This function creates the viewMatrix from input coordinates defining the viewing
system.
- Parameters x0, y0, z0 specify the origin (view reference point) of the viewing system.
- World coordinate vector (xN, yN, zN) defines the normal to the view plane and the
direction of the positive zv viewing axis.
- The world coordinates (xV, yV, zV) gives the elements of the view up vector.
- An integer error code is generated in parameter error if input values are not specified
correctly.
2. The matrix projMatrix, for transforming viewing coordinates to normalized projection coordinates, is created with the function:

EvaluateViewMappingMatrix3
(xwmin,xwmax,ywmin,ywmax,xvmin,xvmax,yvmin,yvmax,zvmin,zvmax,
projType,xprojRef,yprojRef,zprojRef,zview,zback,zfront,error,projMatrix)

- Window limits on the view plane are given in viewing coordinates with parameters
xwmin, xwmax, ywmin and ywmax.
- Limits of the 3D view port within the unit cube are set with normalized coordinates
xvmin, xvmax, yvmin, yvmax, zvmin and zvmax.
- Parameter projType is used to choose the projection type either parallel or perspective.
- Coordinate position (xprojRef, yprojRef, zprojRef) sets the projection reference point.
This point is used as the center of projection if projType is set to perspective; otherwise, this
point and the center of the viewplane window define the parallel projection vector.
- The position of the view plane along the viewing zv axis is set with parameter zview.
- Positions along the viewing zv axis for the front and back planes of the view volume are given with parameters zfront and zback.
- The error parameter returns an integer error code indicating erroneous input data.

VISIBLE SURFACE DETECTION METHODS

A major consideration in the generation of realistic graphics displays is identifying those parts of
a scene that are visible from a chosen viewing position. There are many approaches we can take
to solve this problem, and numerous algorithms have been devised for efficient identification of
visible objects for different types of applications. Some methods require more memory, some
involve more processing time, and some apply only to special types of objects. Deciding upon a
method for a particular application can depend on such factors as the complexity of the scene,
type of objects to be displayed, available equipment, and whether static or animated displays are
to be generated.
The various algorithms are referred to as visible-surface detection methods. Sometimes these
methods are also referred to as hidden-surface elimination methods, although there can be subtle
differences between identifying visible surfaces and eliminating hidden surfaces. For wireframe
displays, for example, we may not want to actually eliminate the hidden surfaces, but rather to
display them with dashed boundaries or in some other way to retain information about their
shape. In this chapter, we explore some of the most commonly used methods for detecting
visible surfaces in a three-dimensional scene.

CLASSIFICATION OF VISIBLE-SURFACE DETECTION ALGORITHMS


Visible-surface detection algorithms are broadly classified according to whether they deal with
object definitions directly or with their projected images. These two approaches are called
object-space methods and image-space methods, respectively. An object-space method compares
objects and parts of objects to each other within the scene definition to determine which surfaces,
as a whole, we should label as visible. In an image-space algorithm, visibility is decided point by
point at each pixel position on the projection plane. Most visible-surface algorithms use image-space methods, although object-space methods can be used effectively to locate visible surfaces
in some cases.
Line display algorithms, on the other hand, generally use object-space methods to identify visible
lines in wireframe displays, but many image- space visible-surface algorithms can be adapted
easily to visible-line detection. Although there are major differences in the basic approach taken
by the various visible-surface detection algorithms, most use sorting and coherence methods to
improve performance. Sorting is used to facilitate depth comparisons by ordering the individual
surfaces in a scene according to their distance from the view plane. Coherence methods are used
to take advantage of regularities in a scene. An individual scanline can be expected to contain
intervals (runs) of constant pixel intensities, and scan-line patterns often change little from one
line to the next. Animation frames contain changes only in the vicinity of moving objects. And
constant relationships often can be established between objects and surfaces in a scene.
BACK-FACE DETECTION
A fast and simple object-space method for identifying the back faces of a polyhedron is based on
the "inside-outside" tests. A point (x, y, z) is "inside" a polygon surface with plane parameters A,
B, C, and D if Ax + By + Cz + D < 0. When an inside point is along the line of sight to the surface,
the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position).
We can simplify this test by considering the normal vector N to a polygon surface, which has
Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye
(or "camera") position, then this polygon is a back face if V·N > 0. Furthermore, if object
descriptions have been converted to projection coordinates and our viewing direction is parallel
to the viewing z-axis, then
V = (0, 0, Vz) and V·N = Vz C,
so that we only need to consider the sign of C, the z component of the normal vector N. In a right-
handed viewing system with viewing direction along the negative zv axis, the polygon is a back
face if C < 0. Also, we cannot see any face whose normal has z component C = 0, since our
viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back
face if its normal vector has a z component value
C <= 0.

Similar methods can be used in packages that employ a left-handed viewing system. In these
packages, plane parameters A, B, C and D can be calculated from polygon vertex coordinates
specified in a clockwise direction (instead of the counterclockwise direction used in a right-
handed system). Also, back faces have normal vectors that point away from the viewing position
and are identified by C >= 0 when the viewing direction is along the positive zv axis. By
examining parameter C for the different planes defining an object, we can immediately identify
all the back faces.
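A minimal sketch of these tests (illustrative): the general dot-product form, and the simplified check on the sign of C used when the viewing direction is along the negative zv axis of a right-handed system.

def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    # General test: the polygon is a back face if V . N > 0.
    vn = sum(v * n for v, n in zip(view_dir, normal))
    return vn > 0

def is_back_face_along_neg_zv(C):
    # Simplified test from the text: back face when the normal's z component C <= 0.
    return C <= 0

print(is_back_face((0.0, 0.0, -1.0)))     # True: this face points away from the viewer
print(is_back_face_along_neg_zv(0.5))     # False: this face may be visible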

DEPTH-BUFFER METHOD
A commonly used image-space approach to detecting visible surfaces is the depth buffer method,
which compares surface depths at each pixel position on the projection plane. This procedure is
also referred to as the z-buffer method, since object depth is usually measured from the view
plane along the z axis of a viewing system. Each surface of a scene is processed separately, one
point at a time across the surface. The method is usually applied to scenes containing only
polygon surfaces, because depth values can be computed very quickly and the method is easy to
implement, but it can also be applied to nonplanar surfaces. With object descriptions
converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to
the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x,
y) on the view plane, object depths can be compared by comparing z values. Figure 4.12 shows
three surfaces at varying distances along the orthographic projection line from position (x, y) in
a view plane taken as the xv-yv plane. Surface S1 is closest at this position, so its surface intensity
value at (x, y) is saved. We can implement the depth-buffer algorithm in normalized coordinates,
so that z values range from 0 at the back clipping plane to Z max at the front clipping plane.
The value of Z max can be set either to 1 (for a unit cube) or to the largest
value that can be stored on the system. As implied by the name of this method, two buffer areas
are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are
processed, and the refresh buffer stores the intensity values for each position. Initially, all
positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to
the background intensity. Each surface listed in the polygon tables is then processed, one scan
line at a time, calculating the depth (z value) at each (x, y) pixel position. The calculated depth is
compared to the value previously stored in the depth buffer at that position. If the calculated
depth is greater than the value stored in the depth buffer, the new depth value is stored, and the
surface intensity at that position is determined and placed in the same xy location in the refresh
buffer.
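A condensed sketch of the procedure, following the convention used here (depths range from 0 at the back to larger values toward the front, and the buffers start at 0 and the background intensity). The toy surface class and its pixels() generator stand in for the scan-conversion step:

def depth_buffer_render(surfaces, width, height, background=0.0):
    depth_buff = [[0.0] * width for _ in range(height)]           # 0 = maximum depth (back)
    refresh_buff = [[background] * width for _ in range(height)]  # frame intensities

    for surface in surfaces:
        # pixels() yields (x, y, z, intensity) samples from scan converting the surface.
        for x, y, z, intensity in surface.pixels():
            if z > depth_buff[y][x]:          # this surface is closer than what is stored
                depth_buff[y][x] = z
                refresh_buff[y][x] = intensity
    return refresh_buff

class FlatSurface:
    # Toy stand-in: a constant-depth surface covering the whole projection plane.
    def __init__(self, depth, intensity, width, height):
        self.depth, self.intensity, self.size = depth, intensity, (width, height)
    def pixels(self):
        w, h = self.size
        for y in range(h):
            for x in range(w):
                yield x, y, self.depth, self.intensity

frame = depth_buffer_render([FlatSurface(0.3, 0.5, 4, 4), FlatSurface(0.8, 0.9, 4, 4)], 4, 4)
print(frame[0][0])   # 0.9: the nearer surface (larger z) wins at every pixel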

A-BUFFER METHOD
The A-buffer method is an extension of the depth-buffer method. It is a visibility detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw). The A-buffer expands on the depth-buffer method to allow transparencies. The key data structure in the A-buffer is the accumulation buffer.

Each position in the A-buffer has two fields


► Depth field- stores a positive or negative real number
► Intensity field--stores surface-intensity information or a pointer value

If depth >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage. If depth < 0, this indicates multiple surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data.
Surface information in the A-buffer includes:
a. RGB intensity components
b. Opacity parameter
c. Depth
d. Percent of area coverage
e. Surface identifier
f. Other surface rendering parameters
The algorithm proceeds just like the depth-buffer algorithm. The depth and opacity values are used to determine the final color of a pixel.
SCAN-LINE METHOD
This is an image-space method for identifying visible surfaces. It computes and compares depth values along the various scan lines of a scene.
Two important tables are maintained:
► The edge table
► The POLYGON table

The edge table contains:
1. Coordinate end points of each line in the scene
2. The inverse slope of each line
3. Pointers into the POLYGON table to connect edges to surfaces
The POLYGON table contains:
1. The plane coefficients
2. Surface material properties
3. Other surface data
4. Maybe pointers into the edge table
To facilitate the search for surfaces crossing a given scan line, an active list of edges is formed
for each scan line as it is processed. The active list stores only those edges that cross the scan
line, in order of increasing x. Also, a flag is set for each surface to indicate whether a position
along a scan line is inside or outside the surface. Pixel positions across each scan line are
processed from left to right. At the left intersection with a surface the surface flag is turned on,
and at the right intersection point the flag is turned off. We only need to perform depth
calculations when more than one surface has its flag turned on at a certain scan-line position.

Figure 4-17 illustrates the scan-line method for locating visible portions of surfaces for pixel
positions along the line. The active list for line 1 contains information from the edge table for
edges AB, BC, EH, and FG. For positions along this scan line between edges AB and BC, only
the flag for surface S1 is on. Therefore, no depth calculations are necessary, and intensity
information for surface S1 is entered from the polygon table into the refresh buffer. Similarly,
between edges EH and FG, only the flag for surface S2 is on. No other positions along scan line
1 intersect surfaces, so the intensity values in the other areas are set to the background intensity.

The background intensity can be loaded throughout the buffer in an initialization routine.
For scan lines 2 and 3 in Fig. 4-17, the active edge list contains edges AD, EH, BC, and FG.
Along scan line 2 from edge AD to edge EH, only the flag for surface S1 is on. But between
edges EH and BC, the flags for both surfaces are on. In this interval, depth calculations must be
made using the plane coefficients for the two surfaces. For this example, the depth of surface
S1 is assumed to be less than that of S2, so intensities for surface S1 are loaded into the refresh
buffer until boundary BC is encountered. Then the flag for surface S1 goes off, and intensities
for surface S2 are stored until edge FG is passed. We can take advantage of coherence along the
scan lines as we pass from one scan line to the next. In Fig. 4-17, scan line 3 has the same active
list of edges as scan line 2. Since no changes have occurred in line intersections, it is unnecessary
to make depth calculations between edges EH and BC again. The two surfaces must be in the
same orientation as determined on scan line 2, so the intensities for surface S1 can be entered
without further calculations.
DEPTH SORTING METHOD
The depth-sorting method uses both image-space and object-space operations, and performs
the following basic functions:
► Surfaces are sorted in order of decreasing depth.
► Surfaces are scan converted in order, starting with the surface of greatest depth.
Sorting operations are carried out in both image and object space.
The scan conversion of the polygon surfaces is performed in image space.
This method for solving the hidden-surface problem is often referred to as the painter's algorithm.

First sort surfaces according to their distance from the view plane. The intensity values for the
farthest surface are then entered into the refresh buffer. Taking each succeeding surface in turn
(in decreasing depth order), we "paint" the surface intensities onto the frame buffer over the
intensities of the previously processed surfaces.
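A simplified sketch of this back-to-front flow (ignoring the reordering tests discussed next; the surface records are illustrative):

def painters_algorithm(surfaces, width, height, background=0.0):
    frame = [[background] * width for _ in range(height)]
    # Sort by decreasing depth so the farthest surface is painted first.
    for surface in sorted(surfaces, key=lambda s: s["depth"], reverse=True):
        for x, y in surface["pixels"]:
            frame[y][x] = surface["intensity"]   # nearer paint overwrites farther paint
    return frame

far_square = {"depth": 9.0, "intensity": 0.3,
              "pixels": [(x, y) for x in range(3) for y in range(3)]}
near_dot = {"depth": 2.0, "intensity": 0.9, "pixels": [(1, 1)]}
frame = painters_algorithm([near_dot, far_square], 3, 3)
print(frame[1][1])   # 0.9: the nearer surface covers the farther one at this pixel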

If a depth overlap is detected at any point in the list, we need to make some additional
comparisons to determine whether any of the surfaces should be reordered.
We make the following tests for each surface that overlaps with S. If any one of these tests is
true, no reordering is necessary for that surface. The tests are listed in
order of increasing difficulty.
1) The bounding rectangles in the xy plane for the two surfaces do not overlap
2) Surface S is completely behind the overlapping surface relative to the viewing position.
3) The overlapping surface is completely in front of S relative to the viewing position.
4) The projections of the two surfaces onto the view plane do not overlap.

We substitute the coordinates of all vertices of S into the plane equation for the overlapping surface and check the sign of the result. If the plane equations are set up so that the outside of the surface is toward the viewing position, then S is behind S' if all vertices of S are "inside" S'.

S' is completely in front of S if all vertices of S are "outside" of S'.


AREA-SUBDIVISION METHOD
The area-subdivision method takes advantage of area coherence in a scene by
locating those view areas that represent part of a single surface. We apply this method by

successively dividing the total viewing area into smaller and smaller rectangles until each small
area is the projection of part of a single visible surface or no surface at all. We continue this
process until the subdivisions are easily analyzed as belonging to a single surface or until they
are reduced to the size of a single pixel.
An easy way to do this is to successively divide the area into four equal parts at each step. There
are four possible relationships that a surface can have with a specified area
boundary:
► Surrounding surface-One that completely encloses the area.
► Overlapping surface-One that is partly inside and partly outside the area.
► Inside surface-One that is completely inside the area.
► Outside surface-One that is completely outside the area.
