CG Module 3
Jini George
Viewing pipeline
• A rectangular area is the standard shape for both windows and viewports
Window
• Defines what is to be displayed
• A world-coordinate area selected for display is
called a window.
• In computer graphics, a window is a graphical
control element.
• It consists of a visual area containing some of the
graphical user interface of the program it belongs
to and is framed by a window decoration.
• A window defines a rectangular area in world coordinates.
Viewing pipeline
Viewport
• Defines where it is to be displayed
• An area on a display device to which a window is
mapped is called a viewport.
• A viewport is a polygon viewing region in computer
graphics. The viewport is an area expressed in
rendering-device-specific coordinates, e.g. pixels
for screen coordinates, in which the objects of
interest are going to be rendered.
• A viewport defines in normalized coordinates a
rectangular area on the display device where the
image of the data appears
Window-to-Viewport transformation
• Window-to-Viewport transformation is the process
of transforming a two-dimensional, world-
coordinate scene to device coordinates.
• In particular, objects inside the world or clipping
window are mapped to the viewport. The viewport
is displayed in the interface window on the screen.
• In other words, the clipping window is used to
select the part of the scene that is to be displayed.
The viewport then positions the scene on the
output device.
2D viewing transformation pipeline
[Figure: MC (modeling coordinates) → WC (world coordinates) → VC (viewing coordinates) → NC (normalized coordinates) → DC (device coordinates)]
• This transformation involves developing formulas
that start with a point in the world window, say (xw,
yw).
• The formula is used to produce a corresponding
point in viewport coordinates, say (xv, yv).
• This mapping should be "proportional" in the sense
that if xw is 30% of the way from the left edge of the
world window, then xv is 30% of the way from the
left edge of the viewport.
• Similarly, if yw is 30% of the way from the bottom
edge of the world window, then yv is 30% of the way
from the bottom edge of the viewport.
[Figure: proportional mapping of a point from the window to the viewport]
Derivation
• Using this proportionality, the following ratios must be equal:
(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)
• Solving for the viewport position (xv, yv) gives:
xv = xvmin + (xw - xwmin) * sx
yv = yvmin + (yw - ywmin) * sy
where the scaling factors are
sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
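A minimal Python sketch of this mapping (the window and viewport bounds in the usage example are illustrative values, not from the source):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a world point (xw, yw) from `window` to `viewport`
    coordinates using the proportionality derived above.
    Both regions are given as (xmin, ymin, xmax, ymax) tuples."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    # Scaling factors sx and sy from the derivation above.
    sx = (xvmax - xvmin) / (xwmax - xwmin)
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    return (xvmin + (xw - xwmin) * sx,
            yvmin + (yw - ywmin) * sy)

# A point 30% of the way across the window lands 30% of the way
# across the viewport:
print(window_to_viewport(3.0, 2.0, (0, 0, 10, 10), (100, 100, 300, 200)))
# -> (160.0, 120.0)
```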
COHEN-SUTHERLAND LINE CLIPPING
Clipping
• The primary use of line clipping in CG is to remove
objects, lines or line segments that are outside the
viewing pane.
• In line clipping, we cut away the portion of a line that lies outside the window and keep only the portion that is inside.
• The clipping window boundary is identified based on the application, and is in general a polygon.
• A portion of a two-dimensional scene that is chosen for display is termed the clipping window.
• A region to which an object inside the clipping window is mapped is called a viewport.
Types of lines
The lines are divided into three types.
• Visible Line: A line lying entirely inside the view pane is a visible line.
• Invisible Line: A line lying entirely outside the view pane is an invisible line.
• Clipped Line: A line that lies partly inside and partly outside the window is called a clipped line. A point where the line cuts the view pane boundary is known as an intersection point of the line.
• We can apply the clipping process in world coordinates.
• The objects and images in the view pane can then be mapped to device coordinates.
Types of clipping
• Line clipping
• Point clipping
• Text clipping
• Exterior Clipping
• Curve clipping
• Polygon clipping
Applications of clipping
Algorithm
• Encode the endpoints of the line with 4-bit region codes (each bit records whether the endpoint lies above, below, to the right of, or to the left of the window).
1. If both endpoints have the code 0000 (so their bitwise OR is 0000), the line is completely inside, so accept the line.
2. If both endpoint codes are non-zero and their bitwise AND is non-zero, the line is completely outside, so reject the line.
3. If at least one endpoint code is non-zero but the bitwise AND is zero, the line may be partially inside, so clip the line.
• Clipping needs intersection points.
• If an endpoint is outside a window boundary, find the intersection point of the line with that boundary.
• The endpoint is replaced with the intersection point and its region code is updated.
For finding the intersection point
If a line needs to be clipped, find its intersection point with the crossed window boundary using the following formulas.
Slope: m = (y1 - y0) / (x1 - x0)
1. When the line intersects the left boundary of the window:
y = y1 + m (x - x1), where x = xwmin (minimum value of the x coordinate)
2. When the line intersects the right boundary of the window:
y = y1 + m (x - x1), where x = xwmax (maximum value of the x coordinate)
3. When the line intersects the top boundary of the window:
x = x1 + (y - y1) / m, where y = ywmax (maximum value of the y coordinate)
4. When the line intersects the bottom boundary of the window:
x = x1 + (y - y1) / m, where y = ywmin (minimum value of the y coordinate)
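The following is a minimal Python sketch of the complete procedure, combining the region-code tests with the intersection formulas above (the bit layout and parameter names are conventional choices, not from the source):

```python
# Region-code bits: each endpoint's code is the OR of the flags for
# every window boundary it lies outside of (0000 means inside).
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def region_code(x, y, xwmin, ywmin, xwmax, ywmax):
    code = INSIDE
    if x < xwmin:
        code |= LEFT
    elif x > xwmax:
        code |= RIGHT
    if y < ywmin:
        code |= BOTTOM
    elif y > ywmax:
        code |= TOP
    return code

def cohen_sutherland(x0, y0, x1, y1, xwmin, ywmin, xwmax, ywmax):
    """Return the clipped segment (x0, y0, x1, y1), or None if rejected."""
    c0 = region_code(x0, y0, xwmin, ywmin, xwmax, ywmax)
    c1 = region_code(x1, y1, xwmin, ywmin, xwmax, ywmax)
    while True:
        if c0 == 0 and c1 == 0:      # both codes 0000: trivially accept
            return (x0, y0, x1, y1)
        if c0 & c1 != 0:             # AND non-zero: trivially reject
            return None
        # Otherwise, move one outside endpoint onto the boundary it
        # crosses, using the intersection formulas above.
        out = c0 if c0 != 0 else c1
        if out & TOP:
            x, y = x0 + (x1 - x0) * (ywmax - y0) / (y1 - y0), ywmax
        elif out & BOTTOM:
            x, y = x0 + (x1 - x0) * (ywmin - y0) / (y1 - y0), ywmin
        elif out & RIGHT:
            x, y = xwmax, y0 + (y1 - y0) * (xwmax - x0) / (x1 - x0)
        else:                        # LEFT
            x, y = xwmin, y0 + (y1 - y0) * (xwmin - x0) / (x1 - x0)
        if out == c0:
            x0, y0, c0 = x, y, region_code(x, y, xwmin, ywmin, xwmax, ywmax)
        else:
            x1, y1, c1 = x, y, region_code(x, y, xwmin, ywmin, xwmax, ywmax)
```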
Limitations of the Cohen-Sutherland algorithm
• The clipping window must be a rectangle; the method does not extend directly to arbitrary polygonal windows.
• A line crossing a corner region may require several clip-and-re-encode iterations before it is finally accepted or rejected.
Sutherland-Hodgman algorithm
• This algorithm is used for polygon clipping.
• A polygon can be clipped by processing its boundary as
a whole against each window edge.
• This is achieved by processing all polygon vertices
against each clip rectangle boundary in turn.
• There are four clipping stages:
1. Left clip
2. Right clip
3. Top clip
4. Bottom clip
• Beginning with the original set of polygon
vertices, we could first clip the polygon against
the left rectangle boundary to produce a new
sequence of vertices.
• The new set of vertices could then be
successively passed to a right boundary clipper,
a top boundary clipper and a bottom boundary
clipper, in turn.
• At each step a new set of polygon vertices is
generated and passed to the next window
boundary clipper. This is the fundamental idea
used in the Sutherland - Hodgeman algorithm.
Jini George
Jini George
• The output of the algorithm is a list of polygon
vertices all of which are on the visible side of a
clipping plane.
• Thus each edge of the polygon is individually compared with the clipping plane.
• This is achieved by processing two vertices of
each edge of the polygon around the clipping
boundary or plane.
• This results in four possible relationships between the edge and the clipping boundary or plane.
• Four cases are to be considered when processing the edges of the polygon against the left window boundary:
1. If the first vertex of the edge is outside the window
boundary and the second vertex of the edge is inside
then the intersection point of the polygon edge with
the window boundary and the second vertex are
added to the output vertex list .
2. If both vertices of the edge are inside the window
boundary, only the second vertex is added to the
output vertex list.
• 3. If the first vertex of the edge is inside the
window boundary and the second vertex of the
edge is outside, only the edge intersection with
the window boundary is added to the output
vertex list.
4. If both vertices of the edge are outside the
window boundary, nothing is added to the output
list.
• Once all vertices are processed for one clip
window boundary, the output list of vertices is
clipped against the next window boundary.
• Going through the above four cases, we can see that there are two key processes in this algorithm:
– determining the visibility of a point or vertex (inside-outside test), and
– determining the intersection of the polygon edge and the clipping plane.
• Sutherland-Hodgman Polygon Clipping Algorithm:
1. Read coordinates of all vertices of the Polygon.
2. Read coordinates of the clipping window
3. Consider the left edge of the window
4. Compare the vertices of each edge of the polygon,
individually with the clipping plane.
5. Save the resulting intersections and vertices in the new
list of vertices according to four possible relationships
between the edge and the clipping boundary.
6. Repeat steps 4 and 5 for the remaining edges of the clipping window. Each time, the resultant list of vertices is successively passed on to be processed against the next edge of the clipping window, as in the sketch below.
7. Stop.
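Below is a minimal Python sketch of this algorithm for a rectangular clipping window; the function and variable names are illustrative:

```python
def _intersect(p, q, axis, value):
    """Intersection of edge p-q with the boundary line axis = value
    (axis 0 means the vertical line x = value, axis 1 means y = value).
    Only called when p and q lie on opposite sides of the boundary."""
    px, py = p
    qx, qy = q
    if axis == 0:
        t = (value - px) / (qx - px)
        return (value, py + t * (qy - py))
    t = (value - py) / (qy - py)
    return (px + t * (qx - px), value)

def sutherland_hodgman(vertices, window):
    """Clip polygon `vertices` (list of (x, y) in order) against a
    rectangular clipping `window` = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = window
    # One (inside-test, axis, boundary value) triple per clipping stage.
    stages = [
        (lambda p: p[0] >= xmin, 0, xmin),   # 1. left clip
        (lambda p: p[0] <= xmax, 0, xmax),   # 2. right clip
        (lambda p: p[1] <= ymax, 1, ymax),   # 3. top clip
        (lambda p: p[1] >= ymin, 1, ymin),   # 4. bottom clip
    ]
    output = list(vertices)
    for inside, axis, value in stages:
        if not output:
            break                            # polygon fully clipped away
        input_list, output = output, []
        prev = input_list[-1]                # each edge runs prev -> curr
        for curr in input_list:
            if inside(curr):
                if not inside(prev):         # case 1: outside -> inside
                    output.append(_intersect(prev, curr, axis, value))
                output.append(curr)          # cases 1 and 2: keep second vertex
            elif inside(prev):               # case 3: inside -> outside
                output.append(_intersect(prev, curr, axis, value))
            # case 4: both outside -> add nothing
            prev = curr
    return output
```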
THREE DIMENSIONAL VIEWING
PIPELINE
• The steps for computer generation of a view of a three-
dimensional scene are somewhat analogous to the
processes involved in taking a photograph.
• To take a snapshot, we first need to position the camera at
a particular point in space. Then we need to decide on the
camera orientation. Which way do we point the camera and
how should we rotate it around the line of sight to set the
up direction for the picture?
• Finally, when we snap the shutter, the scene is cropped to
the size of the "window" (aperture) of the camera, and light
from the visible surfaces is projected onto the camera film.
• We need to keep in mind, however, that the camera analogy
can be carried only so far, since we have more flexibility and
many more options for generating views of a scene with a
graphics package than we do with a camera.
Three Dimensional viewing Pipeline
• The viewing-pipeline in 3 dimensions is almost the same as the 2D-
viewing-pipeline.
• The only difference is that after the viewing direction and orientation (i.e., the camera) have been defined, an additional projection step is performed: the reduction of the 3D data onto a projection plane.
[Figure: general 3D viewing pipeline, from world-coordinate scene description to device coordinates]
• The figure above shows the general processing steps for modeling and converting a world-coordinate description of a scene to device coordinates.
• Once the scene has been modeled, world-coordinate positions are
converted to viewing coordinates.
• The viewing-coordinate system is used in graphics packages as a
reference for specifying the observer viewing position and the
position of the projection plane, which we can think of in analogy
with the camera film plane.
• Next, projection operations are performed to convert the viewing-
coordinate description of the scene to coordinate positions on the
projection plane, which will then be mapped to the output device.
• Objects outside the specified viewing limits are clipped from further consideration, and the remaining objects are processed through
visible-surface identification and surface-rendering procedures to
produce the display within the device viewport.
• Viewing Coordinates: A view of an object in 3D is similar to photographing the object. Whatever appears in the viewfinder is projected onto the flat film surface. The type and size of the camera lens determine which parts of the scene appear in the final picture.
• These ideas are incorporated into 3D graphics packages so that views of a scene can be generated, given the spatial position, orientation, and aperture size of the camera.
• Specifying the View Plane: We choose a particular view for a scene by first establishing the viewing-coordinate system, also called the view-reference coordinate system.
• A view plane, or projection plane, is then set up perpendicular to the viewing z-axis.
• World-coordinate positions in the scene are transformed to viewing coordinates, and then viewing coordinates are transformed into device coordinates.
PROJECTIONS
• Projection is a technique or process used to transform a 3D object onto a 2D plane.
• In other words, we can define projection as a mapping of a point P(x, y, z) onto its image P'(x', y', z') in the projection plane or view plane, which constitutes the display surface.
Perspective projection
• In perspective projection, the projector lines converge at a single point. This point is called the "projection reference point" or "center of projection."
• Characteristic of Perspective Projection:
– The Distance between the object and projection center is finite.
– In Perspective Projection, it is difficult to define the actual size
and shape of the object.
– The Perspective Projection has the concept of vanishing points.
– The Perspective Projection is realistic but tough to implement.
• Vanishing Point: A vanishing point is a point in the image plane where the projections of parallel lines appear to meet. The vanishing point is also called the "directing point."
• Use of Vanishing Point:
- It is used in 3D games and graphics editing.
– It is also used to represent 3D objects.
– We can also include perspective in the background of an image.
– We can also insert the shadow effect in an image.
Types of Perspective Projection:
• There are three types of Perspective Projection.
• One Point: A One Point perspective contains only one vanishing
point on the horizon line.
• It is easy to draw.
• Use of One Point- The One Point projection is mostly used to
draw the images of roads, railway tracks, and buildings.
• Two Point: It is also called "Angular Perspective." A two-point perspective contains two vanishing points on the horizon line.
• Use of Two Point- The main use of Two Point projection
is to draw the two corner roads.
• Three-Point- The Three-Point Perspective contains three
vanishing points. Two points lie on the horizon line, and one
above or below the line.
• It is very difficult to draw.
• When we see an object from above, the third point is below the ground. When we see an object from below, the third point is in the space above.
• Use of Three-Point:
It is mainly used for drawing very tall structures such as skyscrapers (i.e., related to height).
• Advantages:
Better Look
Clear Representation
• Disadvantages:
Difficult to Draw
Not Suitable for many-dimensional images
Parallel Projection:
• In parallel projection, the distance of the projection plane from the center of projection is infinite. We specify a direction of projection instead of a center of projection, and connect the projected vertices with line segments.
• Parallel projection eliminates the Z coordinate: the parallel lines from each vertex of the object are extended until they intersect the view plane.
• Characteristic of parallel Projection:
– In parallel Projection, the projection lines are
parallel to each other.
– There is the least amount of distortion within the
object.
– The lines that are parallel to the object are also
parallel to the drawing.
– The view produced by parallel projection is less realistic because there is no foreshortening.
– Parallel projections are good for accurate measurements.
• Types of Parallel Projection
There are two kinds of parallel projection:
1. Orthographic projection: In orthographic parallel projection, the direction of projection is perpendicular to the view plane.
• Orthographic projection is divided into two parts:
1(A) Multiview Orthographic Projection: In multiview orthographic projection, we represent a three-dimensional object by a set of two-dimensional orthographic views. The multiview orthographic projection includes:
– Front View
– Top View
– Side View
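A minimal Python sketch of the three views: each one is an orthographic projection that simply drops one coordinate (the axis orientations chosen here are an assumption for illustration):

```python
def front_view(x, y, z):
    return (x, y)    # project along z onto the xy-plane

def top_view(x, y, z):
    return (x, z)    # project along y onto the xz-plane

def side_view(x, y, z):
    return (z, y)    # project along x onto the yz-plane
```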
1(B) Axonometric Orthographic Projection: Axonometric orthographic projection is used to construct a pictorial representation of an object. The sight lines are perpendicular to the projection plane.
• Axonometric orthographic projection includes:
• Isometric: In isometric projection, we represent a three-dimensional object in a two-dimensional drawing; the angles between the projections of the coordinate axes are all 120 degrees.
• Dimetric: In dimetric projection, the foreshortening along two of the axes is equal, and that of the third axis is defined independently.
• Trimetric: In trimetric projection, the foreshortening along all three axes is unequal, and the scale of each axis is defined independently.
2. Oblique Projection: In oblique parallel projection, the direction of projection is not normal to the projection plane. It is a simple technique for constructing two-dimensional images of three-dimensional objects.
• Oblique projection is mostly used in technical drawing.
• Oblique projection is divided into two parts:
• Cavalier: In cavalier projection, the projectors make an angle of 45 degrees with the projection plane.
• Cabinet: In cabinet projection, the projectors make an angle of 63.4 degrees with the projection plane, so lines perpendicular to the plane are drawn at half length.
• Advantages:
• Good for exact Measurement
• Parallel lines remain parallel
• Disadvantages:
• Less Realistic Looking
• Angles are not preserved
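A small Python sketch of oblique projection under the usual convention x' = x + L·z·cos α, y' = y + L·z·sin α, where L = 1 gives cavalier projection (projectors at 45 degrees) and L = 0.5 gives cabinet projection (projectors at about 63.4 degrees); the in-plane angle α of the receding lines is an illustrative parameter:

```python
import math

def oblique_project(x, y, z, L=1.0, alpha_deg=45.0):
    """Oblique parallel projection onto the xy-plane.
    L is the foreshortening factor for receding (z) lines:
    L = 1.0 -> cavalier projection (projector angle 45 degrees)
    L = 0.5 -> cabinet projection (projector angle ~63.4 degrees)
    alpha_deg is the angle receding lines make in the drawing."""
    a = math.radians(alpha_deg)
    return (x + L * z * math.cos(a), y + L * z * math.sin(a))

# Cavalier draws depth at full length; cabinet at half length:
print(oblique_project(1.0, 1.0, 2.0, L=1.0))   # cavalier
print(oblique_project(1.0, 1.0, 2.0, L=0.5))   # cabinet
```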
Difference between perspective and parallel projections
1. Perspective projection represents an object in a three-dimensional, realistic manner. Parallel projection represents the object in a different way, like viewing it through a telescope.
2. In perspective projection, the center of projection is located at a finite point in 3-space. In parallel projection, the center of projection is located at infinity.
3. Perspective projection cannot give an accurate (true-size) view of the object. Parallel projection gives an accurate view of the object.
4. Projection lines are not parallel to each other in perspective projection. Projection lines are parallel to each other in parallel projection.
5. The projectors in perspective projection are not parallel. The projectors in parallel projection are parallel.
6. Perspective projection generates a realistic view. Parallel projection does not generate a realistic view.
7. Perspective projection is divided into three types: one-point, two-point, and three-point perspective. Parallel projection is divided into two types: orthographic parallel projection and oblique parallel projection.
VISIBLE SURFACE DETECTION
ALGORITHMS
• In the realistic graphics display, we have to identify those parts of a
scene that are visible from a chosen viewing position.
• The various algorithms that are used for that are referred to as
Visible-surface detection methods or hidden-surface elimination
methods.
• Types of Visible Surface Detection Methods:
Object-space methods and
Image-space methods
• Visible-surface detection algorithms are broadly classified according to
whether they deal with object definitions directly or with their
projected images.
• These two approaches are called object-space methods and image-
space methods, respectively.
• An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible (used when accuracy is needed).
• In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane (used when time is the main constraint).
• Most visible-surface algorithms use image-space
methods, although object space methods can be used
effectively to locate visible surfaces in some cases.
• Line display algorithms, on the other hand, generally use
object-space methods to identify visible lines in wire
frame displays, but many image-space visible-surface
algorithms can be adapted easily to visible-line detection.
• Methods for detecting visible surfaces:
Depth Buffer Method
Scan Line Method
Depth buffer(Z-buffer) algorithm
• Commonly used image space approach
• Compares surface depths at each pixel position on the projection plane
• Referred to as the Z-buffer method, since depth is usually measured along the Z axis of the viewing system
• Usually applied to scenes of polygonal surfaces
• Depth values can be computed very quickly
[Figure: overlapping surfaces along a projection line from a pixel position (x, y)]
• In the figure, surface S1 has the smallest depth from the view plane and is therefore visible at that point.
• Two buffer areas are required
Depth buffer
• Store depth values for each (x,y) position or Z
value
• All positions are initialized to the minimum depth (usually 0)
Refresh buffer(frame buffer)
• Stores the intensity values for each position
• All positions are initialized to the background
intensity
How is the depth value calculated?
• The depth value for a surface position (x, y) is calculated from the plane equation Ax + By + Cz + D = 0 of each surface:
z = (-Ax - By - D) / C
• The depth of the next position (x+1, y) along the scan line is then:
z' = (-A(x+1) - By - D) / C = z - A/C
• A/C is constant for each surface, so succeeding depth values across a scan line are obtained with a single subtraction.
Algorithm
• Step 1 − Initialize the buffer values:
depthbuffer(x, y) = 0
framebuffer(x, y) = background color
• Step 2 − Process each polygon:
i. For each projected (x, y) pixel position of the polygon, calculate the depth z.
ii. If z > depthbuffer(x, y), then:
compute the surface color,
set depthbuffer(x, y) = z,
framebuffer(x, y) = surfacecolor(x, y)
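A minimal Python sketch of these two steps, using the depth calculation from the plane equation given earlier; the buffer sizes, polygon representation, and color values are illustrative assumptions:

```python
WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

# Step 1: initialize both buffers.
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]          # minimum depth
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]   # background color

def process_polygon(pixels, plane, color):
    """Step 2 for one polygon. `pixels` lists the projected (x, y)
    positions the polygon covers, `plane` = (A, B, C, D) holds its
    plane-equation coefficients, and `color` is its flat surface color."""
    A, B, C, D = plane
    for x, y in pixels:
        z = (-A * x - B * y - D) / C     # depth from the plane equation
        # (Scanning left to right along a row, z could instead be
        # updated incrementally: z_next = z - A/C.)
        if z > depth_buffer[y][x]:       # nearer than the stored depth
            depth_buffer[y][x] = z
            frame_buffer[y][x] = color
```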
Advantages
• It is easy to implement.
• It reduces the speed problem if implemented
in hardware.
• It processes one object at a time.
Disadvantages
• It requires a large amount of memory.
• It is a time-consuming process.
SCAN LINE ALGORITHM
Scan Line method
• Image space method
• It can handle multiple surfaces: as each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible.
• It is an extension of the scan-line algorithm for filling polygon interiors.
• Each scan line is processed from left to right, with depth calculations made for each overlapping surface.
• The intensity of the nearest position is entered into the refresh buffer.
• Three tables are maintained:
1) Edge table
– Coordinate endpoints of each line
– Slope of each line
– Pointers into the polygon table to identify the surfaces bounded by each line
2) Polygon table
– Coefficients of the plane equation for each surface
– Intensity information for the surface
– Pointers into the edge table
3) Active edge list
– Contains only the edges crossing the current scan line
– Sorted in order of increasing x
Flag for each surface
• Indicates whether the current scan-line position is inside or outside the surface:
– at the leftmost boundary of a surface, the surface flag is turned on;
– at the rightmost boundary of a surface, the surface flag is turned off.
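A sketch of the three tables as Python data structures; the field names and list-based representation are illustrative assumptions, not from the source:

```python
from dataclasses import dataclass, field

@dataclass
class Edge:                      # edge-table entry
    x0: float                    # coordinate endpoints of the line
    y0: float
    x1: float
    y1: float
    inv_slope: float             # 1/m, used to step x between scan lines
    surfaces: list = field(default_factory=list)  # pointers into polygon table

@dataclass
class Surface:                   # polygon-table entry
    plane: tuple                 # (A, B, C, D) plane-equation coefficients
    intensity: tuple             # intensity (color) information
    edges: list = field(default_factory=list)     # pointers into edge table
    flag: bool = False           # on between left and right boundaries

def active_edge_list(edge_table, y):
    """Active list: edges crossing scan line y, sorted by increasing x."""
    crossing = [e for e in edge_table
                if min(e.y0, e.y1) <= y < max(e.y0, e.y1)]
    return sorted(crossing, key=lambda e: e.x0 + (y - e.y0) * e.inv_slope)
```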