Computer Graphics

 Raster display systems

 Introduction to the 3D graphics pipeline

 The Z Buffer for hidden surface removal

Raster Scan Display:
A raster scan display is based on intensity control of pixels arranged in a
rectangular grid, called the raster, on the screen. Information about on and off
pixels is stored in the refresh buffer, or frame buffer.
Conventional televisions are based on the raster scan method.

The raster scan system stores information for each pixel position, so it is suitable
for realistic display of objects. Raster scan provides a refresh rate of 60 to 80
frames per second.
The frame buffer is also known as the raster or bitmap.

The positions in the frame buffer are called picture elements, or pixels.


Beam refreshing involves two types of retracing: horizontal retrace and vertical
retrace. At the end of each scan line, the beam returns to the left side of the
screen to begin the next line; this is called horizontal retrace. When the beam
reaches the bottom-right corner of the screen, it returns to the top-left corner
to begin the next frame; this is called vertical retrace, as shown in the figure.
Types of scanning (travel of the beam) in a raster scan:
1. Non-interlaced scanning
2. Interlaced scanning
In non-interlaced (progressive) scanning, each horizontal line of the screen is
traced in sequence from top to bottom. At the refresh rate of 30 frames per
second used for non-interlaced displays, this produces visible flicker.
Interlaced scanning solves this problem: the electron beam first traces the
odd-numbered lines, and then, in the next cycle, the even-numbered lines.
An interlaced display achieves an effective refresh rate of 60 fields per second.

Advantages of RDS (raster display systems):
Realistic images
Millions of different colors can be generated
Shadowed scenes are possible

Disadvantages of RDS:
Low resolution
Expensive

Random Scan vs. Raster Scan
1. Resolution: random scan has high resolution; raster scan resolution is low.
2. Cost: random scan is more expensive; raster scan is less expensive.
3. Modification: easy in random scan; difficult in raster scan.
4. Solid patterns: tough to fill with random scan; easy to fill with raster scan.
5. Refresh rate: in random scan it depends on the resolution (number of lines) of the picture; in raster scan it does not depend on the picture.
6. Coverage: random scan draws only the areas of the screen containing the image; raster scan scans the whole screen.
7. Color technology: random scan uses beam penetration; raster scan uses a shadow mask.
8. Interlacing: random scan does not use interlacing; raster scan does.
9. Applications: random scan is restricted to line-drawing applications; raster scan is suitable for realistic display.

Introduction to the 3D graphics pipeline

The process of converting vector graphic representations of objects
into a raster image is performed by a well-defined, step-by-step process
called the graphics pipeline.
In the early days of computer graphics the pipeline was not
programmable: data was fed into the pipeline, and a raster image came
out the other side.
Today's GPUs, however, allow a programmer to control certain parts of
the pipeline using shader programs.
A shader program is a specialized set of instructions for manipulating
graphics data in the graphics pipeline.
Shader programs give a programmer great flexibility and creative
potential, but at the cost of added complexity.

3D graphics pipeline
 Computer graphics is the process of mapping 3D scenes onto a 2D screen.
 The graphics pipeline, or rendering pipeline, defines the primitive
operations required for this mapping.
 OpenGL and Direct3D are two notable graphics pipeline models.
 The graphics pipeline works primarily in three stages:
1. Application
2. Geometry
3. Rasterization

Application → Geometry → Rasterization → Screen


Step 1 in the pipeline is data setup.
A WebGL program must establish a link between the attribute variables
in a vertex shader program and the GPU buffers that hold the data those
variables use.
Any other data the shaders need for rendering a model is also copied
to the shader program. These values are called uniform variables because
they do not change while a single rendering is performed.
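As a minimal TypeScript sketch of this setup (assuming a compiled and linked
WebGLProgram named program and a 4x4 transformMatrix exist; the names
a_position and u_transform are illustrative, not from the slides):

    // Assumed to exist elsewhere: a compiled/linked shader program and a
    // 16-element column-major transformation matrix.
    declare const program: WebGLProgram;
    declare const transformMatrix: Float32Array;

    const canvas = document.querySelector('canvas') as HTMLCanvasElement;
    const gl = canvas.getContext('webgl') as WebGLRenderingContext;

    // Copy the vertex data into a GPU buffer.
    const vertices = new Float32Array([0, 0.5, 0, -0.5, -0.5, 0, 0.5, -0.5, 0]);
    const vertexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

    // Link the buffer to the vertex shader's attribute variable.
    const aPosition = gl.getAttribLocation(program, 'a_position');
    gl.enableVertexAttribArray(aPosition);
    gl.vertexAttribPointer(aPosition, 3, gl.FLOAT, false, 0, 0);

    // Copy a uniform variable: the same matrix is used for the whole rendering.
    const uTransform = gl.getUniformLocation(program, 'u_transform');
    gl.uniformMatrix4fv(uTransform, false, transformMatrix);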

Step 2 is execution of a vertex shader program on each vertex of a
geometric primitive defined in model world coordinates. Each vertex is
transformed into normalized device coordinates. This is typically done
with a single transformation matrix, which combines the transformations
that move a model to the correct place in the scene, position the model
in front of a virtual camera, and project the world coordinates into a
unit cube.
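A minimal vertex shader of this kind might look as follows (GLSL source
embedded as a TypeScript string, as WebGL programs typically do; the variable
names are illustrative):

    const vertexShaderSource = `
      // Per-vertex input in model (world) coordinates.
      attribute vec3 a_position;

      // Combined model-view-projection matrix; the same uniform value is
      // used for every vertex of a single rendering.
      uniform mat4 u_transform;

      void main() {
        // The GPU divides gl_Position by its w component, yielding
        // normalized device coordinates inside the unit cube.
        gl_Position = u_transform * vec4(a_position, 1.0);
      }
    `;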

Step 3 clips away all of the data that is outside the field-of-view of the
virtual camera.

Step 4 maps the model data from normalized device coordinates into a
viewport defined in pixels.
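As a sketch of this mapping for the x and y coordinates (assuming NDC values
in [-1, 1] and a viewport given by its pixel origin and size; the function name
is illustrative, and any y-axis flip between NDC and window coordinates is
ignored):

    function ndcToViewport(
      ndcX: number, ndcY: number,
      vpX: number, vpY: number, vpWidth: number, vpHeight: number,
    ): { px: number; py: number } {
      // Map [-1, 1] to [0, 1], then scale and offset into the viewport.
      const px = vpX + ((ndcX + 1) / 2) * vpWidth;
      const py = vpY + ((ndcY + 1) / 2) * vpHeight;
      return { px, py };
    }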

Step 5 rasterizes a geometric primitive by determining which pixels in
the raster image are inside its boundaries.

Step 6 executes a fragment shader on each pixel that composes a
geometric primitive and outputs a color value for the pixel.
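A minimal fragment shader that outputs a constant color for every pixel of a
primitive could be (again GLSL embedded in a TypeScript string; u_color is an
illustrative uniform name):

    const fragmentShaderSource = `
      precision mediump float;

      // Color supplied by the application as a uniform variable.
      uniform vec4 u_color;

      void main() {
        // gl_FragColor is the color output for this pixel (fragment).
        gl_FragColor = u_color;
      }
    `;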

Step 7 combines (composites) the color of a pixel from a fragment
shader with the color of the pixel already assigned to the output draw
buffer.
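In WebGL this combination is controlled by blending state; as one common
configuration (a sketch reusing the gl context from the Step 1 example, not
the only possible choice):

    // Composite the incoming fragment color with the color already in the
    // draw buffer using standard source-alpha blending.
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);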

CPU
a central processing unit that controls a computing device and
performs general calculations
GPU
a graphics processing unit is a hardware device that performs
calculations and tasks specifically for a graphics pipeline.
Graphics pipeline
a series of steps that perform a rendering process.
Shader program
a computer program that runs on a GPU that performs one or more
steps of a graphics pipeline.
Vertex shader
a computer program that runs on a GPU to position the geometry of
models in a scene.
Fragment shader
a computer program that runs on a GPU to assign a color to the pixels
that compose a surface of a model.
Figure shows the 3D graphics pipeline

Z-Buffer or Depth-Buffer method
When viewing a picture containing non-transparent (opaque) objects and
surfaces, objects that lie behind objects closer to the eye cannot be
seen. To produce a realistic screen image, these hidden surfaces must be
removed. Identifying and removing these surfaces is called the
hidden-surface problem.
The Z-buffer, also known as the depth-buffer method, is one of the most
commonly used methods for hidden-surface detection. It is an image-space
method: image-space methods work on the pixels to be drawn in 2D. For
these methods, the running time is proportional to the number of pixels
times the number of objects, and the space requirement is twice the
number of pixels, because two arrays of pixels are required: one for the
frame buffer and one for the depth buffer.
The Z-buffer method compares surface depths at each pixel position on
the projection plane. Normally the z-axis represents depth. The
algorithm for the Z-buffer method is given below:
Algorithm:
First, initialize the depth of each pixel:
    d(i, j) = infinity (maximum depth)
Initialize the color value of each pixel:
    c(i, j) = background color
Then, for each polygon, do the following:

for (each pixel (i, j) in the polygon's projection)
{
    find the depth z of the polygon at the (x, y) corresponding to pixel (i, j);
    if (z < d(i, j))
    {
        d(i, j) = z;        // record the new closest depth
        c(i, j) = color;    // record the polygon's color
    }
}
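As a concrete, runnable TypeScript sketch of this algorithm (the raster size,
data layout, and the simplified Polygon shape are illustrative assumptions,
not from the slides):

    // Depth-buffer (Z-buffer) hidden-surface removal over a SIZE x SIZE raster.
    const SIZE = 8;
    const depth: number[][] = Array.from({ length: SIZE }, () =>
      new Array<number>(SIZE).fill(Number.POSITIVE_INFINITY)); // d(i, j) = infinity
    const color: number[][] = Array.from({ length: SIZE }, () =>
      new Array<number>(SIZE).fill(0));                        // c(i, j) = background

    // A "polygon" is simplified here to the pixels its projection covers,
    // a function giving its depth z at each pixel, and a flat color.
    interface Polygon {
      pixels: Array<[number, number]>;
      depthAt: (i: number, j: number) => number;
      color: number;
    }

    function zBuffer(polygons: Polygon[]): void {
      for (const poly of polygons) {
        for (const [i, j] of poly.pixels) {
          const z = poly.depthAt(i, j);  // depth of this polygon at pixel (i, j)
          if (z < depth[i][j]) {         // closer than anything recorded so far?
            depth[i][j] = z;             // record the new closest depth
            color[i][j] = poly.color;    // and the polygon's shading
          }
        }
      }
    }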
The depth buffer is an extension of the frame buffer. The depth-buffer
algorithm requires two arrays, intensity and depth, each of which is
indexed by pixel coordinates (x, y).
Algorithm explanation
1. For all pixels on the screen, set depth[x, y] to 1.0 and
intensity[x, y] to a background value.
2. For each polygon in the scene, find all pixels (x, y) that lie within
the boundaries of the polygon when projected onto the screen. For each
of these pixels:
(a) Calculate the depth z of the polygon at (x, y).
(b) If z < depth[x, y], this polygon is closer to the observer than
others already recorded for this pixel. In this case, set depth[x, y]
to z and intensity[x, y] to a value corresponding to the polygon's
shading. If instead z > depth[x, y], the polygon already recorded at
(x, y) lies closer to the observer than this new polygon, and no action
is taken.
3. After all polygons have been processed, the intensity array contains
the solution.
4. The depth-buffer algorithm illustrates several features common to all
hidden-surface algorithms.
5. First, it requires a representation of all opaque surfaces in the
scene, polygons in this case.
6. These polygons may be faces of polyhedra recorded in the model of the
scene, or may simply represent thin opaque 'sheets' in the scene.
7. The second important feature of the algorithm is its use of a screen
coordinate system. Before step 1, all polygons in the scene are
transformed into the screen coordinate system using matrix multiplication.

Let's consider an example to understand the algorithm better.
Assume the polygon is as given below:

To start, assume that the depth of each pixel is infinite.

Since the z value, i.e., the depth value, at every position in the given
polygon is 3, applying the algorithm gives the result shown below.

Now, let's change the z values. In the figure given below, the z values
go from 0 to 3.

Initially, the depth of each pixel is again infinite.

Now the z values generated at each pixel differ, as shown below:

 Therefore, in the Z-buffer method, each surface is processed
separately, one position at a time across the surface. The depth
values, i.e., the z values, for a pixel are then compared, and the
closest (smallest z) surface determines the color to be displayed in
the frame buffer. The z values are usually normalized to the range
[0, 1]; with this convention, z = 0 corresponds to the front (near)
clipping plane and z = 1 to the back (far) clipping plane, since a
smaller z means closer to the viewer.
 If z < depth[x, y], this polygon is closer to the observer than others
already recorded for this pixel. In this case, set depth[x, y] to z and
intensity[x, y] to a value corresponding to the polygon's shading.
 If z > depth[x, y], the polygon already recorded at (x, y) lies closer
to the observer than this new polygon, and no action is taken.

In this method, two buffers are used:

1. Frame buffer
2. Depth buffer
Calculation of depth:
The equation of the plane is:
ax + by + cz + d = 0, which implies
z = -(ax + by + d)/c,  c != 0
Calculating the depth at every pixel directly from this equation would
be very expensive, but the computation can be reduced to a single
addition per pixel by using an incremental method, as shown in the
figure below:

Let's denote the depth at point A = (X, Y) as Z, and at the adjacent
point B = (X + 1, Y) as Z'. Therefore:

AX + BY + CZ + D = 0 implies

Z = (-AX - BY - D)/C          ...(1)

Similarly, Z' = (-A(X + 1) - BY - D)/C          ...(2)

Hence from (1) and (2), we conclude:

Z' = Z - A/C          ...(3)


Hence, calculation of depth can be done by recording the plane
equation of each polygon in the (normalized) viewing coordinate system
and then using the incremental method to find the depth Z.
So, to summarize, this approach compares surface depths at each pixel
position on the projection plane. Object depth is usually measured from
the view plane along the z-axis of the viewing system.
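As a sketch of this incremental evaluation along one scan line (the plane
coefficients A, B, C, D are assumed given, with C != 0; the function name is
illustrative):

    // Depth along a scan line: moving one pixel to the right changes the
    // depth by the constant -A/C, so only one addition per pixel is needed.
    function scanlineDepths(
      A: number, B: number, C: number, D: number,
      y: number, xStart: number, xEnd: number,
    ): number[] {
      const depths: number[] = [];
      // Full evaluation once, at the left end of the span:
      // Z = (-AX - BY - D)/C, from equation (1).
      let z = (-A * xStart - B * y - D) / C;
      for (let x = xStart; x <= xEnd; x++) {
        depths.push(z);
        z -= A / C; // Z' = Z - A/C for the next pixel (X + 1, Y), equation (3).
      }
      return depths;
    }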
Example

 Let S1, S2, S3 be surfaces. The surface closest to the projection
plane is called the visible surface. The computer would start
(arbitrarily) with surface S1 and put its depth value into the buffer.
It would do the same for the next surface.
 It would then check each overlapping pixel to see which surface is
closer to the viewer, and display the appropriate color. At view-plane
position (x, y), surface S1 has the smallest depth from the view plane,
so it is visible at that position.
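Using the zBuffer sketch from earlier, this situation could be reproduced with
two overlapping constant-depth surfaces (the coordinates and color codes are
illustrative):

    const S1: Polygon = {
      pixels: [[2, 2], [3, 2], [2, 3], [3, 3]],
      depthAt: () => 1, // constant depth 1: closer to the view plane
      color: 1,
    };
    const S2: Polygon = {
      pixels: [[3, 3], [4, 3], [3, 4], [4, 4]],
      depthAt: () => 2, // constant depth 2: farther away
      color: 2,
    };
    zBuffer([S1, S2]);
    // At the shared pixel (3, 3), S1 wins the depth test: color[3][3] === 1.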
Points to remember:
1) The Z-buffer method does not require pre-sorting of polygons.
2) The method executes quickly even with many polygons.
3) It can be implemented in hardware to overcome the speed problem.
4) No object-to-object comparison is required.
5) The method can be applied to non-polygonal objects.
6) Hardware implementations of this algorithm are available in some
graphics workstations.
7) The method is simple to use and does not require additional data
structures.
8) The z-value of a polygon can be calculated incrementally.
9) It cannot be applied to transparent surfaces; it deals only with
opaque surfaces.
10) If only a few objects in the scene are to be rendered, this method
is less attractive because of the additional buffer and the overhead
involved in updating it.
11) Time may be wasted drawing hidden objects that are later overwritten
by closer ones.