
UNIT 3

Introduction to the rendering process with OpenGL
What is OpenGL?


OpenGL is a software interface to graphics hardware.

This interface consists of about 150 distinct commands.

OpenGL is designed as a streamlined, hardware-independent interface to be implemented on many different hardware platforms.

OpenGL doesn't provide high-level commands for describing models of three-dimensional objects. Such commands might allow you to specify relatively complicated shapes such as automobiles, parts of the body, airplanes, or molecules.

With OpenGL, you must build up your desired model from a small set of geometric primitives such as points, lines, and polygons.
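A minimal sketch, assuming a legacy (fixed-function) OpenGL context is already current: a square built from four vertices using the GL_POLYGON primitive. Complex models are assembled from many such primitives.

    #include <GL/gl.h>

    /* Draw a unit square from four vertices; larger models are
       built from many primitives like this one. */
    void drawSquare(void)
    {
        glBegin(GL_POLYGON);
            glVertex2f(0.0f, 0.0f);
            glVertex2f(1.0f, 0.0f);
            glVertex2f(1.0f, 1.0f);
            glVertex2f(0.0f, 1.0f);
        glEnd();
    }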


The OpenGL Utility Library (GLU) provides many of the modeling features, such as quadric surfaces and NURBS curves and surfaces. GLU is a standard part of every OpenGL implementation.

Also, there is a higher-level, object-oriented toolkit, Open Inventor, which is built on top of OpenGL and is available separately for many implementations of OpenGL.
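For instance, GLU's quadric objects let you render a sphere without specifying its vertices by hand. A sketch, assuming a current OpenGL context:

    #include <GL/glu.h>

    /* Render a sphere via a GLU quadric object instead of
       building it from individual vertices. */
    void drawSphere(void)
    {
        GLUquadric *quad = gluNewQuadric();
        gluSphere(quad, 1.0, 32, 32);  /* radius, slices, stacks */
        gluDeleteQuadric(quad);
    }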

OpenGL Rendering Pipeline
Most implementations of OpenGL have a similar order of operations, a series of processing stages called the OpenGL rendering pipeline.

Geometric data (vertices, lines, and polygons) follow the path through the stages that include evaluators and per-vertex operations, while pixel data (pixels, images, and bitmaps) are treated differently for part of the process.

Both types of data undergo the same final steps (rasterization and per-fragment operations) before the final pixel data is written into the framebuffer.


Display Lists:

All data, whether it describes geometry or pixels, can be saved in
a display list for current or later use. When a display list is
executed, the retained data is sent from the display list.
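A minimal sketch, assuming a current OpenGL 1.x context: commands are compiled into a display list once, then the retained data is replayed whenever the list is executed.

    #include <GL/gl.h>

    GLuint listId;

    /* Record drawing commands into a display list once. */
    void buildList(void)
    {
        listId = glGenLists(1);
        glNewList(listId, GL_COMPILE);   /* retain, don't execute yet */
            glBegin(GL_TRIANGLES);
                glVertex2f(0.0f, 0.0f);
                glVertex2f(1.0f, 0.0f);
                glVertex2f(0.5f, 1.0f);
            glEnd();
        glEndList();
    }

    /* Replay the retained data as often as needed. */
    void drawScene(void)
    {
        glCallList(listId);
    }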

Evaluators:

All geometric primitives are eventually described by vertices. Parametric curves and surfaces may be initially described by control points and polynomial functions called basis functions.


Evaluators provide a method to derive the vertices used to represent the surface from the control points. The method is a polynomial mapping, which can produce surface normals, texture coordinates, colors, and spatial coordinate values from the control points.
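A sketch, assuming an OpenGL 1.x context: a one-dimensional evaluator derives 31 vertices along a cubic Bézier curve from four control points (the control-point values here are illustrative).

    #include <GL/gl.h>

    /* Four control points of a cubic Bezier curve. */
    static GLfloat ctrl[4][3] = {
        {-4.0f, -4.0f, 0.0f}, {-2.0f, 4.0f, 0.0f},
        { 2.0f, -4.0f, 0.0f}, { 4.0f, 4.0f, 0.0f}
    };

    void drawCurve(void)
    {
        int i;
        /* Define the polynomial mapping from parameter u to vertices. */
        glMap1f(GL_MAP1_VERTEX_3, 0.0f, 1.0f, 3, 4, &ctrl[0][0]);
        glEnable(GL_MAP1_VERTEX_3);

        /* The evaluator produces the vertices along the curve. */
        glBegin(GL_LINE_STRIP);
        for (i = 0; i <= 30; i++)
            glEvalCoord1f((GLfloat)i / 30.0f);
        glEnd();
    }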


Per-Vertex Operations:

This stage converts vertices into primitives. If advanced features such as texturing and lighting are enabled, this stage is even busier.

Primitive Assembly:

Clipping, a major part of primitive assembly, is the elimination of portions of geometry which fall outside a half-space defined by a plane. Point clipping simply passes or rejects vertices; line or polygon clipping can add additional vertices depending upon how the line or polygon is clipped.
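For illustration (an OpenGL 1.x context is assumed), an application-defined clip plane that keeps only the half-space y >= 0 might be set up like this:

    #include <GL/gl.h>

    /* Keep the half-space where 0*x + 1*y + 0*z + 0 >= 0,
       i.e. discard all geometry below the plane y = 0. */
    void enableClip(void)
    {
        static const GLdouble eqn[4] = { 0.0, 1.0, 0.0, 0.0 };
        glClipPlane(GL_CLIP_PLANE0, eqn);
        glEnable(GL_CLIP_PLANE0);
    }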

The results of this stage are complete geometric primitives, which are the transformed and clipped vertices with related color, depth, and sometimes texture-coordinate values and guidelines for the rasterization step.


Pixel Operations:

Pixels from an array in system memory are first unpacked from one of a variety of
formats into the proper number of components.

Next the data is scaled, biased, and processed by a pixel map. The results are clamped
and then either written into texture memory or sent to the rasterization step.

There are special pixel copy operations to copy data in the framebuffer to other parts of
the framebuffer or to the texture memory. A single pass is made through the pixel transfer
operations before the data is written to the texture memory or back to the framebuffer.
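A sketch of the scale, bias, and draw path described above (the format, type, and values are illustrative):

    #include <GL/gl.h>

    /* Scale and bias the red component during the pixel transfer,
       then send the unpacked image on to the rasterization step. */
    void drawImage(const GLubyte *pixels, int w, int h)
    {
        glPixelTransferf(GL_RED_SCALE, 0.5f);   /* scale */
        glPixelTransferf(GL_RED_BIAS,  0.25f);  /* bias  */
        glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }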


Texture Assembly:

Some OpenGL implementations may have special resources to accelerate texture performance. There may be specialized, high-performance texture memory. If this memory is available, the texture objects may be prioritized to control the use of this limited and valuable resource.
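A sketch of prioritizing one texture object, assuming texture objects are already in use (the priority value is illustrative):

    #include <GL/gl.h>

    /* Hint that a texture object should stay resident in fast
       texture memory in preference to others. */
    void prioritize(GLuint texName)
    {
        GLclampf priority = 1.0f;   /* 1.0 = highest priority */
        glPrioritizeTextures(1, &texName, &priority);
    }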


Rasterization:

Rasterization is the conversion of both geometric and pixel data into fragments. Each fragment square corresponds to a pixel in the framebuffer. Line and polygon stipples, line width, point size, shading model, and coverage calculations to support antialiasing are taken into consideration as vertices are connected into lines or the interior pixels are calculated for a filled polygon. Color and depth values are assigned for each fragment square.
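For illustration, the rasterization state named above (line width, stipple, shading model, antialiasing coverage) can be set as follows; a sketch for an OpenGL 1.x context:

    #include <GL/gl.h>

    /* Rasterization state: wide, dashed, smoothly shaded lines. */
    void setLineState(void)
    {
        glLineWidth(3.0f);            /* line width in pixels      */
        glLineStipple(1, 0x00FF);     /* dash pattern              */
        glEnable(GL_LINE_STIPPLE);
        glShadeModel(GL_SMOOTH);      /* interpolate vertex colors */
        glEnable(GL_LINE_SMOOTH);     /* coverage / antialiasing   */
    }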


Fragment Operations:

Before values are actually stored into the framebuffer, a series of operations is performed that may alter or even throw out fragments. All these operations can be enabled or disabled.

The first operation which may be encountered is texturing, where a texel (texture element) is generated from texture memory for each fragment and applied to the fragment. Then fog calculations may be applied, followed by the scissor test, the alpha test, the stencil test, and the depth-buffer test (the depth buffer is for hidden-surface removal).
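A sketch enabling the per-fragment operations named above, each of which can be toggled independently (the scissor rectangle and alpha threshold are illustrative values):

    #include <GL/gl.h>

    void enableFragmentTests(void)
    {
        glEnable(GL_TEXTURE_2D);        /* texturing                */
        glEnable(GL_FOG);               /* fog calculations         */
        glEnable(GL_SCISSOR_TEST);
        glScissor(0, 0, 256, 256);      /* keep only this rectangle */
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.1f);  /* reject faint fragments   */
        glEnable(GL_STENCIL_TEST);
        glEnable(GL_DEPTH_TEST);        /* hidden-surface removal   */
    }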

OpenGL-Related Libraries

The OpenGL Utility Library (GLU) contains several routines that use lower-level OpenGL commands to perform such tasks as setting up matrices for specific viewing orientations and projections, performing polygon tessellation, and rendering surfaces.

This library is provided as part of every OpenGL implementation. GLU routines use the prefix glu.
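For example, two widely used GLU routines set up a projection matrix and a viewing orientation (the parameter values here are illustrative):

    #include <GL/glu.h>

    void setupView(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, 4.0 / 3.0, 1.0, 100.0); /* fovy, aspect, near, far */

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 5.0,   /* eye position    */
                  0.0, 0.0, 0.0,   /* look-at point   */
                  0.0, 1.0, 0.0);  /* up vector       */
    }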


For every window system, there is a library that extends the
functionality of that window system to support OpenGL rendering.
For machines that use the X Window System, the OpenGL
Extension to the X Window System (GLX) is provided as an
adjunct to OpenGL. GLX routines use the prefix glX.

For Microsoft Windows, the WGL routines provide the Windows
to OpenGL interface. All WGL routines use the prefix wgl. For IBM
OS/2, the PGL is the Presentation Manager to OpenGL interface,
and its routines use the prefix pgl.


The OpenGL Utility Toolkit (GLUT) is a window-system-independent toolkit; GLUT routines use the prefix glut.
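A minimal sketch of a complete GLUT program (the window title and display mode are arbitrary choices):

    #include <GL/glut.h>

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glFlush();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);            /* window-system-independent setup */
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutCreateWindow("GLUT window");
        glutDisplayFunc(display);         /* register the draw callback */
        glutMainLoop();                   /* hand control to GLUT */
        return 0;
    }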

Open Inventor is an object-oriented toolkit based on OpenGL which provides objects and methods for creating interactive three-dimensional graphics applications.

Open Inventor, which is written in C++, provides prebuilt objects and a built-in event model for user interaction, high-level application components for creating and editing three-dimensional scenes, and the ability to print objects and exchange data in other graphics formats. Open Inventor is separate from OpenGL.

The synthetic camera model

In computer graphics we use a synthetic camera model to mimic
the behaviour of a real camera. The image in a pinhole camera is
inverted. The film plane is behind the lens.

In the synthetic camera model we avoid the inversion by placing
the film plane, called the projection plane, in front of the lens.


We need to know six things about our synthetic camera model in order to take a picture (a sketch mapping these onto GLU calls follows the list):

1. Position of the camera (from where it's looking).

2. The Look vector specifies in what direction the camera is pointing.

3. The camera's orientation is determined by the Look vector and the angle through which the camera is rotated about that vector, i.e., the direction of the Up vector.

4. Aspect ratio of the electronic "film": the ratio of width to height.

5. Height angle: determines how much of the scene we will fit into our view volume; larger height angles fit more of the scene into the view volume, but the greater the angle, the greater the amount of perspective distortion.

6. Front and back clipping planes: limit the extent of the camera's view by rendering (parts of) objects lying between them and throwing away everything outside of them.

• Optional parameter, focal length: often used for photorealistic rendering; objects at distance focal length from the camera are rendered in sharp detail, objects closer or farther away get blurred; the reduction in sharpness is continuous. Your camera won't be implementing focal-length blurring.
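A sketch of how these parameters map onto legacy GLU calls (the numeric values and the look-at point are illustrative assumptions, not part of the model above):

    #include <GL/glu.h>

    void setupCamera(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(45.0,        /* 5. height angle (degrees)          */
                       16.0 / 9.0,  /* 4. aspect ratio of the "film"      */
                       0.1, 100.0); /* 6. front and back clipping planes  */

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 2.0, 10.0,   /* 1. position of the camera          */
                  0.0, 0.0, 0.0,    /* 2. Look vector (via look-at point) */
                  0.0, 1.0, 0.0);   /* 3. orientation via the Up vector   */
    }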
Output primitives

Graphics software and hardware provide subroutines to describe a scene in terms of basic geometric structures called output primitives.
Output primitives are combined to form complex structures.

Simplest primitives

– Point (pixel)

– Line segment

Scan Conversion

Scan conversion is the process of converting output primitives into frame-buffer updates: choosing which pixels receive which intensity values.

Constraints (see the sketch after this list):

– Straight lines should appear straight

– Primitives should start and end accurately

– Primitives should have consistent brightness along their length

– They should be drawn rapidly
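The slides do not name a particular algorithm; one common approach that meets these constraints is Bresenham's midpoint line algorithm, which uses only integer arithmetic. A sketch, assuming a hypothetical setPixel routine that writes one pixel into the frame buffer:

    #include <stdlib.h>

    /* Assumed to exist: writes one pixel into the frame buffer. */
    void setPixel(int x, int y);

    /* Scan-convert a line segment into frame-buffer updates with
       integer arithmetic only, so the line is straight, starts and
       ends exactly on its endpoints, and is drawn rapidly. */
    void drawLine(int x0, int y0, int x1, int y1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;

        for (;;) {
            setPixel(x0, y0);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }  /* step in x */
            if (e2 <= dx) { err += dx; y0 += sy; }  /* step in y */
        }
    }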
