UNIT 3

Chapter 3: Introduction to the Rendering Process with OpenGL

By: Tsega A. (MSc)
What is OpenGL?


OpenGL is a software interface to graphics hardware.


This interface consists of about 150 distinct commands.


OpenGL is designed as a streamlined, hardware-independent
interface to be implemented on many different hardware
platforms.

What is OpenGL?


Open Graphics Library (OpenGL) is a cross-language (language-independent), cross-platform (platform-independent) API for rendering 2D and 3D vector graphics (the use of geometric primitives such as polygons to represent images).
Cont.. The OpenGL API is designed to be implemented mostly in hardware.


Design: The API is defined as a set of functions which may be called by the client program. Although the functions are similar to those of the C language, it is language independent.

Development: It is an evolving API, and the Khronos Group regularly releases new versions with extended features compared to the previous one. GPU vendors may also provide additional functionality in the form of extensions.

Associated Libraries: The earliest versions were released with a companion library called the OpenGL Utility Library (GLU). But since programming directly in OpenGL can be quite complex, other libraries such as the OpenGL Utility Toolkit (GLUT) were added to make it easier.

Cont..

OpenGL doesn’t provide high−level commands for describing
models of three−dimensional objects.


Such commands might allow you to specify relatively
complicated shapes such as automobiles, parts of the body,
airplanes, or molecules.


With OpenGL, you must build up your desired model from a
small set of geometric primitives such as points, lines, and
polygons.

Cont..

The OpenGL Utility Library (GLU) provides many of the modeling
features, such as quadric surfaces and NURBS curves and
surfaces.


GLU is a standard part of every OpenGL implementation. Also,
there is a higher−level, object−oriented toolkit, Open Inventor,
which is built on top of OpenGL and is available separately for
many platforms.

OpenGL syntax

OpenGL commands use the prefix gl and initial capital letters for
each word making up the command name such as glBegin().
Similarly, OpenGL defined constants begin with GL_, use all capital
letters, and use underscores to separate words such
as GL_COLOR_BUFFER_BIT.

Some OpenGL command names have a number and one, two, or
three letters at the end to denote the number and type of
parameters to the command. The first character indicates the
number of values of the indicated type that must be presented to the
command. The second character or character pair indicates the
specific type of the arguments: 8-bit integer, 16-bit integer, 32-bit
integer, single-precision floating-point, or double-precision
floating-point. For example, glVertex3f() takes three 32-bit
floating-point values.

Features of OpenGL

3D Transformations

Color models

Lighting

Rendering

Modeling

OpenGL Rendering Pipeline

Most implementations of OpenGL have a similar order of
operations, a series of processing stages called the OpenGL
rendering pipeline.


Geometric data (vertices, lines, and polygons) follow the path
through the row of boxes that includes evaluators and
per−vertex operations, while pixel data (pixels, images, and
bitmaps) are treated differently for part of the process.

Cont…

[Figure: the OpenGL rendering pipeline]

Display Lists:

All data, whether it describes geometry or pixels, can be saved
in a display list for current or later use. When a display list is
executed, the retained data is sent from the display list.

Evaluators:

All geometric primitives are eventually described by vertices.
Parametric curves and surfaces may be initially described by
control points and polynomial functions called basis functions.

Evaluators provide a method to derive the vertices used to
represent the surface from the control points. The method is a
polynomial mapping, which can produce surface normals,
texture coordinates, colors, and spatial coordinate values from
the control points.


Per−Vertex Operations:

This stage converts the vertices into primitives. If advanced
features, such as texturing and lighting, are enabled, this
stage is even busier.

Primitive Assembly:

Clipping, a major part of primitive assembly, is the
elimination of portions of geometry which fall outside
a half−space defined by a plane. Point clipping simply
passes or rejects vertices.

Cont..

The results of this stage are complete geometric
primitives, which are the transformed and clipped
vertices with related color, depth, and sometimes
texture−coordinate values, and guidelines for the
rasterization step.


Cont..

Pixel Operations:

Pixels from an array in system memory are first
unpacked from one of a variety of formats into the
proper number of components.

Next the data is scaled, biased, and processed by a
pixel map. The results are clamped and then either
written into texture memory or sent to the rasterization
step.

Cont..

Texture Assembly:

Some OpenGL implementations may have special
resources to accelerate texture performance. There
may be specialized, high−performance texture
memory. If this memory is available, the texture
objects may be prioritized to control the use of this
limited resource.

Cont…

Rasterization:

Rasterization is the conversion of both geometric and
pixel data into fragments. Each fragment square
corresponds to a pixel in the framebuffer. Line and polygon
stipples, line width, point size, the shading model, and coverage
calculations to support antialiasing are taken into
consideration as each fragment is computed.

Fragment Operations:

Before values are actually stored into the framebuffer,
a series of operations are performed that may alter or
even throw out fragments. All these operations can be
enabled or disabled.

The first operation which may be encountered is
texturing, where a texel (texture element) is generated
from texture memory for each fragment and applied to
the fragment.
OpenGL−Related Libraries

The OpenGL Utility Library (GLU) contains several
routines that use lower−level OpenGL commands to
perform such tasks as setting up matrices for specific
viewing orientations and projections, performing
polygon tessellation, and rendering surfaces.

This library is provided as part of every OpenGL
implementation. GLU routines use the prefix glu.


Cont..

For every window system, there is a library that
extends the functionality of that window system to
support OpenGL rendering. For machines that use the
X Window System, the OpenGL Extension to the X
Window System (GLX) is provided as an adjunct to
OpenGL. GLX routines use the prefix glX.

For Microsoft Windows, the WGL routines provide the
Windows to OpenGL interface.

Cont..

The OpenGL Utility Toolkit (GLUT) is a window
system−independent toolkit; GLUT routines use the
prefix glut.

Open Inventor is an object−oriented toolkit based on OpenGL
which provides objects and methods for creating interactive
three−dimensional graphics applications.

Open Inventor, which is written in C++, provides prebuilt objects
and a built−in event model for user interaction, and high−level
application components for creating and editing
three−dimensional scenes.
Coordinate Systems

To transform our vertices from the local (to
the object) coordinates to the
Normalized Device Coordinates (NDC) requires
three major transformations with associated matrices:
the model, view and projection transformations.

The final matrix combining the model, view and
projection transformations is called MVP, and it is
obtained as the matrix-matrix multiplication of the
individual transformation matrices.

Cont..

To transform the coordinates from one space to the next, we
use transformation matrices.

Our vertex coordinates first start in local space, as coordinates
relative to the object's local origin.
From World to View/Camera
coordinates

The view space is the space as seen from
the camera’s point of view (also user or eye
view).

The synthetic camera model

In computer graphics we use a synthetic camera
model to mimic the behaviour of a real camera. The
image in a pinhole camera is inverted. The film plane
is behind the lens.

In the synthetic camera model we avoid the inversion
by placing the film plane, called the projection plane, in
front of the lens (the center of projection).

We need to know six things about our synthetic
camera model in order to take a picture

1. Position of the camera (from where it’s looking)

2. The Look vector specifies in what direction the camera is
pointing
3. The camera’s Orientation is determined by the Look vector and
the angle through which the camera is rotated about that vector,
i.e. the Up vector

4. Aspect ratio of the electronic “film”: ratio of width to height

5. Height angle: determines how much of the scene we will fit into
our view volume; larger height angles fit more of the scene into
the view volume – the greater the angle, the greater the amount
of perspective distortion

6. Front and back clipping planes: limit the extent of the camera’s
view by rendering only (parts of) objects lying between them
Output primitives

Graphics software and hardware provide subroutines to describe
a scene in terms of basic geometric structures called output primitives.

Output primitives are combined to form complex structures.

Simplest primitives

– Point (pixel)

– Line segment

Scan Conversion

Scan conversion is the process of converting output primitives into
frame-buffer updates: choosing which pixels receive which
intensity values.

Constraints

– Straight lines should appear straight

– Primitives should start and end accurately

– Primitives should have a consistent brightness along their
length

– They should be drawn rapidly
