
Computer Graphics
Lecture 41
Viewing Using OpenGL
Taqdees A. Siddiqi
[email protected]
Today we will practically implement viewing a geometric model in any orientation by transforming it in three-dimensional space, and we will control the location in three-dimensional space (the point of view, or POV) from which the model is viewed.
Also we will see:
• Clipping undesired portions
• Modeling transformation
• Projecting the model
• Combining transformations
We will also discuss how to instruct OpenGL to draw the geometric models. Now we must decide how we want to position the models in the scene, and we must choose a vantage point from which to view the scene.
• Use the default positioning
and vantage point,
• Or specify position and
vantage point
• Choose a viewpoint
If we want to look at the
corner of the room
containing a globe, then
we must decide:
- how far away from the
scene is the viewer?
- and where exactly
should the viewer be?
We would like to ensure that the final image of the scene contains a good view:
- a portion of the floor is visible,
- all the objects in the scene are visible, and
- the objects are presented in an interesting arrangement.
Now how to use OpenGL to
accomplish these tasks:
• how to position and orient models in three-dimensional space
• how to establish the location in three-dimensional space of the viewpoint
All of these factors help
determine exactly what
image appears on the
screen.
Although ultimately a 2-D image of 3-D models is drawn, we need to think in 3-D while making many of the decisions that determine what gets drawn on the screen.
A series of three computer
operations converts an
object's 3-D coordinates to
pixel positions on the screen.
Transformations, which are
represented by matrix
multiplication, include
modeling, viewing, and
projection operations.
Such operations include:
1. rotation,
2. translation,
3. scaling,
4. reflection,
5. orthographic projection, and
6. perspective projection.
Generally, we use a combination
of several transformations to
draw a scene.
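As a minimal sketch (not taken from the lecture's code), the display routine below combines a translation, a rotation, and a scaling before drawing; the routine name displayCombined, the angle, and the distances are illustrative placeholders, and the GLUT setup of Example 1 later in this lecture is assumed.

/* Illustrative sketch: combining translation, rotation, and scaling
   in one display routine (values are arbitrary). */
void displayCombined(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -5.0);    /* move the model away from the camera */
    glRotatef(30.0, 0.0, 1.0, 0.0);  /* rotate 30 degrees about the y-axis  */
    glScalef(1.0, 2.0, 1.0);         /* stretch the model along y           */
    glutWireCube(1.0);
    glFlush();
}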
Since the scene is rendered
on a rectangular window,
objects (or parts of objects)
that lie outside the window
must be clipped. In three-
dimensional computer
graphics, clipping occurs by
throwing out objects on one
side of a clipping plane.
Finally, a correspondence
must be established
between the transformed
coordinates and screen
pixels. This is known as a
viewport transformation.
Overview: The Camera Analogy
The transformation process
to produce the desired
scene for viewing is
analogous to taking a
photograph with a camera.
As shown in Figure 1 ahead,
the steps might be the
following:
0. Set up our tripod,
pointing the camera
towards the scene
(viewing transformation).
1. Arrange the scene to be
photographed into the
desired composition
(modeling transformation)
2. Choose a camera lens or
adjust the zoom (projection
transformation)
3. Determine how large we
want the final photograph to
be - we might want it enlarged
(viewport transformation)
4. After these steps are
performed, the picture can be
snapped or the scene can be
drawn
Figure 1 (a-d): The Camera Analogy
Note that these steps
correspond to the order in
which we specify the desired
transformations in our
program, not necessarily the
order in which the relevant
mathematical operations are
performed on an object's
vertices.
The viewing transformations
must precede the modeling
transformations in our code,
but we can specify the
projection and viewport
transformations at any point
before drawing occurs. Figure
2 ahead shows the order in
which these operations occur
on our computer.
Figure 2: Stages of Vertex Transformation
To specify viewing, modeling, and projection transformations, we construct a 4 × 4 matrix M, which is then multiplied by the coordinates of each vertex v in the scene to accomplish the transformation v' = Mv. (Remember that vertices always have four coordinates (x, y, z, w), though in most cases w is 1 and, for two-dimensional data, z is 0.) Note
that viewing and modeling
transformations are
automatically applied to
surface normal vectors, in
addition to vertices.
(Normal vectors are used only
in eye coordinates.) This
ensures that the normal
vector's relationship to the
vertex data is properly
preserved.
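As a rough illustration of the v' = Mv step (a sketch of the mathematics, not OpenGL's actual internal code), multiplying a 4 × 4 matrix, stored column-major as OpenGL stores it, by a homogeneous vertex looks like this:

/* Sketch: multiply a 4 x 4 matrix M (column-major) by a
   homogeneous vertex v = (x, y, z, w). */
void transformVertex(const float M[16], const float v[4], float out[4])
{
    int row;
    for (row = 0; row < 4; row++) {
        out[row] = M[row]      * v[0]
                 + M[row + 4]  * v[1]
                 + M[row + 8]  * v[2]
                 + M[row + 12] * v[3];
    }
}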
The viewing and modeling
transformations we specify
are combined to form the
modelview matrix, which is
applied to the incoming
object coordinates to yield
eye coordinates.
Next, if we've specified
additional clipping planes to
remove certain objects from
the scene or to provide
cutaway views of objects,
these clipping planes are
applied.
After that, OpenGL applies
the projection matrix to
yield clip coordinates. This
transformation defines a
viewing volume; objects
outside this volume are
clipped so that they're not
drawn in the final scene.
After this point, the
perspective division is
performed by dividing
coordinate values by w, to
produce normalized device
coordinates.
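The perspective division itself is just a division of the clip-coordinate x, y, and z values by w; a minimal sketch:

/* Sketch: perspective division from clip coordinates to
   normalized device coordinates. */
void perspectiveDivide(const float clip[4], float ndc[3])
{
    ndc[0] = clip[0] / clip[3];   /* x / w */
    ndc[1] = clip[1] / clip[3];   /* y / w */
    ndc[2] = clip[2] / clip[3];   /* z / w */
}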
Finally, the transformed
coordinates are converted to
window coordinates by
applying the viewport
transformation. We can
manipulate the dimensions of
the viewport to cause the
final image to be enlarged,
shrunk, or stretched.
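For example (a sketch, assuming a reshape(int w, int h) callback like the one in Example 1 later in this lecture), mapping the scene into only the lower-left quarter of the window shrinks the final image:

/* Sketch: use only the lower-left quarter of a w-by-h window. */
glViewport(0, 0, (GLsizei) (w / 2), (GLsizei) (h / 2));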
We might correctly suppose
that the x and y coordinates
are sufficient to determine
which pixels need to be
drawn on the screen.
However, all the
transformations are
performed on the z
coordinates as well. This
way, at the end of this
transformation process,
the z values correctly
reflect the depth of a given
vertex.
One use for this depth value
is to eliminate unnecessary
drawing. OpenGL can use
this information to determine
which surfaces are obscured
by other surfaces and can
then avoid drawing the
hidden surfaces.
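To actually make use of this in a program such as Example 1 later in this lecture, a depth buffer has to be requested and depth testing enabled; a minimal sketch (these calls are not part of the lecture's example):

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);  /* in main(): request a depth buffer */
glEnable(GL_DEPTH_TEST);                                   /* in init(): enable the depth test  */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);        /* in display(): clear both buffers  */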
As we have probably guessed by now, we need to know a few things about matrix mathematics, covered in previous lectures, to get the most out of this lecture.
A Simple Example: Drawing a Cube
Example 1 draws a cube that is scaled by a modeling transformation (see Figure 3 ahead).
The viewing transformation,
gluLookAt(), positions and
aims the camera towards
where the cube is drawn.
A projection transformation
and a viewport
transformation are also
specified. The rest of this
section explains the
transformation commands it
uses.
Figure 3: Transformed Cube
Example 1 : Transformed Cube
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

void init(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_FLAT);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glLoadIdentity();                 /* clear the current matrix           */
    gluLookAt(0.0, 0.0, 5.0,          /* viewing transformation: eye,       */
              0.0, 0.0, 0.0,          /* center,                            */
              0.0, 1.0, 0.0);         /* and up-vector                      */
    glScalef(1.0, 2.0, 1.0);          /* modeling transformation            */
    glutWireCube(1.0);
    glFlush();
}

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);   /* viewport transformation   */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);   /* projection transformation */
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}
The Viewing Transformation
Recall that the viewing
transformation is analogous to
positioning and aiming a
camera. In this code example,
before the viewing
transformation can be
specified, the current matrix is
set to the identity matrix with
glLoadIdentity().
This step is necessary. If
we don't clear the current
matrix by loading it with
the identity matrix, we
continue to combine
previous transformation
matrices with the new one
we supply.
In some cases, we do want
to perform such
combinations, but we also
need to clear the matrix
sometimes. In Example 1,
after the matrix is initialized,
the viewing transformation
is specified with
gluLookAt().
The arguments for this
command indicate where the
camera (or eye position) is
placed, where it is aimed,
and which way is up.
The arguments used here
place the camera at (0, 0, 5),
aim the camera lens
towards (0, 0, 0), and
specify the up-vector as (0,
1, 0). The up-vector defines
a unique orientation for the
camera.
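For reference, this is the same call used in Example 1, with each group of arguments annotated (the annotation is added here for clarity):

gluLookAt(0.0, 0.0, 5.0,   /* eye position: where the camera is placed */
          0.0, 0.0, 0.0,   /* center: the point the camera is aimed at */
          0.0, 1.0, 0.0);  /* up-vector: which direction is up         */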
If gluLookAt() is not called, the camera has a default position and orientation. By default, the camera is situated at the origin, points down the negative z-axis, and has an up-vector of (0, 1, 0). In Example 1, the overall effect is that gluLookAt() moves the camera 5 units along the z-axis.
The Modeling Transformation
We use the modeling
transformation to position
and orient the model. For
example, we can rotate,
translate, or scale the model
or perform some
combination of these
operations.
In Example 1, glScalef() is the modeling transformation used. The arguments for this command specify how scaling occurs along the three axes. If all arguments are 1.0, the command has no effect. In Example 1, the cube is drawn twice as large in the y direction.
Thus, if one corner of the cube
had originally been at (3.0, 3.0,
3.0), that corner would wind up
being drawn at (3.0, 6.0, 3.0).
The effect of this modeling
transformation is to transform
the cube so that it isn't a cube
but a rectangular box.
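In code form (the call is the one from Example 1; the comment is added here):

glScalef(1.0, 2.0, 1.0);   /* maps a vertex (x, y, z) to (x, 2y, z),
                              so (3.0, 3.0, 3.0) ends up at (3.0, 6.0, 3.0) */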
Now change the gluLookAt()
call in Example 1 to the
modeling transformation
glTranslatef() with parameters
(0.0, 0.0, -5.0). The result should
look exactly the same as when
we used gluLookAt(). Why are
the effects of these two
commands similar?
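Concretely, the modified display() routine for this exercise might look like the following sketch (everything else in Example 1 stays the same):

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glLoadIdentity();
    /* Move the cube 5 units down the negative z-axis instead of
       moving the camera 5 units up the positive z-axis. */
    glTranslatef(0.0, 0.0, -5.0);
    glScalef(1.0, 2.0, 1.0);
    glutWireCube(1.0);
    glFlush();
}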
Note that instead of moving
the camera (with a viewing
transformation) so that the
cube could be viewed, we
could have moved the cube.
This duality in the nature of
viewing and modeling
transformations is the
reason why we need to
think about the effect of
both types of
transformations
simultaneously.
It doesn't make sense to try to
separate the effects, but
sometimes it's easier to think
about them one way rather
than the other. This is also
why modeling and viewing
transformations are combined
into the modelview matrix
before the transformations are
applied.
Also note that the
modeling and viewing
transformations are
included in the display()
routine, along with the call
that's used to draw the
cube, glutWireCube().
This way, display() can be used
repeatedly to draw the
contents of the window if, for
example, the window is moved
or uncovered, and we've
ensured that each time, the
cube is drawn in the desired
way, with the appropriate
transformations.
The potential repeated use of
display() underscores the need
to load the identity matrix
before performing the viewing
and modeling transformations,
especially when other
transformations might be
performed between calls to
display().
The Projection Transformation
Specifying the projection
transformation is like
choosing a lens for a
camera. We can think of this
as determining what the field
of view is and therefore what
objects are inside it and to
some extent how they look.
This is equivalent to choosing among wide-angle, normal, and telephoto lenses. For example, with a wide-angle lens we can include a wider scene in the final photograph than with a telephoto lens.
But a telephoto lens allows
us to photograph objects as
though they're closer to us
than they actually are.
In computer graphics, we
don't have to pay $10,000
for a 2000-millimeter
telephoto lens; once we've
bought our graphics
workstation, all we need to
do is use a smaller number
for our field of view.
In addition to the field-of-
view considerations, the
projection transformation
determines how objects are
projected onto the screen,
as its name suggests.
Two basic types of projections are provided by OpenGL, along with several corresponding commands for describing the relevant parameters. One type is the perspective projection, which matches how we see things in daily life; railroad tracks that appear to converge in the distance are a familiar example.
If we're trying to make
realistic pictures, we'll
want to choose
perspective projection,
which is specified with the
glFrustum() command in
this code example.
The other type, orthographic projection, maps objects directly onto the screen without perspective foreshortening; it is used in architectural and computer-aided design applications.
Before glFrustum() can be
called to set the projection
transformation, some
preparation needs to happen.
As shown in the reshape()
routine in Example 1, the
command called glMatrixMode()
is used first, with the argument
GL_PROJECTION.
This indicates that the current
matrix specifies the projection
transformation; the following
transformation calls then affect
the projection matrix. As we can
see, a few lines later
glMatrixMode() is called again,
this time with GL_MODELVIEW
as the argument.
This indicates that succeeding
transformations now affect the
modelview matrix instead of the
projection matrix.
Note that glLoadIdentity() is
used to initialize the current
projection matrix so that only
the specified projection
transformation has an effect.
Now glFrustum() can be
called, with arguments that
define the parameters of the
projection transformation.
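For reference, the call from Example 1 with its arguments annotated (the annotation is added here for clarity):

glFrustum(-1.0, 1.0,    /* left and right edges of the near clipping plane */
          -1.0, 1.0,    /* bottom and top edges of the near clipping plane */
           1.5, 20.0);  /* distances to the near and far clipping planes   */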
In this example, both the
projection transformation
and the viewport
transformation are contained
in the reshape() routine,
which is called when the
window is first created and
whenever the window is
moved or reshaped.
This makes sense, since
both projecting (the width to
height aspect ratio of the
projection viewing volume)
and applying the viewport
relate directly to the screen,
and specifically to the size or
aspect ratio of the window
on the screen.
Change the glFrustum() call in
Example 1 to the more
commonly used Utility Library
routine gluPerspective() with
parameters (60.0, 1.0, 1.5, 20.0).
Then experiment with different values, especially for fov (the field of view) and for the near and far clipping planes.
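A sketch of the modified reshape() routine for this exercise (the fixed aspect ratio of 1.0 matches the 500 × 500 window; it could instead be computed as (GLfloat) w / (GLfloat) h):

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* fovy = 60 degrees, aspect = 1.0, near = 1.5, far = 20.0 */
    gluPerspective(60.0, 1.0, 1.5, 20.0);
    glMatrixMode(GL_MODELVIEW);
}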