CG Unit-5

The document discusses several topics related to 3D computer graphics and animation. It defines key concepts like wireframe visibility methods, computer animation, color models, transformation from world to view coordinates, and view reference points. It also discusses visible surface detection algorithms like back-face detection, A-buffer method, and depth-buffer method. Finally, it covers design of animation sequences including storyboarding, object definitions, key frames, and in-betweens as well as animation languages.

2-Marks:

1. What is wireframe visibility method?


Procedures for determining the visibility of object edges are referred to as wireframe-visibility methods. They are also called visible-line detection methods or hidden-line detection methods.
2. Define Computer Animation
Computer animation generally refers to any time sequence of visual changes in a scene. An animation can also be generated by changing camera parameters, such as position, orientation, and focal length.
3. Define Color Model
A color model is a method for explaining the properties or behavior of color within some
particular context.
4. Write down the steps involved in the transformation from world to view coordinates
 Translate the view reference point to the origin of the world coordinate system.
 Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes, respectively.
5. Define view reference point
The world-coordinate position chosen as the origin of the viewing-coordinate system is called the view reference point.
6. What is a storyboard?
A storyboard is an outline of the action. It defines the motion sequence as a set of basic events that are to take place.

5-Marks:

1. Explain 3D transformation from world to view coordinate


The transformation from world to view coordinates involves the following steps:
 Translate the view reference point to the origin of the world coordinate system.
 Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes, respectively.

The rotation sequence can require up to three coordinate-axis rotations, depending on the direction chosen for the view-plane normal N. In general, if N is not aligned with any world-coordinate axis, we can superimpose the viewing and world systems with the rotation sequence Rz · Ry · Rx.

The complete world-to-viewing coordinate transformation matrix is obtained as the matrix product Mwc,vc = R · T, where T translates the view reference point to the world origin and R performs the axis-aligning rotations.
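The two steps above can be sketched as a small matrix routine. The function name and argument conventions below are illustrative assumptions (a view-up vector is assumed in order to fix the remaining viewing axes):

```python
import numpy as np

def world_to_view_matrix(vrp, n, up):
    """Build the world-to-viewing transformation Mwc,vc = R . T.

    vrp : view reference point (world coordinates)
    n   : view-plane normal vector N
    up  : view-up vector (assumed here to complete the frame)
    """
    # Unit axis vectors of the viewing system.
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)                  # zv axis
    u = np.cross(up, n)
    u /= np.linalg.norm(u)                  # xv axis
    v = np.cross(n, u)                      # yv axis (already unit length)

    # T: translate the view reference point to the world origin.
    T = np.eye(4)
    T[:3, 3] = -np.asarray(vrp, dtype=float)

    # R: rotate so that the xv, yv, zv axes align with xw, yw, zw.
    R = np.eye(4)
    R[0, :3], R[1, :3], R[2, :3] = u, v, n

    return R @ T
```

With the view reference point at (1, 2, 3), that point maps to the viewing-coordinate origin, as the definition of the view reference point requires.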

2. Explain CMY color model


10-Marks:

1. Explain any three visible surface detection algorithms


Visible-surface detection algorithms are broadly classified into two categories:
 Object-space methods
 Image-space methods
An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces are visible.
An image-space algorithm decides visibility point by point at each pixel position on the projection plane.

Back-Face Detection
A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" test. A point (x, y, z) is inside a polygon surface with plane parameters A, B, C, and D if
Ax + By + Cz + D < 0
When an inside point is along the line of sight to the surface, the polygon must be a back face. We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C). If V is a vector in the viewing direction from the eye position, the polygon is a back face if
V · N > 0
If object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing zv axis, then V = (0, 0, Vz) and V · N = Vz C, so we only need to consider the sign of C, the z component of the normal vector N. Thus, in general, we can label any polygon as a back face if its normal vector has a z component satisfying
C ≤ 0
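The test V · N > 0 can be written directly. The helper below is an illustrative sketch (the function name and argument order are assumptions):

```python
def is_back_face(normal, view_dir):
    """A polygon is a back face when V . N > 0, i.e. the surface
    normal points away from the viewer.

    normal   : (A, B, C) plane parameters = normal-vector components
    view_dir : vector V in the viewing direction from the eye
    """
    vx, vy, vz = view_dir
    a, b, c = normal
    return vx * a + vy * b + vz * c > 0

# Special case from the text: viewing along the zv axis, V = (0, 0, Vz),
# so V . N = Vz * C and only the sign of C needs to be examined.
```

For a viewer looking along the negative z axis, a face whose normal also points along negative z faces away from the viewer and is culled; a face whose normal points toward the viewer is kept.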

A-Buffer Method:
The A-buffer method is an extension of the depth-buffer method. It is an antialiased, area-averaged, accumulation-buffer method developed by Lucasfilm for the surface-rendering system called REYES (Renders Everything You Ever Saw). A drawback of the depth buffer is that it can identify only one visible surface at each pixel position. The A-buffer method expands the depth buffer so that each position in the buffer can reference a linked list of surfaces. Each position in the A-buffer has two fields:
Depth field – stores a positive or negative real number.
Intensity field – stores surface-intensity information or a pointer value.
If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage.
If the depth field is negative, this indicates multiple surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. Data for each surface in the linked list include:
 RGB intensity components
 Opacity parameter
 Depth
 Percent of area coverage
 Surface identifier
 Other surface-rendering parameters
 Pointer to next surface
Using the opacity factors and percentages of surface overlap, we can calculate the intensity of each pixel as an average of the contributions from the overlapping surfaces.
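The two-field buffer position and the linked list of surface data can be sketched as small data structures. All names below are illustrative assumptions, and the intensity calculation is a simplified weighting by opacity and coverage:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceNode:
    """One entry in the linked list of surface data for a pixel."""
    rgb: tuple            # RGB intensity components
    opacity: float        # opacity parameter
    depth: float
    coverage: float       # percent of pixel area covered, in [0, 1]
    next: Optional["SurfaceNode"] = None   # pointer to next surface

@dataclass
class ABufferEntry:
    """One A-buffer position: depth >= 0 means a single surface,
    depth < 0 means the surfaces list holds multiple contributions."""
    depth: float = 0.0
    intensity: tuple = (0.0, 0.0, 0.0)
    surfaces: Optional[SurfaceNode] = None

def pixel_intensity(entry):
    """Combine overlapping-surface contributions, weighting each by
    opacity * coverage (a simplified averaging sketch)."""
    if entry.depth >= 0:
        return entry.intensity
    total = [0.0, 0.0, 0.0]
    node = entry.surfaces
    while node is not None:
        w = node.opacity * node.coverage
        for i in range(3):
            total[i] += w * node.rgb[i]
        node = node.next
    return tuple(total)
```

A negative depth field signals the renderer to walk the list rather than read a single color, which is exactly what lets the A-buffer keep more than one visible surface per pixel.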

Depth-Buffer Method:
Also called the z-buffer method, this is a commonly used image-space approach that compares surface depths at each pixel position on the projection plane. Two buffers are used: a depth buffer that stores the depth of the nearest surface found so far at each pixel, and a refresh buffer that stores the corresponding surface intensity.
Algorithm:
1. Initialize the depth buffer and refresh buffer so that, for all buffer positions (x, y), depth(x, y) = 0 and refresh(x, y) = Ibackgnd.
2. For each position on each polygon surface, calculate the depth z at (x, y). If z > depth(x, y), set depth(x, y) = z and refresh(x, y) = Isurf(x, y), where Ibackgnd is the background intensity and Isurf(x, y) is the projected intensity of the polygon surface at (x, y).
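The depth-buffer loop can be sketched as follows. The function name and data layout are illustrative assumptions; larger z values are taken to mean surfaces closer to the viewer, matching the normalized convention in which the buffer is initialized to 0:

```python
import numpy as np

def zbuffer_render(width, height, surfaces, background=0.0):
    """Minimal depth-buffer sketch.

    surfaces : list of (pixels, intensity) pairs, where pixels is a
               list of (x, y, z) samples of the projected polygon and
               larger z means closer to the viewer.
    """
    depth = np.zeros((height, width))             # depth(x, y) = 0
    frame = np.full((height, width), background)  # refresh(x, y) = Ibackgnd

    for pixels, intensity in surfaces:
        for x, y, z in pixels:
            if z > depth[y, x]:        # nearer surface found at this pixel
                depth[y, x] = z
                frame[y, x] = intensity
    return frame
```

Because each pixel keeps only the single nearest depth and intensity, this is exactly the limitation the A-buffer method above removes by storing a list of surfaces instead.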

2. Write a short note on: i) Design of Animation Sequence ii) Animation Language

i) Design of Animation Sequence:
In general, an animation sequence is designed with the following steps:
 Storyboard Layout
 Object Definitions
 Key-Frame Specifications
 Generation of in-between frames
Later, frames can be recorded on film or they can be consecutively displayed in "real-time playback" mode.
Storyboard: It is an outline of the action. It defines the motion sequence as a set of basic events that are to take place. Depending on the type of animation to be produced, the storyboard could consist of a set of rough sketches or it could be a list of the basic ideas for the motion.
Object Definitions: An object definition is given for each participant in the action. Objects can be defined in terms of basic shapes, such as polygons or splines. In addition, the associated movements for each object are specified along with the shape.
Key Frame: It is a detailed drawing of the scene at a certain time in the animation
sequence. Within each key frame, each object is positioned according to the time for that
frame.
In-Betweens: These are the intermediate frames between the key frames. The number of in-betweens needed is determined by the medium to be used to display the animation. Film requires 24 frames per second, so a 1-minute film sequence with no duplication requires 1440 frames. With five in-betweens for each pair of key frames, we would need 288 key frames.
ii) Computer Animation Language:
Design and control of animation sequences are handled with a set of animation routines. A general-purpose language, such as C, Lisp, Pascal, or FORTRAN, is often used to program the animation functions, but several specialized animation languages have also been developed. Animation functions include a graphics editor, a key-frame generator, an in-between generator, and standard graphics routines.
A typical task in an animation specification is scene description. This includes the positioning of objects and light sources, defining the photometric parameters, and setting the camera parameters. Another standard function is action specification. This involves the layout of motion paths for the objects and the camera.
Key-frame systems are specialized animation languages designed simply to generate the in-betweens from the user-specified key frames.
Parameterized systems allow object-motion characteristics to be specified as part of the object definitions. The adjustable parameters control such object characteristics as degrees of freedom, motion limitations, and allowable shape changes.
