Computer Graphics BCA 33
SYLLABUS
GRAPHIC DEVICES
Cathode Ray Tube, Quality of Phosphors, CRTs for Color Display, Beam
Penetration CRT, The Shadow - Mask CRT, Direct View Storage Tube, Tablets,
The light Pen, Three Dimensional Devices
C Graphics Basics
Graphics programming, initializing the graphics, C Graphical functions, simple
programs
UNIT – 1
BASICS OF COMPUTER GRAPHICS
1.1 Introduction
1.2 What is computer Graphics?
1.3 Area of Computer Graphics
1.3.1 Design and Drawing
1.3.2 Animation
1.3.3 Multimedia applications
1.3.4 Simulation
1.4 How are pictures actually stored and displayed
1.5 Difficulties for displaying pictures
1.6 Block Summary
1.7 Review Question and Answers.
UNIT 2
GRAPHIC DEVICES
2.1 Introduction
2.2 Cathode Ray Tube
2.3 Quality of Phosphors
2.4 CRTs for Color Display
2.5 Beam Penetration CRT
2.6 The Shadow - Mask CRT
2.7 Direct View Storage Tube
2.8 Tablets
2.9 The light Pen
2.10 Three Dimensional Devices
Unit 3
C Graphics Introduction
3.1 Introduction
3.2 ‘C’ GRAPHICS FUNCTIONS
3.3 C Graphics Programming Examples
UNIT 4
SIMPLE LINE DRAWING METHODS
4.1 Introduction
4.2 Point Plotting Techniques
4.3 Qualities of good line drawing algorithms
4.5 The Digital Differential Analyzer (DDA)
4.6 Bresenham’s Algorithm
4.7 Generation of Circles
UNIT 5
TWO DIMENSIONAL TRANSFORMATIONS
5.1 Introduction
5.2 What is transformation?
5.3 Matrix representation of points
5.4 Basic transformation
5.5 Translation
5.6 Rotation
5.7 Scaling
UNIT 6
CLIPPING AND WINDOWING
6.1 Introduction
6.2 Need for Clipping and Windowing
6.3 Line Clipping Algorithms
6.4 The midpoint subdivision Method
6.5 Other Clipping Methods
6.6 Sutherland - Hodgeman Algorithm
6.7 Viewing Transformations
UNIT 7
GRAPHICAL INPUT TECHNIQUES
7.1 Introduction
7.2 Graphical Input Techniques
7.3 Positioning Techniques
7.4 Positional Constraints
7.5 Rubber band Techniques
UNIT 8
THREE DIMENSIONAL GRAPHICS
8.1 INTRODUCTION
8.2 Need for 3-Dimensional Imaging
8.3 Techniques for 3-Dimensional displaying
UNIT 9
SOLID AREA SCAN CONVERSION
9.1 Introduction
9.2 Solid Area Scan Conversion
9.3 Scan Conversion of Polygons
9.4 Algorithm Singularity
UNIT 10
Three Dimensional Transformations
10.1 Introduction
10.2 Three-Dimensional transformation
10.3 Translations
10.4 Scaling
10.5 Rotation
10.6 Viewing Transformation
10.7 The Perspective
10.8 Algorithms
10.9 Three Dimensional Clipping
10.10 Perspective view of Cube
UNIT 11
HIDDEN SURFACE REMOVAL
11.1 Introduction
11.2 Need for hidden surface removal
11.3 The Depth - Buffer Algorithm
11.4 Properties that help in reducing efforts
11.5 Scan Line coherence algorithm
11.6 Span - Coherence algorithm
11.7 Area-Coherence Algorithms
11.8 Warnock’s Algorithm
11.9 Priority Algorithms
UNIT – 1
1.1 Introduction
1.2 What is computer Graphics?
1.3 Area of Computer Graphics
1.3.1 Design and Drawing
1.3.2 Animation
1.3.3 Multimedia applications
1.3.4 Simulation
1.4 How are pictures actually stored and displayed
1.5 Difficulties for displaying pictures
1.1 Introduction
In this unit, you are introduced to the basics of computer graphics.
To begin with, we should know why one should study computer graphics. Its
areas of application include design of objects, animation, simulation etc.
Though computer graphics gained importance after the introduction of
monitors, there are several other input and output devices that are important
for the concept of computer graphics. They include high-resolution color
monitors, light pens, joysticks, the mouse etc. You will be introduced to their
working principles.
and display them on demand was one of the major attractions for using
computers in graphics mode. Few samples in this area are given below.
a) A mechanical engineer can make use of computer
graphics to design nuts, bolts, gears etc.
b) A civil engineer can construct buildings, bridges,
train tracks, roads etc. on the computer and can see them from different angles
and views before actually laying the foundation for them. It helps in
finalizing the plans of these structures.
c) A textile designer designs different varieties of
patterns through computer graphics.
d) Electronics and electrical engineers design their
circuits and PCB layouts easily through computer graphics.
1.3.2 Animation
Making pictures move on the graphics screen is called
animation. Animation really makes the use of computers and computer
graphics interesting, and it brought computers pretty close to the
average individual. It is the well known principle of moving pictures that a
succession of related pictures, when flashed with sufficient speed, appears
to be a moving picture. In movies, a sequence of such
pictures is shot and is displayed with sufficient speed to make them appear
moving. Computers can do it in another way: the properties of the picture can
be modified at a fairly fast rate to make it appear moving. For example, if a
hand is to be moved, the successive positions of the hand at different
instants of time can be computed and pictures showing the hand
at these positions can be flashed on the screen. This led to the concept of
“animation” or moving pictures. In the initial stages, animation was mainly
used in computer games.
off, on and handling it during flying, contacting and getting help from the
control room etc. can be better explained using computer animation techniques.
1.3.4 Simulation
The other revolutionary change that graphics made was in the
area of simulation. Basically simulation is a mockup of an environment
elsewhere to study or experience it. The availability of easily usable interactive devices
(the mouse is one of them; we are going to see a few others later in the course) made
it possible to build simulators. One example is the flight simulator, wherein the
trainee, sitting in front of a computer, operates the interactive devices as
if he were operating the flight controls, and the changes he is expected to see
outside his window are made to appear on the screen, so that he can master
the skills of flight operations before actually trying his hand on actual
flights.
All operations on computers are in terms of 0’s and 1’s and hence
figures are also to be stored in terms of 0’s and 1’s. Thus a picture file, when
viewed inside the memory, can be no different from other files – a string of 0s
and 1s. However, their treatment when they are to be displayed makes the
difference.
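As a rough illustration of this idea (not a program from the text), the sketch below stores a tiny monochrome picture as a string of bits, one bit per pixel, with each row of the array standing for one scan line; the names WIDTH, HEIGHT, frame and setPixel are all illustrative:
#include<stdio.h>
#define WIDTH 16                    /* pixels per scan line (illustrative) */
#define HEIGHT 10                   /* number of scan lines (illustrative) */
unsigned char frame[HEIGHT][WIDTH / 8];   /* one bit per pixel */
/* turns on the pixel at column x of scan line y */
void setPixel(int x, int y)
{
frame[y][x / 8] |= (unsigned char)(0x80 >> (x % 8));
}
void main()
{
int x;
for (x = 0; x < WIDTH; x++)
setPixel(x, 5);                 /* a horizontal line on scan line 5 */
printf("first byte of scan line 5: %02X\n", frame[5][0]);
}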
[Figure: a picture stored in the frame buffer as a pattern of 0s and 1s; each row
of bits corresponds to one scan line, with 1s marking the pixels to be illuminated]
[Figure: the points to be illuminated on the pixel grid to approximate a straight
line drawn from a to b]
(Why this is called a staircase effect and how we can reduce it, we
will see in due course.)
ii) Response time: Especially when talking of animation,
the speed at which new calculations are made and the speed at which
the screen can be updated are extremely important. Imagine a running
bus, shown on the screen. Each new position of the bus (and its
surroundings, if needed) is to be calculated and sent to the screen and
the screen should delete the earlier position of the bus and display its
new position. All this should happen at a speed that convinces the
viewer that the vehicle is actually moving at the prescribed speed,
otherwise a running vehicle would appear like a "walking" bus or,
worse, a "piecewise movement" bus. For this, both the speed of the
algorithm and the speed of the display device are extremely
important. Further, the entire operation should appear smooth and
not jerky; otherwise, especially in simulation applications, the effects
can be dangerous.
iii) What happens when the size of the picture exceeds
the size of the screen?: Obviously, some areas of the picture are to
be cut off. But this involves certain considerations and needs to be
addressed by software. [We will discuss this under
clipping and windowing.]
Review Questions
Answers
1. Animation
2. Morphing
3. Multimedia
4. Simulation
5. y= mx+c
6. Frame buffer
7. Resolution
8. Interactive graphics.
Unit 2
GRAPHIC DEVICES
2.1 Introduction
2.2 Cathode Ray Tube
2.3 Quality of Phosphors
2.4 CRTs for Color Display
2.5 Beam Penetration CRT
2.6 The Shadow - Mask CRT
2.7 Direct View Storage Tube
2.8 Tablets
2.9 The light Pen
2.10 Three Dimensional Devices
2.1 Introduction
Due to the widespread recognition of the power and utility of
computer graphics in almost all fields, a broad range of graphics hardware and
software systems are available now. Graphics capabilities for both two-
dimensional and three-dimensional applications are now common on general-
purpose computers, including many hand-held calculators. These need a wide
variety of interactive devices.
In this unit, we will look into some of the commonly used hardware
devices in conjunction with graphics. While the normal concept of a CPU,
Memory and I/O devices of a computer still holds good, we will be
concentrating more on the I/O devices. Special purpose output devices
allow us to see pictures in color, with different sizes, features etc.
Also, once the picture is presented, the user may like to modify it interactively.
So one should be able to point to specific portions of the display and change
them. Special input devices that allow such operations are also introduced.
While ever changing technologies keep producing newer and newer products,
what you are being introduced to here are trends of technology.
deflection of the beam. Further, these electric / magnetic fields can be easily
manipulated by applying suitable voltages. With this background, in the following
section we describe the structure and working of the simple CRT.
A simple CRT makes use of a conical glass tube. At the narrow end
of the glass tube an electron gun is kept. This gun generates electrons that
are made to pass through the magnetic system called the yoke. This magnetic
system is used for making the electron beam fall anywhere on the broad
surface of the glass tube. The broad surface of the glass tube carries a
coat of high quality phosphor, which glows when the electron beam strikes it,
thereby producing the picture on the computer screen.
Quality of Phosphors
The quality of graphic display depends on the quality of phosphors
used. The phosphors are usually chosen for their color characteristics and
persistence. Persistence is how long the picture will be visible on the screen
after it is first displayed. Most of the standards prescribe that the intensity of
the picture should fall to 1/10 of its original intensity in less than 100
milliseconds.
The shadow mask CRT, instead of using one electron gun, uses 3
different guns placed one by the side of the other to form a triangle or a "Delta"
The shadow mask CRT, though better than the beam penetration
CRT in performance, is not without its disadvantages. Since three beams are to
be focused, the role of the "Shadow mask" becomes critical. If the focusing is
not achieved properly, the results tend to be poor. Also, since instead of one
pixel point in a monochrome CRT now each pixel is made up of 3 points (for 3
colors), the resolution of the CRT (no. of pixels) for a given screen size reduces.
Another problem is that since the shadow mask blocks a portion of the beams
(while focusing them through the holes) their intensities get reduced, thus
reducing the overall brightness of the picture. To overcome this effect, the
beams will have to be produced at very high intensities to begin with. Also,
since the 3 color points, though close to each other, are still not at the same
point, the pictures tend to look like 3 colored pictures placed close by, rather
than a single picture. Of course, this effect can be reduced by placing the dots
as close to one another as possible.
changed or at least for several minutes without the need of being refreshed. We
see one such device called the Direct View Storage Tube (DVST) below.
The grid, made of very thin, high quality wire, is coated with a
dielectric and is mounted just before the screen on the path of the electron
beam from the gun. A pattern of positive charges is deposited on the grid and
this pattern is transferred to the phosphor coated CRT screen by a continuous flood of
electrons. This flood of electrons is produced by a "flood gun" (this is separate
from the electron gun that produces the main electron beam).
Just behind the storage mesh is a second grid called the collector.
The function of the collector is to smooth out the flow of flood electrons. Since a
large number of electrons are produced at high velocity by the flood gun, the
collector grid, which is also negatively charged, reduces the acceleration of
these electrons, and the resulting low velocity flood passes through the collector
and gets attracted by the positively charged portions of the storage mesh (since
the electrons are negatively charged), but is repelled by the other portions of
the mesh which are negatively charged (Note that the pattern of positive
charges residing on the storage mesh actually defines the picture to be
displayed). Thus, the electrons attracted by the positive charges pass through
the mesh, travel on to the phosphor coated screen and display the picture.
Since the collector has slowed the electrons down, they may not be able to
produce sharp and bright images. To overcome this problem, the screen itself
is maintained at a high positive potential by means of a voltage applied to a
thin aluminum coating between the tube face and the phosphor.
[Figure: the DVST, showing the flood of electrons from the flood gun passing
through the collector and storage mesh onto the phosphor coated screen]
other popular display device is the plasma panel device, which is partly similar
to the DVST in principle, but overcomes some of the undesirable features of
the DVST.
We shall now see some of the popularly used input devices for
interactive graphics.
i) Mouse:
In addition to its simplicity and low cost, the mouse has the
advantage that the user need not pick it up in order to use it; the mouse simply
sits on the table surface until he needs it. This makes the mouse an efficient
device for pointing, as experiments have shown. The mouse has some unique
properties that are liked by some and disliked by others. For example, if the
mouse is picked up and put down somewhere else, the cursor will not move.
Also, the coordinates delivered by the mouse wrap around when overflow
occurs; this effect can be filtered out by software, or can be retained as a
means of moving the cursor rapidly from one side of the screen to the other.
The mouse has two real disadvantages. It cannot be used for tracing data from
paper, since a small rotation of the mouse or a slight loss of contact will cause
cumulative errors in all the readings, and it is very difficult to handprint
ii) Joystick
[Figure: a joystick]
In fact, joysticks were originally used for video games (hence
the name "joy" stick), but were later modified for the more accurate
requirements of graphics.
2.9 TABLETS
Tablets work on the principle of sound and its speed, through
which the position of the stylus on the writing surface is decided. A tablet
makes use of a flat surface on which we write with a stylus. The stylus tip is
covered with a ceramic material that makes a sound when writing on the flat surface.
screen, but some changes are to be made. So, instead of trying to know its
coordinates, it is advisable to simply "point" to that portion of the picture and
ask for changes. The simplest of such devices is the "light pen". Its principle is
extremely simple.
We know that every pixel on the screen that is a part of the picture
emits light. In fact they are much brighter than their surrounding pixels. All
that the light pen does is to make use of this light signal to indicate the
position. A small aperture is held against the portion of the picture to be
modified and the light from the pixels, after passing through the aperture, falls
on a photocell. This photocell converts the light signal received from the screen
to an electrical pulse - a signal sent to the computer. Since the electrical signal
is rather weak, an amplifier amplifies it before being sent to the computer.
Since a "tracking software" keeps track of the position of the light pen always
(in a manner much similar to the position of the mouse being kept track of by
the software), a signal received by the light pen at any point indicates that
portion of the picture that needs to be modified (most often that portion gets
erased, paving way for any other modifications to be made).
However, when the pen is being moved to its position - where the
modification is required - it will encounter many other light sources on the
way and these should not trigger the computer. So the aperture of the light
pen is normally kept closed and when the final position is reached, it can
be opened by a switch - in a manner similar to the one used in a photographic
camera, though, of course, the aperture stays open for much
longer periods than in a camera.
[Figure: a stylus]
Review Questions
3. The term ____________________ indicates how long the picture created on the
phosphorescent screen remains on it.
6. When the picture has to remain on the screen for a long time _______________
type of CRT is used.
7. The first device to allow the user to move the cursor to any point, without
actually knowing the coordinates was ____________________
8. The input device that allows the user to write pictures on it and input them
directly to the computer is called _______________________
Answers
1. Phosphorescent
2. Magnetic, electrical
3. Persistence
7. Joy stick
8. Tablet
9. Pointing
UNIT 3
INTRODUCTION TO THE
‘GRAPHICS’ AND ‘C’
3.1 Introduction
3.2 ‘C’ GRAPHICS FUNCTIONS
3.3 C Graphics Programming Examples
3.1 Introduction
‘C’ is a popular language and the language of choice for system
programming. It also provides the facility to draw graphics on the screen,
offering a number of standard library functions for drawing regular diagrams
and figures on the computer screen. One can use these graphics functions to
draw images easily through a computer program. For this we need to initialize
the graphics mode and detect the related graphics driver. The graphics library
functions are declared in the header file called “graphics.h”; for using any of the
graphics built-in functions, the “graphics.h” file must be included.
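A minimal sketch of this initialization and shutdown sequence is given below; it assumes the Turbo C style BGI environment, where the empty path string means that the graphics driver files are in the current directory:
#include<graphics.h>
#include<conio.h>
void main()
{
int gd=DETECT, gm;                /* DETECT asks the library to pick a suitable driver */
initgraph(&gd, &gm, "");          /* loads the driver and switches to graphics mode */
line(0, 0, getmaxx(), getmaxy()); /* a sample drawing call */
getch();                          /* wait for a key press */
closegraph();                     /* returns to text mode and frees the graphics memory */
}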
Declaration: void far putimage (int left, int top, void far *bitmap, int
op);
Remarks: putimage puts the bit image previously saved with
getimage back onto the screen, with the upper left corner of the image placed at
(left, top)
7. getimage ( ); getimage saves a bit image of the specified region
into memory.
Declaration: void far getimage (int left, int top, int right, int bottom,
void far * bitmap);
Remarks: getimage copies an image from the screen to memory.
8. malloc( ); It allocates the memory.
Declaration: void *malloc(size_t size) ;
Remarks: It allocates a block of size bytes from the memory heap. It
allows a program to allocate memory explicitly as it is needed and in the exact
amount needed.
9. floodfill( ); Flood-fills a bounded region.
Declaration: void far floodfill (int x, int y, int border);
Remarks: floodfill fills an enclosed area on bitmap devices. The
areas bounded by the color border are flooded with the current fill pattern and
fill color.
10. closegraph( ); Shuts down the graphics system.
Declaration: void far closegraph(void);
Remarks: It deallocates all memory allocated by the graphics
system.
11. cleardevice( ); It clears the graphics screen.
Declaration: void far cleardevice(void);
Remarks: It erases the entire graphics screen and moves the
current position (CP) to home(0, 0).
12. sleep( ); Suspends execution for interval.
Declaration: void sleep(unsigned seconds);
Remarks: With a call to sleep, the current program is suspended
from execution for the number of seconds specified by the argument seconds.
13. exit( ); exit terminates the program.
Declaration: void exit(int status);
Remarks: Exit terminates the calling process.
14. sound( ); sound turns the PC speaker on at the specified
frequency.
Declaration: void sound(unsigned frequency);
Remarks: sound turns on the PC’s speaker at a given frequency.
15. nosound( ); nosound turns the PC speaker off.
Declaration: void nosound(void);
Remarks: nosound turns the PC’s speaker off after it has been
turned on by a call to sound.
16. textcolor( ); It selects a new character color in text mode.
Declaration: void textcolor(int newcolor);
Remarks: This function works with the functions that produce text-mode output
directly to the screen (console output functions); textcolor selects the
foreground character color.
17. delay( ); It suspends execution for interval (milliseconds).
Declaration: void delay(unsigned milliseconds);
Remarks: With a call to delay, the current program is suspended
from execution for the time specified by the argument milliseconds. It is not
necessary to make a calibration call to delay before using it. It is accurate to
one millisecond.
18. imagesize( ); Returns the number of bytes required to store a
bit image.
Declaration: unsigned far imagesize(int left, int top, int right, int
bottom);
Remarks: It determines the size of the memory area required to store a bit
image.
19. gotoxy( ); Positions cursor in text window.
Declaration: void gotoxy(int x, int y);
Remarks: gotoxy moves the cursor to the given position in the
current text window. If the coordinates are invalid, the call to gotoxy is ignored.
20. line( ); line draws a line between two specified points.
Declaration: void far line(int x1, int y1, int x2, int y2);
Remarks: line draws a line from (x1, y1) to (x2,y2) using the
current color, line style and thickness. It does not update the current position
(CP)
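As a rough illustration of how imagesize, malloc, getimage and putimage described above work together, the following sketch saves a small region of the screen and pastes a copy of it at another position; the coordinates chosen here are arbitrary:
#include<graphics.h>
#include<conio.h>
#include<stdlib.h>
void main()
{
int gd=DETECT, gm;
void *buf;
unsigned size;
initgraph(&gd, &gm, "");
circle(60, 60, 30);                 /* something to copy */
size = imagesize(20, 20, 100, 100); /* bytes needed to store the region */
buf = malloc(size);                 /* allocate that much memory */
getimage(20, 20, 100, 100, buf);    /* save the region into buf */
putimage(200, 20, buf, COPY_PUT);   /* paste it with its top left at (200,20) */
getch();
free(buf);
closegraph();
}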
getch();
closegraph(); /*closes the graph mode */
}
ellipse(getmaxx()/2, getmaxy()/2, 0, 360, 80, 50);
/* draws an ellipse taking the center of the screen as its center, 0 as starting angle,
360 as ending angle, 80 pixels as X radius and 50 pixels as Y radius */
setcolor(4); /* sets the drawing color as red */
ellipse(getmaxx()/2, getmaxy()/2, 90, 270, 50, 80);
/* draws half the ellipse starting from 90 degrees and ending at 270 degrees
with 50 pixels as X radius and 80 pixels as Y radius in red color */
getch();
closegraph(); /* closes the graph mode */
}
/*arc with (300,200) as its center and 70 pixels as radius. It starts at an angle
20 and ends at an angle 100 degrees*/
getch();
closegraph(); /* closes the graph mode */
}
void main()
{
int gm, gd=DETECT;
initgraph(&gd, &gm, "");
while(!kbhit())/* until pressing any key this loop continues */
{
putpixel(rand()%getmaxx(), rand() % getmaxy(), rand()%16);
/*x and y co-ordinates and the color are taken randomly*/
delay(2); /* just to draw the pixels slowly*/
}
getch ();
closegraph(); /* closes the graph mode */
}
void main()
{
int gm,gd=DETECT;
int x1,x2,y1,y2,c,i;
initgraph(&gd,&gm,"");
while(!kbhit())/*until pressing any key this loop continues*/
{
/*for rectangle co-ordinates are taken randomly*/
x1=rand()%getmaxx();
x2=rand()%getmaxx();
y1=rand()%getmaxy();
y2=rand()%getmaxy();
if(x1>x2)
{
c=x1; /* exchange of x1 and x2 when x1 >x2 */
x1=x2;
x2=c;
}
if(y1>y2)
{
c=y1; /* exchange of y1 and y2 when y1>y2 */
y1=y2;
y2=c;
}
c=rand()%16;
/* the rectangle is drawn edge by edge using putpixel */
for(i=x1;i<=x2;++i)
{
putpixel(i,y1,c);
delay(1);
}
for(i=y1;i<=y2;++i)
{
putpixel(x2,i,c);
delay(1);
}
for(i=x2;i>=x1;--i)
{
putpixel(i,y2,c);
delay(1);
}
for(i=y2;i>=y1;--i)
{
putpixel(x1,i,c);
delay(1);
}
}
getch();
closegraph(); /* closes the graph mode */
}
The closed graphical areas can be filled with different fill effects
that can be set using setfillstyle () function. The following program
illustrates fill effects for the rectangles, which are drawn randomly using
putpixel.
void main()
{
int gm,gd= DETECT;
int x1,x2,y1,y2,c,i;
initgraph(&gd,&gm,"");
while(!kbhit()) /* until pressing any key this loop continues */
{
/* To draw the rectangle, co-ordinates are taken randomly */
x1=rand()%getmaxx();
x2=rand()%getmaxx();
y1=rand()%getmaxy();
y2=rand()%getmaxy();
if (x1>x2)
{
c=x1; /* exchange of x1 and x2 when x1 is >x2 */
x1=x2;
x2=c;
}
if(y1>y2)
{
c=y1; /* exchange of y1 and y2 when y1 is > y2 */
y1=y2;
y2=c;
}
c=rand()%16;
/* for rectangle using putpixel */
for(i=x1;i<=x2;++i)
{
putpixel(I,y1,c);
delay (1);
}
for(i=y1;i<=y2;++i)
{
putpixel(x2,I,c);
delay(1);
}
for(i=x2;i>=x1;--i)
{
putpixel(i,y2,c);
delay(1);
}
for(i=y2;i>=y1;--i)
{
putpixel(x1,i,c);
delay(1);
}
setfillstyle(rand()%12, rand()%8); /* setting random fill styles and colors
*/
floodfill(x1+1,y1+1,c);
delay(200); /* to draw the pixels slowly */
}
getch();
closegraph(); /* closes the graph mode */
}
The lines with different lengths and colors are illustrated in the
following program.
initgraph(&gd,&gm,"");
while(!kbhit()) /* until pressing any key this loop continues */
{
/* the line co-ordinates are taken randomly */
x1=rand()%getmaxx();
x2=rand()%getmaxx();
y1=rand()%getmaxy();
y2=rand()%getmaxy();
A viewport is a rectangular portion of the screen within the screen. The
entire screen is the default viewport. We can make and choose our own
viewports according to our requirements. Once the viewport is set, the top left
corner of the viewport becomes the (0,0) origin and the maximum number of
pixels along the x-axis and y-axis changes according to the size of the viewport. Any
graphical setting can be reset to its default using the graphdefaults() function.
#include<graphics.h>
#include<conio.h>
#include<stdlib.h>
#include<dos.h>
#include<stdio.h>
void main()
{
int gm, gd=DETECT;
int x1,x2,y1,y2,c,i;
clrscr();
printf("enter starting co-ordinates of viewport (x1,y1)\n");
scanf("%d%d",&x1,&y1);
printf("enter ending co-ordinates of viewport (x2,y2)\n");
scanf("%d%d",&x2,&y2);
initgraph(&gd,&gm,"");
setviewport(x1,y1,x2,y2,1); /* sets the viewport; lines outside it are clipped */
while(!kbhit()) /* until pressing any key this loop continues */
{
x1=rand()%getmaxx();
x2=rand()%getmaxx();
y1=rand()%getmaxy();
y2=rand()%getmaxy();
setlinestyle(rand()%4, 0, (rand()%2)*2+1); /* sets a random line style and thickness */
setcolor(rand()%16); /*to set the line color */
line(x1,y1,x2,y2); /*to draw the line */
delay(200);
}
getch();
closegraph(); /*closes the graph mode */
}
#include<graphics.h>
#include<conio.h>
#include<stdlib.h>
#include<dos.h>
#include<stdio.h>
void main()
{
int gm, gd=DETECT;
initgraph(&gd,&gm,"");
setcolor(5);
settextstyle(4,0,5); /*sets the text style with font, direction and char size
*/
moveto(100,100); /* takes the CP to 100,100 */
outtext("Bangalore is");
setcolor(4);
settextstyle(3,0,6);
moveto(200,200);
outtext("silicon");
setcolor(1);
settextstyle(5,0,6);
moveto(300,300);
outtext("Valley");
setcolor(2);
settextstyle(1,1,5);
outtextxy(150,50,"Bangalore is");
getch();
}
A set of pixels makes a line and a set of continuous lines makes a
surface. The following program demonstrates the creation of surfaces using
lines and different colors.
#include<graphics.h>
#include<conio.h>
#include<dos.h>
#include<alloc.h>
#include<math.h>
#include<stdlib.h>
void main()
{
int gm, gd=DETECT;
int i, j;
initgraph(&gd,&gm,"");
setviewport(100,100,300,300,0);
for(j=0;j<200;j=j+20)
{
for(i=0;i<=200;++i)
{
if (i%20==0)
setcolor(rand()%16+1);
line(i,j,i,j+20);
}
delay(100);
}
getch();
}
/* Program to draw a car. The different graphical functions are used to draw
different parts of the car */
#include<stdio.h>
#include<graphics.h>
main()
{
int x,y,i,choice;
unsigned int size;
void*car;
int gd=DETECT,gm;
initgraph(&gd, &gm,” “);
do
{
cleardevice();
printf("1:BODY OF THE CAR\n");
printf("2:WHEELS OF THE CAR\n");
printf("3:CAR\n");
printf("4:QUIT");
printf("\nEnter your choice\n");
scanf("%d",&choice);
switch(choice)
{
case 1 : initgraph (&gd,&gm,” “);
line(135,300,265,300);
arc(100,300,0,180,35);
line(65,300,65,270);
line(65,270,110,220);
line(110,220,220,220);
line(140,220,140,215);
line(180,220,180,215);
line(175,300,175,220);
line(120,215,200,250);
line(220,220,260,250);
line(260,250,85,250);
line(260,250,345,275);
arc(300,300,0,180,35);
line(345,300,345,275);
line(335,300,345,300);
getch();
cleardevice();
break;
case 2: initgraph(&gd,&gm," ");
circle(100,300,25);
circle(100,300,13);
circle(300,300,25);
circle(300,300,13);
getch();
cleardevice();
break;
case 3 : initgraph(&gd,&gm," ");
/* the full car: body (as in case 1) together with the wheels */
line(140,220,140,215);
line(180,220,180,215);
line(175,300,175,220);
line(120,215,200,215);
line(220,220,260,250);
line(260,250,85,250);
line(260,250,345,275);
arc(300,300,0,180,35);
circle(300,300,25);
circle(300,300,13);
line(345,300,345,275);
line(335,300,345,300);
getch();
cleardevice();
break;
case 4 : exit(0);
}
}
while(choice!=4);
getch();
}
Unit 4
SIMPLE LINE DRAWING METHODS
Contents:
[Figure: a point P(x, y) plotted with reference to the x and y axes]
Incremental methods
So, the point becomes (7, 15). Similarly the second point will
become (7,16). Note that while the difference between 6.6 and 7.4 was 0.8
(almost 1 pixel value) the display shows them as the same point, whereas the
Note that the slope of the line has changed. A series of such
changes between successive points makes the lines look as shown in the figure
above (jagged lines).
ii. Lines should terminate accurately: The cause is still the same
as in (1). Because of inaccuracies and approximations, the lines do not
terminate accurately. Either they stop short of the point at which they should
end or extend beyond it. The result? Intersections and joints do not form
correctly. Look at the examples below.
Looked at another way, given a point on the straight line, we get the
next point by adding ∆x to the x coordinate and ∆y to the y coordinate, i.e.
given a point P(x,y), we can get the next point as Q(x + ∆x, y + ∆y), the next
point R as R(x + 2*∆x, y + 2*∆y) etc. So this is a truly incremental method, where
given a starting point we can go on generating points, one after the other, each
spaced from its previous point by an additional ∆x and ∆y, until we reach the
final point.
Different values of ∆x and ∆y give us different straight lines.
But because of inaccuracies due to rounding off, we seldom get a
smooth line, but end up getting lines that are not really perfect.
The difference (x2-x1) gives the x spread of the line (along the x-
axis) and (y2-y1) gives the y spread (along y axis)
[Figure: a line from (x1, y1) to (x2, y2) with its X spread along the x-axis and
its Y spread along the y-axis]
The larger of these is taken as the variable length (not exactly the
length of the line)
The main drawback of the DDA method is that it generates the line
with a "staircase" effect. It also needs floating-point arithmetic for its parameters,
whereas the C graphics functions accept only integer values as screen
co-ordinates, so the computed values have to be rounded off before plotting.
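A minimal sketch of the DDA idea in C is given below; the function name drawDDA is only illustrative and the two end points are assumed to be distinct. The floating-point values are rounded off before plotting, which is exactly where the staircase effect creeps in:
#include<graphics.h>
#include<conio.h>
#include<stdlib.h>
/* illustrative helper: draws a line from (x1,y1) to (x2,y2) by the DDA method */
void drawDDA(int x1, int y1, int x2, int y2)
{
int i, length;
float x = x1, y = y1, xinc, yinc;
/* the larger of the two spreads decides the number of steps taken */
length = abs(x2 - x1) > abs(y2 - y1) ? abs(x2 - x1) : abs(y2 - y1);
xinc = (x2 - x1) / (float)length;   /* increment added to x at every step */
yinc = (y2 - y1) / (float)length;   /* increment added to y at every step */
for (i = 0; i <= length; i++)
{
putpixel((int)(x + 0.5), (int)(y + 0.5), WHITE); /* round off to a pixel */
x = x + xinc;
y = y + yinc;
}
}
void main()
{
int gd = DETECT, gm;
initgraph(&gd, &gm, "");
drawDDA(50, 50, 300, 180);
getch();
closegraph();
}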
Look at the above figure. The left most point should have been at
the point indicated by x, but because of rounding off, falls back to the previous
pixel. Whereas in the second case, the point still falls below the next level, but
because of the rounding off, goes to the next level. In each case, e is the error
involved in the process.
e=(deltay/deltax)-0.5;
for(i=1;i<=deltax;i++)
{
plot(x,y);
if(e>0)
{
y=y+1;
e=e-1;
}
x=x+1;
e=e+(deltay/deltax);
}
Now we look at the program below that draws the line using
Bresenham's line drawing algorithm.
# include<stdio.h>
# include<conio.h>
#include<stdlib.h>
# include <graphics.h>
void brline(int,int,int,int);
void main()
{
int gd=DETECT,gm,color;
int x1,y1,x2,y2;
printf("\n enter the starting point x1,y1 :");
scanf("%d%d",&x1,&y1);
while(x<xend)
{
color=random(getmaxcolor());
putpixel((int)x,(int)y,color);
if(e>0)
{
y++;
e=e+2*(dy-dx);
}
else
e=e+2*dy;
x++;
}
}
Generation of Circles
void main()
{
int gm,gd=DETECT,i;
int x,y,x1,y1,j;
initgraph(&gd,&gm,"");
x=40; /*The c0-ordinate values for calculating radius */
y=40;
for(i=0;i<=360;i+=10)
{
setcolor(i+1);
x1=x*cos(i*3.142/180)+y*sin(i*3.142/180);
y1=x*sin(i*3.142/180)-y*cos(i*3.142/180);
circle(x1+getmaxx()/2,y1+getmaxy()/2,5); /* center of the circle is center
of the screen*/
delay(10);
}
getch();
}
# include<stdio.h>
# include<conio.h>
# include <graphics.h>
# include<math.h>
#include<dos.h>
/* Function for plotting the co-ordinates at four symmetric positions placed at
equal distances (the function header and the color value below are assumed) */
void plotpoints(int xcentre, int ycentre, int x, int y)
{
int color=5;
putpixel(xcentre+y,ycentre+x,color);
putpixel(xcentre+y,ycentre-x,color);
putpixel(xcentre-x,ycentre+x,color);
putpixel(xcentre-y,ycentre-x,color);
}
{
y--;
p=p+2*(x-y)+1;
}
x++;
plotpoints(xcentre,ycentre,x,y);
delay(100);
}
}
/* The main function that takes (x,y) and ‘r’ the radius from keyboard and
activates other functions for drawing the circle */
void main()
{
int gd=DETECT,gm,xcentre=200,ycentre=150,radius=5;
printf("\n enter the center points and radius :\n");
scanf("%d%d%d", &xcentre, &ycentre, &radius);
clrscr();
initgraph(&gd,&gm,"");
putpixel(xcentre,ycentre,5);
cir(xcentre,ycentre,radius);
getch();
closegraph();
}
# include<stdio.h>
# include<conio.h>
# include<math.h>
# include <graphics.h>
int xcentre, ycentre, rx, ry;
int p,px,py,x,y,rx2,ry2,tworx2,twory2;
void drawelipse();
void main()
{
int gd=3,gm=1;
clrscr();
initgraph(&gd,&gm,"");
printf("\n Enter X center value: ");
scanf("%d",&xcentre);
printf("\n Enter Y center value: ");
scanf("%d",&ycentre);
printf("\n Enter X radius value: ");
scanf("%d",&rx);
printf("\n Enter Y radius value: ");
scanf("%d",&ry);
cleardevice();
ry2=ry*ry;
rx2=rx*rx;
twory2=2*ry2;
tworx2=2*rx2;
/* REGION first */
x=0;
y=ry;
drawelipse();
p=(ry2-rx2*ry+(0.25*rx2));
px=0;
py=tworx2*y;
while(px<py)
{
x++;
px=px+twory2;
if(p>=0)
{
y=y-1;
py=py-tworx2;
}
if(p<0)
p=p+ry2+px;
else
{
p=p+ry2+px-py;
drawelipse();
}
}
/*REGION second*/
p=(ry2*(x+0.5)*(x+0.5)+rx2*(y-1)*(y-1)-rx2*ry2);
while(y>0)
{
y=y-1;
py=py-tworx2;
if(p<=0)
{
x++;
px =px + twory2;
}
if(p >0)
p=p+rx2-py;
else
{
p=p+rx2-py+px;
drawelipse();
}
}
getch();
closegraph();
}
void drawelipse()
{
putpixel(xcentre+x, ycentre+y, BROWN);
putpixel(xcentre-x, ycentre+y, BROWN);
putpixel(xcentre+x, ycentre-y, BROWN);
putpixel(xcentre-x, ycentre-y, BROWN);
}
}
void main()
{
int gm,gd=DETECT;
float x,y,x1,y1,i;
initgraph(&gd,&gm,"");
x=100;
y=100;
for(i=0;i<=360;i+=0.005)
{
x1=x*cos(i*3.142/180)+y*sin(i*3.142/180);
y1=x*sin(i*3.142/180)+y*cos(i*3.142/180);
putpixel((int)x1+200,(int)y1+200,15);
}
getch();
}
Review Questions:
1. Higher the resolution, better will be the quality of pictures because the
__________ will be closer.
2. An algorithm that draws the next point based on the previous point's location
is called ____________________.
6. The common difficulty in drawing circles using the DDA method with its
differential equation is that ______________________.
Answers
1. Pixels
2. Incremental method
3. Approximation
5. Bresenham's
7. Parametric equations.
UNIT 5
TWO DIMENSIONAL TRANSFORMATIONS
5.1 Introduction
5.2 What is transformation?
5.3 Matrix representation of points
5.4 Basic transformation
5.5 Translation
5.6 Rotation
5.7 Scaling
5.8 Concatenation of the operations
5.9 Rotation about an arbitrary point
5.1 Introduction
5.5 Translation
[Figure: translation of a point (x, y) to Q(x2, y2) by Tx along the x-axis and Ty
along the y-axis]
5.6 Rotation
Suppose we want to rotate a point (x1, y1) clockwise through an
angle θ about the origin of the coordinate system. Then mathematically we can
show that
x2 = x1 cosθ + y1 sinθ and
y2 = y1 cosθ - x1 sinθ
5.7 Scaling
[Figure: a triangle ABC and its scaled-up copy, both plotted with respect to the
same origin]
This is also supposed to provide you an insight into the ease with
which the matrix representation of operations allows us to perform a sequence of
operations.
[Figure: a point P(x, y) to be rotated about an arbitrary point R(Rx, Ry)]
ii) Rotate the point P(x1, y1) w.r.t. the (new) origin by
cosθ -sinθ 0
sinθ cosθ 0
0 0 1
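The complete sequence (translate R to the origin, rotate, and translate back) amounts to multiplying the three matrices one after the other. A small sketch of the combined effect is given below; the helper name rotateAboutPoint is only illustrative, and the angle convention follows the matrix shown above:
#include<stdio.h>
#include<math.h>
/* illustrative helper: rotates (x, y) about the point (rx, ry) by the given angle in degrees */
void rotateAboutPoint(float x, float y, float rx, float ry, float angle,
float *xnew, float *ynew)
{
float t = angle * 3.1416 / 180.0;       /* degrees to radians */
float tx = x - rx, ty = y - ry;         /* step (i): translate R to the origin */
*xnew = tx * cos(t) + ty * sin(t) + rx; /* step (ii): rotate, then translate back */
*ynew = ty * cos(t) - tx * sin(t) + ry;
}
void main()
{
float xn, yn;
rotateAboutPoint(20, 10, 10, 10, 90, &xn, &yn);
printf("rotated point: (%f, %f)\n", xn, yn);
}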
Review Questions:
1. If a point (x,y) is moved to a point which is at a distance of Tx along the x axis,
what is its new position?
5. How many values does the matrix representation of a point (x,y) has ? What
are they?
6. Give the matrix formulations for transforming a point (x,y) to (x1, y1) by
translation
Answers
1. (x+Tx, y)
2. (x, y+Ty)
4. x1=xsx, y1=ysy
5. 3 values (x y 1)
6. [ x1 y1 1] = [x y 1] 1 0 0
0 1 0
Tx Ty 1
7. Translate (px, py) to the origin, effect the rotation, then translate the point back
to its original position.
UNIT 6
CLIPPING AND WINDOWING
6.1 Introduction
6.2 Need for Clipping and Windowing
6.3 Line Clipping Algorithms
6.4 The midpoint subdivision Method
6.5 Other Clipping Methods
6.6 Sutherland - Hodgeman Algorithm
6.7 Viewing Transformations
6.1 Introduction
In this unit, you are introduced to the concepts of handling
pictures that are larger than the available display screen size, since any part
of the picture that lies beyond the confines of the screen cannot be
displayed. We compare the screen to a window, which allows us to view only
that portion of the scene outside, as the limits of the window would permit.
Any portion beyond that gets simply blocked out. But in graphics, this
“blocking out” is to be done by algorithms that decide the points beyond which
the picture should not be shown. This concept is called clipping. Thus, we
are “clipping” a picture so that it becomes viewable on a “window”.
the screen and the resolution (no. of pixels/inch) limits the amount of
distinct detail that can be shown.
Suppose the size of the picture to be shown is bigger than the
size of the screen, then obviously only a portion of the picture can be
displayed. The context is similar to that of viewing a scene outside the
window. While the scene outside is quite large, a window will allow you to
see only that portion of the scene as can be visible from the window – the
latter is limited by the size of the window.
Similarly, if we treat the screen as a window through which we see
the pictures, then any picture whose parts lie outside the limits
of the window cannot be shown fully and, for algorithmic purposes, those parts have to
be “clipped”. Note that clipping does not become necessary only when we
have a picture larger than the window size. Even if we have a smaller
picture, because it is lying in one corner of the window, parts of it may tend
to lie outside or a picture within the limits of the screen may go (partly or
fully) outside the window limits, because of transformation done on them.
And what is normally not appreciated is that as a result of transformation,
parts, which were previously outside the window limits, may come within the
limits as well. Hence, in most cases, after each operation on the pictures, it
becomes necessary to check whether the picture lies within the limits of the
screen and if not, to decide as to where exactly it reaches the limits of
the window and clip it at that point. Further, since it is a regular operation
in interactive graphics, the algorithms to do this will have to be pretty fast
and efficient.
The other related concept is windowing. It is not always that we
cut down the invisible parts of the picture to fit it into the window. The
alternate option is to scale down the entire picture to fit it into the window
size i.e. instead of showing only a part of the picture, its dimensions can be
zoomed down. In fact, the window can be conceptually divided into more
than one window and a different picture can be displayed in each window,
each of them “prepared” to fit into the window.
In a most general case, one may partly clip a picture and partly
transform it by windowing operation. Also, since the clipped out parts
cannot be discarded by the algorithm, the system should be able to keep
track of every window and the status of every picture in each of them and
keep making changes as required all in real time.
Having seen what clipping and windowing is all about; we
straightaway introduce you to a few clipping and windowing algorithms.
[Figure: the screen and the eight surrounding regions with their 4-bit codes;
the screen region itself carries the code 0000]
First bit: will be 1 if the point is to the left of the left edge of the
screen. (LSB)
Second bit: 1 if the point is to the right of the right edge.
Third bit: is 1 if the point is below the bottom edge and
Fourth bit: is 1 if the point is to the top of the top edge.
(MSB)
The conditions can be checked by simply comparing the screen
coordinate values with the coordinates of the endpoints of the line.
If for a line, both end points have the bit pattern of 0000, the line
can be displayed as it is (trivially).
For example, if one of the points of a straight line shows 1000, then
its intersection w.r.t. the top edge needs to be computed (since the
point is above the top edge). If for the same line the other point returns 0010,
then since a segment of the line lies beyond the right edge, the intersection with
the right edge is to be computed.
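A small sketch of how such a 4-bit code can be computed in C is given below; the function name and the boundary variables are illustrative, and the y axis is assumed to increase upwards as in the description above:
/* bit 1 (LSB): left of the left edge, bit 2: right of the right edge,
bit 3: below the bottom edge, bit 4 (MSB): above the top edge */
int regionCode(int x, int y, int xleft, int xright, int ybottom, int ytop)
{
int code = 0;
if (x < xleft)   code = code | 1;   /* first bit  */
if (x > xright)  code = code | 2;   /* second bit */
if (y < ybottom) code = code | 4;   /* third bit  */
if (y > ytop)    code = code | 8;   /* fourth bit */
return code;
}
A line whose two end points both give the code 0000 can be displayed trivially; if the bitwise AND of the two codes is non-zero, both end points lie on the invisible side of the same edge and the line can be rejected trivially.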
[Figure: a line P1-P2 crossing the screen boundary; the visible portion is found by
repeatedly taking mid points of the segment]
into two and so on, until you end up at a point that cannot be further divided.
The segment from P1 to this point is the portion visible on the screen.
Now we formally suggest the mid point subdivision algorithm.
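A rough sketch of this idea in C is given below. It is only an illustrative version, assuming that P1 is visible and P2 invisible, that code() is the 4-bit region code function of the previous section, and that abs() comes from stdlib.h:
/* illustrative sketch: returns in (*vx, *vy) the farthest visible point of the
segment from the visible point (x1,y1) towards the invisible point (x2,y2) */
void midpointClip(int x1, int y1, int x2, int y2, int *vx, int *vy)
{
int xm, ym;
while (abs(x2 - x1) > 1 || abs(y2 - y1) > 1) /* until the segment cannot be divided further */
{
xm = (x1 + x2) / 2;                      /* mid point of the segment */
ym = (y1 + y2) / 2;
if (code(xm, ym) == 0)                   /* mid point visible: the boundary lies beyond it */
{
x1 = xm;
y1 = ym;
}
else                                     /* mid point invisible: the boundary lies before it */
{
x2 = xm;
y2 = ym;
}
}
*vx = x1;                                /* the last visible point found */
*vy = y1;
}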
Polygon clipping
A polygon is a closed figure bounded by line segments. While
common sense tells us that the figure can be broken into individual lines, each
being clipped individually, in certain applications, this method does not work.
Look at the following example.
[Figure: a solid arrow crossing a screen edge, which cuts it at the points A and B]
A solid arrow is being displayed. Suppose the screen edge is as
shown by dotted lines. After clipping, the polygon becomes opened out at the
points A and B. But to ensure that the look of solidity is retained, we should
close the polygon along the line A-B. This is possible only if we consider the
arrow as a polygon, not as several individual lines.
Hence we make use of special polygon clipping algorithms – the
most celebrated of them is proposed by Sutherland and Hodgeman.
visible and so on). Now coming back to the algorithm: it tests each vertex of the
given polygon in turn against a clipping edge e. Vertices that lie on the visible
side of e are included in the output polygon, while those that are on the
invisible side are discarded. The next stage is to check whether the vertex vi
(say) lies on the same side of e as its predecessor vi-1. If it does not, its
intersection with the clipping edge e is to be evaluated and added to the output
polygon.
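A rough sketch of one such stage in C is given below; it is an illustrative fragment, in which inside() and intersect() are assumed helpers that test a vertex against the current clipping edge e and compute the intersection of a polygon edge with e respectively:
struct point { float x, y; };
int inside(struct point p);                             /* assumed: is p on the visible side of e? */
struct point intersect(struct point a, struct point b); /* assumed: intersection of ab with e */
/* clips the polygon in[] (n vertices) against one clipping edge and writes the
result into out[]; returns the number of output vertices */
int clipAgainstEdge(struct point in[], int n, struct point out[])
{
int i, m = 0;
struct point s, p;
s = in[n - 1];                          /* the predecessor of the first vertex */
for (i = 0; i < n; i++)
{
p = in[i];
if (inside(p))                          /* vertex on the visible side of the edge */
{
if (!inside(s))
out[m++] = intersect(s, p);             /* crossed from the invisible to the visible side */
out[m++] = p;                           /* visible vertices are always kept */
}
else if (inside(s))
out[m++] = intersect(s, p);             /* crossed from the visible to the invisible side */
s = p;                                  /* the current vertex becomes the predecessor */
}
return m;
}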
[Figure: a polygon with vertices v1 to v5 being clipped against the edges of a
rectangular screen abcd]
Now consider the vertex v3. v2 and v3 are on different sides of ab, so
compute the intersection of the edge v2-v3 with ab. Let it be i2 and include i2 in the
output polygon. Now v3, v4 and v5 are all on the same side (the visible side)
of ab, and hence when considered one after the other, they are
included in the output polygon straightaway.
Now the output polygon of stage (1) looks as in the figure below.
Now repeat the same sequence with respect to the edge bc for this
output polygon of stage (1). v1, i1 and i2 are on the same side of bc and hence get
included in the output polygon of stage (2). Since i2 and v3 are on different
sides of the line bc, the intersection of bc with the segment i2-v3 is computed. Let
this point be i3. Similarly, since v3 and v4 are on different sides of bc, their intersection
with bc is computed as i4. v4 and v5 are on the same side of bc and hence pass the
test trivially.
[Figure: the output polygon after clipping against the edge bc, showing the
intersection points i1 to i4 and the surviving vertices]
After going through two more clippings against the edges cd and
da, the clipped figure looks like the one below
[Figure: the final clipped polygon after clipping against all four edges, bounded by
the intersection points i1 to i8 and the vertex v4]
It may be noted that the algorithm works correctly for all sorts of
polygons.
Assuming a screen of some size, say 1024 x 1200 pixels, this size
gives the maximum size of the picture that we can represent. But the picture
on hand need not always correspond to this size. Common sense
suggests that if the size of the picture to be displayed is larger than the size of
the screen, two options are possible: (i) clip the picture against the screen edges
and display the visible portions. This will need a fairly large amount of
computation, but in the end we will be seeing only a portion of the picture.
(ii) Scale down the picture (we have already seen how to enlarge/scale down a
point or a set of points by using matrix transformations). This would enable us
to see the entire picture, though with smaller dimensions.
The converse can also be true. If we have a very small picture to be
displayed on the screen, we can either display it as it is, thereby seeing only a
cramped picture, or scale it up so that the entire screen is used to get a better
view of the same picture.
However, a picture need not always be presented on the complete
screen. More recent applications allow us to see different pictures on the
different part of the screen. i.e., the screen is divided into smaller rectangles
and each rectangle displays a different picture. Such a situation is
encountered when several pictures are being viewed simultaneously either
because we want to work on them simultaneously or we want to view several of
them for comparison purposes. Now, each of these smaller rectangles that form
the space for one such picture is called a “window” and it should be possible for
us to open several of these windows at one time and view the pictures. In such
a scenario, the problem is still the same: of trying to fit the picture into the
rectangle meant for it, i.e. of scaling the picture into its window. The only
change is that since the window sizes are different for different pictures, we
should have a general transformation mechanism that can map a picture of
any given size to fit into any window of any given size. Incidentally, we call the
coordinate system of the picture the “world coordinate” system.
This concept of mapping the points between the two coordinate
systems is called the “windowing transformation”
[Figure: a picture enclosed by a window bounded by wxl and wxr along the x-axis
and wyb and wyt along the y-axis]
The dotted lines indicate the window while the picture is in full
lines. The window is bounded by the coordinates wxl and wxr (the x-coordinates
of the left side and the right side of the window) and wyt and wyb (the
y-coordinates of the top and the bottom of the window). It is
easy to see that these coordinates enclose a window between them (the dotted
rectangle of the figure).
[Figure: the view port on the screen, bounded by vxl and vxr along the x-axis and
vyb and vyt along the y-axis]
Now consider any point (xw, yw) on the window. To convert this to
the view port coordinates, the following operations are to be done in
sequence.
i) Scale the window coordinates so that the sizes of the
window and the view port match. This will ensure that the entire window
fits into the view port without leaving blank spaces in the view port. This
can be done by simply changing the x and y coordinates in the ratio of
the x-size of the view port to the x-size of the window and the y-size of the
view port to the y-size of the window respectively,
i.e. (vxr - vxl) / (wxr - wxl) and (vyt - vyb) / (wyt - wyb).
ii) Since the origins of the window and the view port need not
coincide with their world coordinate system and the screen
coordinate system respectively, we have to shift them correspondingly.
This can be achieved by the following sequence.
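Putting the scaling of step (i) and the shifting of step (ii) together, the overall window-to-viewport mapping works out to the pair of formulas sketched below; this is an illustrative helper using the notation of the figures (wxl, wxr, wyb, wyt for the window and vxl, vxr, vyb, vyt for the view port):
/* maps a window point (xw, yw) to the corresponding view port point (*xv, *yv) */
void windowToViewport(float xw, float yw,
float wxl, float wxr, float wyb, float wyt,
float vxl, float vxr, float vyb, float vyt,
float *xv, float *yv)
{
float sx = (vxr - vxl) / (wxr - wxl);   /* x scaling ratio of step (i) */
float sy = (vyt - vyb) / (wyt - wyb);   /* y scaling ratio of step (i) */
*xv = vxl + (xw - wxl) * sx;            /* scale, then shift to the view port origin */
*yv = vyb + (yw - wyb) * sy;
}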
Review Questions
1. Define Clipping
2. Define Windowing
3. Explain the 4 bit code to define regions used in rejection method.
4. What is the other name of the most popular polygon clipping algorithm?
5. With usual notations, state the equations that transform the window
coordinates to screen coordinates.
Answers
1. The process of dividing the picture into its visible and invisible portions,
allowing the invisible portion to be discarded.
2. Specifying an area (or a window) around a picture in world coordinate, so
that the contents of the window can be displayed or used otherwise.
3.
1001 1000 1010
0001 0000 0010
0101 0100 0110
UNIT 7
GRAPHICAL INPUT TECHNIQUES
7.1 Introduction
7.2 Graphical Input Techniques
7.3 Positioning Techniques
7.4 Positional Constraints
7.5 Rubber band Techniques
7.1 Introduction
the letters in the same manner, trial after trial, or to make him draw his graphs
to exact precision would be time consuming. In other words, whereas a
common user can be made to be aware of what he wants and would be willing
to get it as fast and accurately as possible, making him acquire graphic arts
skills would be unreasonable. On the other hand, it is desirable to make the
computer understand what he wants to input or, alternately, we can make the
input devices cover up for the minor lapses of the user and feed a near perfect
input to the computer - like making it close the circle when the user stops just
short of closing it or ends up placing the two ends one next to the other. There
are several astonishingly simple ways to make the life of the user more
comfortable and at the same time improve the effectiveness of the input device.
Similarly, while locating a center of the circle the cross may get
located very near to the center of the circle, but not exactly at the center. In
fact, it is easy to appreciate that in the case of putting a rectangle at the end of
the straight lines, one may often end up operating between the second and third
stages several times before (if at all) successfully reaching position (i). One
of the methods of helping the user is to put a “constraint” on the position of the
box, i.e. when the distance between the box and the end of the line is very
small, the box automatically aligns itself on the edge of the line. i.e. it is
enough if the user brings it to either of the positions (ii) & (iii) and the software
automatically aligns it to position (i).
Though we are not considering the implementation aspects of the
same, it is easy to note that writing an algorithm for this is fairly straight
forward. Assuming each line ends at an integer pixel value, if the edge of
the box is brought to a value which is a fraction above / below that value, we
automatically round it off to the pixel value. For example, if the (x,y) value of
the end of the line is, say, (10,50) and a box is brought to a position, say, (10.6,
50.7), the values are automatically changed to (10,50), similar being the case if
the box position is, say, (9.7, 49.8). It is easy to see that the first example is
the case where the box is slightly above the line and the second where it is
inside the line.
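A tiny sketch of such a constraint is given below; the function name is illustrative and the screen coordinates are assumed to be non-negative:
/* snaps a fractional cursor position to the nearest integer pixel value, so that
a box dropped at (10.6, 50.7) or (9.7, 49.8) aligns itself at (10, 50) */
void snapToPixel(float xin, float yin, int *xout, int *yout)
{
*xout = (int)(xin + 0.5);   /* round off to the nearest pixel along x */
*yout = (int)(yin + 0.5);   /* round off to the nearest pixel along y */
}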
This type of constraint is often called a “modular
constraint”. There can be other types of constraints as well. If a certain figure
contains only horizontal and vertical lines, say as in a grid design, any
angular line can be brought into one of these positions by putting an
angular constraint that no straight line can be at any angle other than 0 and
90 degrees. The same can be extended to draw lines at any particular angle.
Now let us go back to the problem of attaching a box to the end of
a line. Suppose the end of the line does not terminate always at integer value.
Then positional constraints cannot be used. In such cases, we can think of
gravity constraints, wherein the box gets attached to the line because of the
“gravitational force” of the line i.e. it gets attached to the nearest free point
which forms the end of line. Again this relieves the user of the difficulty of
exactly putting the box to the end of the line.
points, it may not be possible to judge the course of the line. Hence, the
positioning can be done dynamically, however, rubber band techniques
normally demand fairly powerful local processing to ensure that lines are
drawn fast enough.
Dragging
Dimensioning Techniques
[Figure: dimensioning technique, with the coordinates (30, 40), (80, 60) and
(80, 100) displayed alongside the drawing]
Selection of Objects
moving, deletion, copying or whatever else needs to be done. But the actual selection
process poses several problems.
The first one is about the choice of coordinates. When a point is
randomly chosen at the starting point of the selection process, the system
should be able to properly identify its coordinates. The second problem is about
how much is being selected. This can be indicated by selecting a number of
points around the figure or by enclosing the portion selected by a rectangle.
The other method is to use multiple keys i.e. position the cursor at the first
point of selection, press certain combination of keys, move the cursor to the
final position and again press certain combination of keys, so that the figure
lying in between them is selected. The mouse facilitates the same operation by
the use of the multiple buttons on it. Once the selection is made, normally the
system is supposed to display the portion selected so that the user can know he
has actually selected what he wanted to. This feedback is given either by
changing the color of the selected portion, modifying the brightness or by blinking.
Menu selection
This is one of the special cases of selection where the user would
be able to choose and operate from a set of available commands / figures
displayed on the screen. This concept is called the “menu” operation, where
you select the item from those available on the menu card. The use of the mouse as
an input technique normally implies menus being provided by the system. The
menu concept helps the user to overcome the difficulty of having to draw
simple and often used objects by providing them as a part of the system.
Review Question:
Name the type of input facility available to the user in each of the following
cases
Answers :
1. Dragging
3. Modular constraint
5. Dimensioning technique.
6. Menu selection.
UNIT 8
THREE DIMENSIONAL GRAPHICS
8.1 Introduction
8.2 Need for 3-Dimensional Imaging
8.3 Techniques for 3-Dimensional displaying
8.4 Parallel Projections
8.5 Perspective projection
8.6 Intensity cues
8.7 Stereoscope effect
8.8 Kinetic depth effect
8.9 Shading
8.1 INTRODUCTION
8.9 Shading
Those who have done artistic pictures know that shading is a very
powerful method of showing depth. Depending on the direction of incident light
and the depth of the point under consideration, shades are generated. If they
can be represented graphically, excellent ideas about depth can be created in
the viewer. Raster graphics, which allow each pixel to be set to a large number
of brightness values, is ideally suited for such shading operations.
Review questions
Answers:
1. Animation.
2. Certain experiments may be too costly; certain other experiments need a lot of
changes to be made, which are easier to incorporate on a computer.
3. Most of the objects we see in real life are 3-dimensional. Also, in applications
like animation or simulation, where realism is of prime importance, not being able to
give a sense of depth would make the whole concept useless.
4. Parallel Projection.
5. Perspective Projection.
6. The technique of showing two different pictures which are slightly displaced
from each other, so that the viewer gets the idea of a third dimension, is called the
stereoscopic technique.
7. Either by using two screens displaced slightly from each other or by using a
single screen to produce both the views, one after the other, at speeds greater
than 20 times per second.
8. In moving objects, the far-off points move slowly compared to the nearby
points. If a similar technique is used in moving pictures, the viewer gets a cue
about the depth of the object.
UNIT 9
SOLID AREA SCAN CONVERSION
9.1) Introduction
9.2) Solid Area Scan Conversion
9.3) Scan Conversion of Polygons
9.4) Coherence
9.5) The YX Algorithm
9.6) Singularities
9.7) Algorithm Singularity
9.1 Introduction
In this unit, we learn about the concept of scan conversion of
polygons. We talk about polygons, since any object of any random shape can be
thought of as a polygon, a figure bounded by a number of sides. Thus if we are
able to do certain operations on polygons, they can be extended to all other
bodies.
So far, we have seen the line drawing algorithms. But when only a
figure bounded by a number of sides is given, things become complex when a
large number of polygons are there on the screen. We do not know whether the
objects behind the present object are visible or not. So, we would like to make a
distinction between pixels that are inside a polygon and those that are
outside and display them differently. The concept of identifying such pixels is
called “scan conversion”, since we deal with the pixels along one scan line at
a time.
We make use of the property of coherence, i.e. pixels in the same
neighborhood share similar properties. Using this, we introduce you to the yx
algorithm, which makes use of the intersections of polygons with the scan lines
and the concept of coherence to suggest an efficient scan conversion
methodology.
i) Find out the pixels that lie within the solid area and those that
lie outside it. One simple way of representing such pixels is to use a 1 to
indicate pixels that lie inside the area and a 0 to indicate pixels outside.
This pattern of bits is called the "mask" of the area.
ii) To determine the shading rule. The shading rule deals with
the intensity of each pixel within the solid area. To give a realistic image
of depth, it is essential that the "shade" of each pixel be indicated
separately, so as to give a coherent idea of depth. Such a mechanism gives the
effect of shadows to pictures, so that parts that lie nearer to the observer
cast shadows on those that are farther away. A variable shading technique is of
prime importance in presenting realistic 3-dimensional pictures.
iii) To determine the priority. When one speaks of 3-dimensions
and a number of objects, the understanding is that some of the objects that are
nearer are likely to cover the objects that are far away. Since each pixel can
represent only one object, the pixel should become a part of the object that is
nearest to the observer i.e. a priority is assigned to each object and if a pixel
forms part of more than one object, then it will represent the object with the
highest priority amongst them.
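As a rough illustration, the three pieces of information discussed above –
mask, shade and priority – might be kept together for every pixel, as in the
sketch below. The structure, field names and frame size are purely
illustrative and not part of any standard library.

/* A minimal per-pixel record holding mask, shade and priority.        */
#include <string.h>

#define WIDTH  640
#define HEIGHT 480

struct Pixel {
    unsigned char mask;      /* 1 = inside the solid area, 0 = outside */
    unsigned char shade;     /* intensity chosen by the shading rule   */
    unsigned char priority;  /* priority of the object drawn here      */
};

static struct Pixel frame[HEIGHT][WIDTH];

/* A pixel is overwritten only by an object of equal or higher priority. */
void plot(int x, int y, unsigned char shade, unsigned char priority)
{
    if (priority >= frame[y][x].priority) {
        frame[y][x].mask = 1;
        frame[y][x].shade = shade;
        frame[y][x].priority = priority;
    }
}

void clear_frame(void)
{
    memset(frame, 0, sizeof frame);   /* everything outside, background */
}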
9.4 Coherence
a1 a2
a
b
c
Suppose we want to identify all the pixels that lie inside the
polygon and those that lie outside. This can be done in stages, scan line by
scan line. Consider scan line a, which is made up of a number of pixels.
Beginning with the leftmost point of the scan line, compute the intersections
of the edges of the polygon with this particular scan line. In this case there
are two intersections (a1 and a2). Starting at the leftmost pixel, all pixels
lie outside the polygon up to the first intersection. From then on, all pixels
lie inside the polygon until the next intersection; afterwards, all pixels
again lie outside. Now consider line b. It has more than two intersections with
the polygon. In this case, the first intersection marks the beginning of a
series of pixels inside the polygon, the second marks its end, the third
indicates that the following pixels are again inside the polygon, and the
fourth intersection concludes the series.
1. For every edge of the polygon, find its intersections with all
the scan lines. (This is a fairly straightforward process: beginning at one end
of the edge, every incremental value of y gives the next scan line, and hence a
DDA-type algorithm can be written to compute all such intersections very fast
and quite efficiently. However, we leave this portion to the student.)
Build a list of all these (x, y) intersections.
2. Sort the list so that the intersections of each scan line are at one
place, then sort them again with respect to the x coordinate values.
(Understanding this concept is central to the algorithm. To simplify the
operations, in stage 1 we simply computed the intersections of every edge with
every intersecting scan line. This gives a fairly large number of unordered
points. Now sort these points w.r.t. their y values, i.e. the scan line values.
Assuming that the first scan line has a y value of 1, we get the list of its
intersections with every edge, then of the scan line with value 2, and so on.
At this stage, looking at the previous example, we have the intersections of a
listed first, then the intersections of b and then those of c. Now sort the
intersections of each scan line separately w.r.t. their x values; then the
points a1 and a2 appear in order, and similarly those of b and c.)
9.6 Singularities
of it. He simply goes about painting this second polygon, without bothering
about the previous polygon. This new polygon, let us say polygon 2, has a
higher priority than polygon 1, i.e. when the two polygons appear together,
polygon 2 is visible completely and polygon 1 is visible only where polygon 2
is not obscuring it. Now, once the second polygon is painted in a different
color, it is easy to see that the parts of polygon 1 that are covered by
polygon 2 automatically become invisible. Similarly, if a polygon 3 is painted,
it gets the highest priority in the display.
3. Remove pairs of nodes from this sorted list and scan convert as
before.
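The steps can be put together in a short program. The sketch below is only an
illustration of the yx idea for a single polygon: it computes the edge /
scan-line intersections, sorts them on y and then on x, and takes them in
pairs. The polygon data and array sizes are made up, and singular vertices are
not treated specially here.

/* A minimal yx scan conversion sketch for one polygon.                */
#include <stdio.h>
#include <stdlib.h>

struct Hit { int y; double x; };        /* one edge / scan-line intersection */

static int cmp_yx(const void *a, const void *b)
{
    const struct Hit *p = a, *q = b;
    if (p->y != q->y) return p->y - q->y;          /* first sort on y  */
    return (p->x < q->x) ? -1 : (p->x > q->x);     /* ... then on x    */
}

int main(void)
{
    /* An example polygon (closed, three vertices).                    */
    double px[] = { 2, 10, 6 };
    double py[] = { 2,  4, 9 };
    int n = 3, i, k = 0;
    struct Hit hits[1024];

    /* 1. Intersect every edge with every scan line it crosses.        */
    for (i = 0; i < n; i++) {
        double x1 = px[i], y1 = py[i];
        double x2 = px[(i + 1) % n], y2 = py[(i + 1) % n];
        int ylo = (int)(y1 < y2 ? y1 : y2) + 1;
        int yhi = (int)(y1 < y2 ? y2 : y1);
        int y;
        for (y = ylo; y <= yhi; y++) {  /* a DDA could do this incrementally */
            hits[k].y = y;
            hits[k].x = x1 + (x2 - x1) * (y - y1) / (y2 - y1);
            k++;
        }
    }

    /* 2. Sort the list on y, then on x.                               */
    qsort(hits, k, sizeof hits[0], cmp_yx);

    /* 3. Take the intersections in pairs: each pair bounds a run of
          pixels that lie inside the polygon.                          */
    for (i = 0; i + 1 < k; i += 2)
        printf("scan line %d: inside from x=%.1f to x=%.1f\n",
               hits[i].y, hits[i].x, hits[i + 1].x);
    return 0;
}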
Review questions
1. What is scan conversion?
2. Why are we specific about polygons?
3. What is priority in the concept of a pixel?
4. What is coherence?
5. Why is the yx algorithm called so?
6. What is a singularity? How is it taken care of in the yx algorithm?
7. How is a singular point identified?
Answers:
1. The idea of identifying and converting pixels along a scan line that lie inside
the polygon so that they can be displayed differently.
2. Because once we are able to do certain operations on polygons, they can be
extended to others, since most of the regular and irregular boundaries can be
thought of as polygons.
3. In 3-dimensional views, when more than one object stands one behind
another, the same pixel on the screen represents more than one object. So the
priority for the pixel as to which object it should represent is important.
4. Pixels in the same neighborhood share similar properties – most often. If a
pixel is inside a polygon, most probably its neighbors will also be inside the
same polygon. Hence, the same set of operations need not be repeated on each
of them.
5. Since it first sorts the elements with respect to y and then with respect to x.
6. When a vertex coincides with a scan line, it is a singularity, because the
scan line enters and leaves the polygon at the same place. Such points are
counted as 2 intersections for the algorithm.
7. At a singular point, an edge of the polygon changes its direction.
UNIT 10
THREE DIMENSIONAL TRANSFORMATIONS
10.1) Introduction
10.2) Three Dimensional transformation
10.3) Translations
10.4) Scaling
10.5) Rotation
10.6) Viewing Transformation
10.7) The Perspective Transformation
10.8) Three Dimensional Clipping
10.9) Clipping
10.10) Perspective view of Cube
10.1 Introduction
In this unit, we look into the basics of 3-D graphics, beginning with
transformations. In fact, the ability to transform a 3-dimensional point, i.e.
a point represented by 3 coordinates (x, y, z), is of immense importance not
only for the various operations on the picture, but also for the ability to
display a 3-D picture on a 2-D screen. We briefly see the various
transformation operations – they are nearly similar to the 2-D operations. We
also see the concepts of clipping and windowing in 3-D.
10.3 Translations
[x1 y1 z1 1] = [x y z 1]   1    0    0    0
                           0    1    0    0
                           0    0    1    0
                           Tx   Ty   Tz   1
10.4 Scaling
[x1 y1 z1 1] = [x y z 1]   Sx   0    0    0
                           0    Sy   0    0
                           0    0    Sz   0
                           0    0    0    1
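As a small illustration of how these matrices are applied, the sketch below
multiplies a homogeneous row vector [x y z 1] by a 4x4 matrix built by
translation() and scaling() helpers. The helper names are purely illustrative,
not a standard API.

/* A minimal sketch of p' = p * M with row vectors, as laid out above. */
#include <stdio.h>

typedef double Mat4[4][4];

void transform(const double p[4], Mat4 m, double out[4])
{
    int col, row;
    for (col = 0; col < 4; col++) {
        out[col] = 0.0;
        for (row = 0; row < 4; row++)
            out[col] += p[row] * m[row][col];
    }
}

void translation(Mat4 m, double tx, double ty, double tz)
{
    int i, j;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            m[i][j] = (i == j);             /* start from the identity */
    m[3][0] = tx; m[3][1] = ty; m[3][2] = tz;
}

void scaling(Mat4 m, double sx, double sy, double sz)
{
    int i, j;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            m[i][j] = 0.0;
    m[0][0] = sx; m[1][1] = sy; m[2][2] = sz; m[3][3] = 1.0;
}

int main(void)
{
    double p[4] = { 1.0, 2.0, 3.0, 1.0 }, q[4];
    Mat4 t;
    translation(t, 5.0, -1.0, 2.0);
    transform(p, t, q);
    printf("translated point: %.1f %.1f %.1f\n", q[0], q[1], q[2]);
    return 0;
}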
10.5 Rotation
(Figures: rotation about each of the coordinate axes, each viewed along that
axis towards the origin, with the corresponding transformation matrix.)

Rotation about the x axis – transformation matrix:

[x1 y1 z1 1] = [x y z 1]   1     0       0      0
                           0    cosθ   -sinθ    0
                           0    sinθ    cosθ    0
                           0     0       0      1
Suppose the axis passes through the origin, but does not coincide
with any of the axes, then the axis itself is to be first aligned to one of the axes
before doing the transformations. The sequence of events appears as follows.
(a) Rotate the axis through the required angle to make it coincide
with one of the coordinate axes (depending on which axis its angle of
deviation is known with respect to).
(b) Rotate the point (desired to be rotated) about this axis.
(c) Rotate the axis back to its original angle of deviation.
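The three steps above compose into a single matrix. The sketch below takes,
purely as an illustration, an axis through the origin lying in the yz-plane at
an angle phi from the z axis; with row vectors the composite is
M = Rx(-phi) * Rz(theta) * Rx(phi) – align the axis with z, rotate about z,
rotate the axis back. The helper names and the sanity check are assumptions of
this sketch, not part of the text.

/* Composing rotations for an axis through the origin (special case).  */
#include <stdio.h>
#include <math.h>

typedef double Mat4[4][4];

static void identity(Mat4 m)
{
    int i, j;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            m[i][j] = (i == j);
}

static void rot_x(Mat4 m, double a)     /* the matrix printed above    */
{
    identity(m);
    m[1][1] = cos(a);  m[1][2] = -sin(a);
    m[2][1] = sin(a);  m[2][2] =  cos(a);
}

static void rot_z(Mat4 m, double a)     /* same convention, about z    */
{
    identity(m);
    m[0][0] = cos(a);  m[0][1] = -sin(a);
    m[1][0] = sin(a);  m[1][1] =  cos(a);
}

static void mul(Mat4 a, Mat4 b, Mat4 out)      /* out = a * b          */
{
    int i, j, k;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            out[i][j] = 0.0;
            for (k = 0; k < 4; k++)
                out[i][j] += a[i][k] * b[k][j];
        }
}

static void apply(const double p[4], Mat4 m, double out[4])
{
    int c, r;
    for (c = 0; c < 4; c++) {
        out[c] = 0.0;
        for (r = 0; r < 4; r++)
            out[c] += p[r] * m[r][c];
    }
}

int main(void)
{
    double phi = 0.6, theta = 1.1;
    Mat4 a, b, c, tmp, m;
    rot_x(a, -phi);  rot_z(b, theta);  rot_x(c, phi);
    mul(a, b, tmp);  mul(tmp, c, m);    /* M = Rx(-phi) Rz(theta) Rx(phi) */

    /* Sanity check: a point lying on the rotation axis must stay put.  */
    double axis[4] = { 0.0, sin(phi), cos(phi), 1.0 }, out[4];
    apply(axis, m, out);
    printf("axis maps to (%.3f %.3f %.3f)\n", out[0], out[1], out[2]);
    return 0;
}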
[xe ye ze 1] = [xw yw zw 1] V
Xs / D = Xe / Ze and Ys / D = Ye / Ze
The numbers Xs and Ys can be converted to fractions by dividing
them by the screen size. This operation not only gives us numbers that are
fractions, but also makes them dimensionless (we are dividing one dimension by
another).
Xs / D = Xe / (S Ze) and Ys / D = Ye / (S Ze)
or Xs = D Xe / (S Ze) and Ys = D Ye / (S Ze)
Alternatively, they can be converted to screen coordinates by
including a specification of the location of the viewport in which the image
is displayed:
Xs = (D Xe / (S Ze)) Vsx + Vcx and Ys = (D Ye / (S Ze)) Vsy + Vcy
(Figure: the viewport on the screen, of size Vsx by Vsy, shifted by (Vcx, Vcy)
with respect to the screen coordinates.)
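The mapping from eye coordinates to a point inside the viewport can be written
directly from the equations above. The sketch below is a minimal illustration;
the numeric values of D, S and the viewport are invented.

/* Eye coordinates -> screen coordinates inside a viewport.            */
#include <stdio.h>

void to_screen(double xe, double ye, double ze,
               double D, double S,
               double vsx, double vsy, double vcx, double vcy,
               double *xs, double *ys)
{
    *xs = (D * xe / (S * ze)) * vsx + vcx;
    *ys = (D * ye / (S * ze)) * vsy + vcy;
}

int main(void)
{
    double xs, ys;
    to_screen(4.0, 2.0, 10.0,     /* a point in eye coordinates        */
              1.0, 1.0,           /* screen distance D, half size S    */
              320.0, 240.0,       /* viewport dimensions Vsx, Vsy      */
              320.0, 240.0,       /* viewport shift Vcx, Vcy           */
              &xs, &ys);
    printf("screen point: (%.1f, %.1f)\n", xs, ys);
    return 0;
}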
10.9 CLIPPING
First bit: 1 if the point is to the left of the pyramid, else 0;
similarly,
Second bit: 1 if the point is to the right of the pyramid.
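Only the first two bits of the region code survive in the text above; the
sketch below therefore fills in the remaining bits under an assumption of this
sketch alone, namely a viewing pyramid with a 45 degree half angle, so that a
point is inside when -Ze <= Xe <= Ze and -Ze <= Ye <= Ze. It illustrates the
coding idea, not the text's exact definition.

/* Region codes for 3-D clipping against an assumed viewing pyramid.   */
#include <stdio.h>

#define LEFT   1   /* bit 1: to the left of the pyramid                */
#define RIGHT  2   /* bit 2: to the right of the pyramid               */
#define BELOW  4   /* bit 3: below the pyramid                         */
#define ABOVE  8   /* bit 4: above the pyramid                         */

int region_code(double xe, double ye, double ze)
{
    int code = 0;
    if (xe < -ze) code |= LEFT;
    if (xe >  ze) code |= RIGHT;
    if (ye < -ze) code |= BELOW;
    if (ye >  ze) code |= ABOVE;
    return code;
}

int main(void)
{
    /* Both endpoints of a line can be coded; a line is trivially
       accepted if both codes are 0 and trivially rejected if the
       bitwise AND of the two codes is non-zero.                       */
    printf("code = %d\n", region_code(-5.0, 1.0, 2.0));   /* LEFT set  */
    return 0;
}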
T1 =   1    0    0     0
       0    1    0     0
       0    0    1     0
      -6   -8   -7.5   1
2. Rotate the coordinate system about the x' axis by -90°. Because
we require the inverse transformation, we substitute θ = 90°.
T2 =   1    0    0    0
       0    0   -1    0
       0    1    0    0
       0    0    0    1
T3 =  -0.8   0    0.6   0
       0     1    0     0
      -0.6   0   -0.8   0
       0     0    0     1
4. Rotate about the x' axis by an angle ϕ so that the origin of the
original coordinate system will lie on the z' axis. We have cos(-ϕ) = cos ϕ =
10/12.5 and sin(-ϕ) = -sin ϕ = -7.5/12.5:
T4 =   1     0      0     0
       0     0.8    0.6   0
       0    -0.6    0.8   0
       0     0      0     1
5. Finally, reverse the sense of the z' axis in order to create a left-handed
coordinate system that conforms to the conventions of the eye coordinate
system. A scaling matrix is used:
T5 =   1    0    0    0
       0    1    0    0
       0    0   -1    0
       0    0    0    1
N =    4    0    0    0
       0    4    0    0
       0    0    1    0
       0    0    0    1
All the details of the transformations have now been specified. Each
vertex of the cube is transformed by the matrix V N and is then clipped and
converted to screen coordinates using the equations given above.
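To see how the pieces fit together, the sketch below simply chains the
matrices listed above (with T5 taken as the z-reversal) and applies the
product to one cube vertex. The vertex itself is made up, since the cube's
coordinates are not reproduced here.

/* Chaining the example matrices: V = T1*T2*T3*T4*T5, then V*N.        */
#include <stdio.h>

typedef double Mat4[4][4];

static void mul(Mat4 a, Mat4 b, Mat4 out)
{
    int i, j, k;
    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++) {
            out[i][j] = 0.0;
            for (k = 0; k < 4; k++)
                out[i][j] += a[i][k] * b[k][j];
        }
}

int main(void)
{
    Mat4 t1 = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{-6,-8,-7.5,1}};
    Mat4 t2 = {{1,0,0,0},{0,0,-1,0},{0,1,0,0},{0,0,0,1}};
    Mat4 t3 = {{-0.8,0,0.6,0},{0,1,0,0},{-0.6,0,-0.8,0},{0,0,0,1}};
    Mat4 t4 = {{1,0,0,0},{0,0.8,0.6,0},{0,-0.6,0.8,0},{0,0,0,1}};
    Mat4 t5 = {{1,0,0,0},{0,1,0,0},{0,0,-1,0},{0,0,0,1}};
    Mat4 n  = {{4,0,0,0},{0,4,0,0},{0,0,1,0},{0,0,0,1}};
    Mat4 v, vn, tmp;
    int c, r;

    mul(t1, t2, tmp); mul(tmp, t3, v);
    mul(v, t4, tmp);  mul(tmp, t5, v);        /* V = T1 T2 T3 T4 T5   */
    mul(v, n, vn);                            /* V * N                */

    double p[4] = { 0, 0, 0, 1 }, q[4];       /* one illustrative vertex */
    for (c = 0; c < 4; c++) {
        q[c] = 0.0;
        for (r = 0; r < 4; r++)
            q[c] += p[r] * vn[r][c];
    }
    printf("transformed vertex: %.2f %.2f %.2f\n", q[0], q[1], q[2]);
    return 0;
}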
Questions
Answers:
1. 4X4
2.
a) Rotate the axis to make it coincide with x,y or z axis
b) Rotate the point suitably over this axis.
c) Bring the axis back to its original position by the sequence of reverse
transformations.
3. Perspective projection.
4. Xs = (D Xe / (S Ze)) Vsx + Vcx
and Ys = (D Ye / (S Ze)) Vsy + Vcy
Where Xs and Ys are screen coordinates,
Xe and Ye are the eye coordinates
S is half screen size
Vsx and Vsy are the dimensions of the viewport
Vcx and Vcy are the shift of the viewport with respect to the screen coordinates.
UNIT 11
HIDDEN SURFACE REMOVAL
11.1) Introduction
11.2) Need for hidden surface removal
11.3) The Depth - Buffer Algorithm
11.4) Properties that help in reducing efforts
11.5) Scan Line coherence algorithm
11.6) Span - Coherence algorithm
11.7) Area-Coherence Algorithms
11.8) Warnock’s Algorithm
11.9) Priority Algorithms
11.1 Introduction
side faces). The ability to identify the faces and surfaces that are to be covered
and the extent of coverage in the case of partially covered surfaces in real time
is not only computationally intensive, but also analytically daunting. When
only wire frame types of drawings are being displayed, the task gets somewhat
simplified to that of “hidden line removal” – identifying those lines that should
not be shown. However, when solid objects are being considered, the task
becomes more complex because entire surfaces need to be identified for
removal.
A large number of algorithms are available for the job, though no
single algorithm can be thought of as all-encompassing, capable of being
efficient under all possible conditions. However, almost all of them share some
common features. The first is that at some point the algorithm tends to sort
the objects in the order of their Z-distance from the viewer and tries to
eliminate the farthest ones. But this sorting can be a difficult task, at least
in some cases, since often an object cannot be identified with a unique
distance Z: when several parts of the object have different Z coordinates,
simple, direct sorting methods may become inadequate.
The algorithms can also work either with respect to the object
space or the image space, and one should be able to draw a clear distinction
between them. The object space is the space occupied by the pictures created
by the algorithms. However, before these pictures can be displayed, they
undergo various operations – clipping, windowing, perspective transformation
etc. This final set of pictures, ready for display on the screen, is called the
image space. Object space algorithms tend to calculate values with as much
precision as feasible, since these calculations often form the basis for the
next set of calculations, whereas image space algorithms calculate with a
precision in line with that available on the display device. Any higher
precision, achieved with great effort, would be wasted, since the display
device cannot handle it anyway. Further, the computational effort of object
space algorithms tends to increase rapidly with the number of objects, since
every object has to be tested against every other object, whereas in image
space computations the increase is much slower, since one works with the
number of pixels, irrespective of the number of objects in the scene.
pixels in a given resolution of display device is a constant.
a. For every pixel, set its depth and intensity values to the
background value, i.e. at the end of the algorithm, if the pixel has not become
a part of any of the objects, it represents the background.
b. For each polygon in the scene, find the pixels that lie
within this polygon (which is nothing but the set of pixels that would be
chosen if this polygon were to be displayed completely).
For each of these pixels:
i) Calculate the depth Z of the polygon at that point (note
that a polygon which is inclined to the plane of the screen will have
different depths at different points).
ii) If this Z is less than the previously stored depth value for
this pixel, the new polygon is closer than the polygon the pixel was
representing earlier, and hence the new value of Z should be stored
(i.e. from now on the pixel represents the new polygon). The
corresponding intensity is stored in the intensity buffer.
One may note that at the end of the processing of all the polygons,
every pixel will have, in its intensity location, the intensity value of the
object it should display, and this can be displayed directly.
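A minimal sketch of the depth-buffer steps (a), (b), (i) and (ii) above is
given below. Polygons are reduced here to a rectangular span with a depth
function, which is a deliberate simplification; the frame size, shades and
depth functions are illustrative.

/* A tiny depth-buffer (z-buffer) sketch.                              */
#include <stdio.h>

#define W 8
#define H 8
#define FAR_AWAY   1.0e9
#define BACKGROUND 0

static double depth[H][W];
static int    intensity[H][W];

static void clear_buffers(void)
{
    int x, y;
    for (y = 0; y < H; y++)
        for (x = 0; x < W; x++) {
            depth[y][x] = FAR_AWAY;             /* step (a)            */
            intensity[y][x] = BACKGROUND;
        }
}

/* Step (b): for every pixel covered by the polygon, keep it only if it
   is nearer than whatever the pixel currently represents.             */
static void scan_polygon(int x0, int y0, int x1, int y1,
                         double z_at(int x, int y), int shade)
{
    int x, y;
    for (y = y0; y <= y1; y++)
        for (x = x0; x <= x1; x++) {
            double z = z_at(x, y);              /* step (i)            */
            if (z < depth[y][x]) {              /* step (ii)           */
                depth[y][x] = z;
                intensity[y][x] = shade;
            }
        }
}

static double slanted(int x, int y) { return 10.0 + x; }  /* inclined  */
static double flat(int x, int y)    { return 12.0; }      /* constant  */

int main(void)
{
    int x, y;
    clear_buffers();
    scan_polygon(0, 0, 7, 7, slanted, 1);
    scan_polygon(2, 2, 5, 5, flat, 2);    /* partly in front, partly behind */
    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++)
            printf("%d", intensity[y][x]);
        printf("\n");
    }
    return 0;
}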
This simple algorithm, as can be expected, works in the image
space. The scene should have been properly projected and clipped before the
algorithm is used.
The basic limitation of the algorithm is its computational
intensiveness. On a 1024 x 1024 screen, in the limiting case it will have to
evaluate the status of each of these pixels. In its present form, it does not
use coherence or other geometric properties to reduce the computational
effort.
To reduce the storage, the screen is sometimes divided into
smaller regions, say 50 x 50 or 100 x 100 pixels; the computations are made for
each of these regions, the region is displayed on the screen, and then the next
region is taken up. However, this can be both advantageous and disadvantageous.
It is obvious that such a division of the screen requires each of the polygons
to be processed for each of the regions, thereby increasing the computational
effort – this is the disadvantage. But when smaller regions are being
considered, it is possible to make use of various coherence tests, thereby
reducing the number of pixels to be handled explicitly.
can be totally removed from the calculations, thus reducing the effort
considerably.
ii) Overlap tests: Common sense gives us one simple idea.
An object can obscure another only if (a) one of them is farther away
than the other – obviously, two objects standing side by side cannot
obscure each other – and (b) even then, only if their extents overlap in
the x or y direction.
(Figure: overlap test illustrated with two polygons P1 and P2.)
(Figure: a scan line plane cutting through the polygons of the scene.)
If one travels along this scan line, the plane intersects one or more
polygons at different points. If these points of intersection are noted and
sorted in increasing order of x, we get a sort of xz algorithm which gives the
list of intersections with the different polygons.
Taking them in pairs, just as in the yx algorithm, one can divide
the entire plane into several spans:
i. Spans that do not lie within any polygon: the pixels
can be set to the background intensity.
ii. Spans that lie within a single polygon: all of them can
be set to the intensity of that polygon.
iii. Spans that are intersected by 2 or more polygons: in
such spans, the pixel values are set to the intensity of the nearest
polygon.
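The span classification can be sketched for a single scan line as below, under
the simplifying assumption that each polygon contributes one span with a
single depth on this scan line; the spans, shades and depths are invented for
the illustration.

/* Classifying spans along one scan line by nearest polygon.           */
#include <stdio.h>

#define LINE_W     40
#define BACKGROUND '.'

struct Span { int xl, xr; double z; char shade; };

void scan_line(const struct Span *s, int n, char out[LINE_W + 1])
{
    int x, i;
    for (x = 0; x < LINE_W; x++) {
        double nearest = 1.0e9;
        out[x] = BACKGROUND;                 /* case (i): no polygon    */
        for (i = 0; i < n; i++)
            if (x >= s[i].xl && x <= s[i].xr && s[i].z < nearest) {
                nearest = s[i].z;            /* cases (ii) and (iii):   */
                out[x] = s[i].shade;         /* the nearest polygon wins */
            }
    }
    out[LINE_W] = '\0';
}

int main(void)
{
    struct Span spans[] = { { 5, 25, 10.0, 'A' }, { 15, 35, 7.0, 'B' } };
    char line[LINE_W + 1];
    scan_line(spans, 2, line);
    printf("%s\n", line);   /* B is nearer where the two spans overlap  */
    return 0;
}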
Polygons that fall into categories (i) and (ii) are removed at each level.
If the remaining polygons can be easily resolved, the recursive process stops
at that level; else the process continues (with the polygons of categories (i)
and (ii) removed).
but for certain others B is nearer than A. Hence the priority list, prepared
based only on the depths, will have to be rearranged as follows.
Consider the last polygon in the list, say A. If it has no depth
overlap with its predecessors, then it has no overlap with any other polygon
and can remain at the end. Otherwise, if it has a depth overlap with one or
more polygons, denoted by the set {b}, then we have to check whether any
specific polygon B from this set is obscured by A. If yes, then B has no
business being at that position in the priority list, since A, which obscures
B, should have a higher priority than B. Corresponding modifications are made
to the list.
Based on these considerations, in the above figure A should have a
higher priority than B, though the Zmax of B is less than that of A.
The question is how to find out the relation “A obscures B”? Apply
the following steps in the same order to ascertain that A does not obscure B.
(a) Depth minimax test should indicate that A and B do not
overlap in depth and B is closer to the viewpoint than A. This test is
implemented by initially sorting by depth all polygons and by the way A and
{b} are selected.
(b) Minimax test in xy should indicate that A and B do not
overlap in X or Y.
(c) All vertices of A should be farther from the viewpoint than
the plane of B. This can be checked by substituting the x, y coordinates of
each vertex of A into the plane equation of B and solving for the depth of B.
(d) All vertices of B should be closer to the viewpoint than the
plane of A.
(e) A full overlap test should indicate that A and B do not
overlap in x or y.
The order is not very important, except that any one of the tests
being true indicates that A does not obscure B. Since the latter tests are more
involved, it is desirable that the order is followed so that one can avoid the
latter tests if possible.
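The cheap extent ("minimax") tests – (a), (b) and (e) above – can be sketched
as below. The extent structure is an assumption of this sketch, and the more
involved plane tests (c) and (d) are deliberately left out.

/* Extent tests that can prove "A does not obscure B" cheaply.         */
#include <stdio.h>

struct Extent { double xmin, xmax, ymin, ymax, zmin, zmax; };

/* Returns 1 as soon as one of the cheap tests proves
   "A does not obscure B"; 0 means the result is inconclusive.         */
int cannot_obscure(const struct Extent *a, const struct Extent *b)
{
    if (b->zmax < a->zmin) return 1;            /* (a) B wholly nearer   */
    if (a->xmax < b->xmin || b->xmax < a->xmin) return 1;   /* (b),(e) x */
    if (a->ymax < b->ymin || b->ymax < a->ymin) return 1;   /* (b),(e) y */
    return 0;   /* inconclusive: the costlier tests (c), (d) are needed */
}

int main(void)
{
    struct Extent a = { 0, 4, 0, 4, 10, 20 };
    struct Extent b = { 6, 9, 0, 4, 12, 18 };   /* overlaps in depth only */
    printf("A cannot obscure B? %s\n", cannot_obscure(&a, &b) ? "yes" : "no");
    return 0;
}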
Review questions:
1. State the painter's algorithm in 2-3 lines.
2. What is the main difficulty of the scan line algorithm?
3. What is the concept of overlap testing?
4. If two or more objects fail the overlap tests, does it mean that they
always obscure each other at least in some regions?
Answers:
1. Start painting from the object that is farthest from the viewer. As
and when new objects are painted, the earlier objects that are obscured by the
nearer objects automatically get removed- either in full or in those regions
where they are invisible.
2. It is computationally intensive.
3. Two objects can obscure each other only if the Zmax of one is greater than
the Zmax of the other. Even then, they overlap only if they overlap in either
the x or the y extents, i.e. the maximum y of one is greater than the minimum y
of the other, or the maximum x of one is greater than the minimum x of the
other, and this holds for both the objects.
4. No. It depends on their actual shapes and placements.