CG Notes
Introduction
Computer graphics is a field concerned with the generation of graphics using computers. It includes the
creation, storage, and manipulation of images of objects. These objects come from diverse fields such
as physics, mathematics, engineering, architecture, abstract structures, and natural phenomena.
Computer graphics today is largely interactive; that is, the user controls the contents, structure, and
appearance of images of objects by using input devices such as a keyboard, mouse, or touch-sensitive
panel on the screen.
Until the early 1980s computer graphics was a small specialized field, largely because the hardware
was expensive and graphics-based application programs that were easy to use and cost effective were
few. Then personal computers with built-in raster graphics displays, such as the Xerox Star, Apple
Macintosh, and the IBM PC, popularized the use of bitmap graphics for user-computer interaction. A
bitmap is a ones-and-zeros representation of the rectangular array of points on the screen. Each point is
called a pixel, short for "picture element". Once bitmap graphics became affordable, an explosion
of easy-to-use and inexpensive graphics-based user interfaces allowed millions of new users to control
simple low-cost application programs such as word processors, spreadsheets, and drawing programs.
The concept of a "desktop" then became a popular metaphor for organizing screen space. By means of a
window manager the user could create, position, and resize rectangular screen areas called windows.
This allowed the user to switch among multiple activities just by pointing and clicking at the desired
window, typically with a mouse. Besides windows, icons, which represent data files, application
programs, file cabinets, mailboxes, printers, the recycle bin, and so on, made user-computer interaction
more effective. By pointing and clicking the icons, users could activate the corresponding programs or
objects, which replaced much of the typing of commands used in earlier operating systems and
computer applications. Today almost all interactive application programs, even those for manipulating
text (e.g. word processors) or numerical data (e.g. spreadsheet programs), use graphics extensively in the
user interface and for visualizing and manipulating the application-specific objects.
Even people who do not use computers encounter computer graphics in TV commercials and as
cinematic special effects. Thus computer graphics is an integral part of all computer user interfaces,
and is indispensable for visualizing 2D and 3D objects in almost all areas such as education, science, and engineering.
At the same time, it was becoming clear to computer, automobile, and aerospace manufacturers that
computer aided design (CAD) and computer aided manufacturing (CAM) activities had enormous potential for automating
drafting and other drawing-intensive activities. The General Motors DAC system for automobile design
and the Itek Digitek system for lens design were pioneering efforts that showed the utility of graphical
interaction in the iterative design cycles common in engineering. By the mid 1960s, a number of
commercial products using these systems had appeared.
At that time only the most technology-intensive organizations could use interactive computer
graphics, whereas others used punch cards, a non-interactive system.
Among the reasons for this were these:
The high cost of graphics hardware: at a time when automobiles cost a few thousand
dollars, computers cost several million dollars, and the first commercial computer
displays cost more than a hundred thousand dollars.
The need for large-scale, expensive computing resources to support massive design databases.
The difficulty of writing large interactive programs using batch-oriented FORTRAN
programming.
One-of-a-kind, non-portable software, typically written for a particular manufacturer's
display devices. When software is non-portable, moving to new display devices necessitates
expensive and time-consuming rewriting of working programs.
1. User interfaces
Most applications have user interfaces that rely on desktop window systems to manage multiple
simultaneous activities and on point-and-click facilities to allow users to select menu items, icons,
and objects on the screen. These activities fall under computer graphics. Typing is necessary only to
input text to be stored and manipulated. Word processing, spreadsheet, and desktop publishing
programs are typical examples where these user interface techniques are implemented.
2. Plotting
Plotting 2D and 3D graphs of mathematical, physical, and economic functions uses computer graphics
extensively. Histograms, bar and pie charts, and task scheduling charts are the most commonly
used plots. These are all used to present meaningfully and concisely the trends and patterns of
complex data.
6. Simulation
Simulation is the imitation of conditions like those encountered in real life. Simulation
thus helps one learn or feel the conditions one might have to face in the near future without being in
danger at the beginning of the course. For example, astronauts can experience the feeling of
weightlessness in a simulator; similarly, pilot training can be conducted in a flight simulator. The
military tank simulator, the naval simulator, the driving simulator, the air traffic control simulator, the
heavy-duty vehicle simulator, and so on are some of the most widely used simulators in practice.
Simulators are also used to optimize a system, for example a vehicle, by observing the reactions of the
driver during operation of the simulator.
7. Entertainment
Disney movies such as The Lion King and Beauty and the Beast, and other science fiction movies like Star
Trek, are good examples of the application of computer graphics in the field of entertainment.
Instead of drawing all the frames with slightly changing scenes for the production of a
cartoon film, only the key frames need to be drawn; the in-between frames are interpolated by the
graphics system, dramatically decreasing the cost of production while maintaining the quality.
Computer and video games such as FIFA, Formula 1, Doom, and Pool are a few to name where
computer graphics is used extensively.
9. Cartography
Cartography is the subject which deals with the making of maps and charts. Computer graphics is
used to produce both accurate and schematic representations of geographical and other natural
phenomena from measurement data. Examples include geographic maps, oceanographic charts,
weather maps, contour maps, and population density maps.
Surfer is one such graphics package which is extensively used for cartography.
Besides these, the field of computer graphics has found its usage in diverse fields
like medicine, tourism, etc., where computers are used for storing and processing data. It is
also widely used for educational purposes, for creating public and social awareness, and for
activities related to national development. This is because information portrayed visually is more
expressive and precise.
A picture speaks a thousand words.
2. Suppose we have a video monitor with a display area that measures 12 inches across and 9.6 inches high. If
the resolution is 1280 by 1024, what are the aspect ratio and the diameter of each pixel?
3. How much time is spent scanning across each row of pixels during screen refresh on a raster system with a
resolution of 1280 x 1024 and a refresh rate of 60 frames per second?
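A possible worked solution (assuming square pixels and, for question 3, ignoring retrace time):
2. Aspect ratio = 12 / 9.6 = 1.25, i.e. 5 : 4 (which matches 1280 : 1024). Pixel diameter = 12 / 1280 = 9.6 / 1024 = 0.009375 inch.
3. Time per frame = 1 / 60 sec ≈ 16.67 msec; with 1024 rows per frame, the time spent scanning one row ≈ 16.67 msec / 1024 ≈ 16.3 microseconds.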
Terminologies
Pixel
The smallest number of phosphor dots that the electron gun can focus on is called a pixel; the term
comes from "picture element". Each pixel has a unique address, which the computer
uses to locate the pixel and control its appearance. Some electron guns can focus on pixels as
small as a single phosphor dot.
Fluorescence / Phosphorescence
When the electron beam strikes the phosphor-coated screen of the CRT, the individual electrons
are moving with kinetic energy proportional to the acceleration voltage.
Some of this energy is dissipated as heat, but the rest is transferred to the electrons of the
phosphor atoms, making them jump to higher quantum energy levels.
In returning to their previous quantum levels, these excited electrons give up their extra energy
in the form of light, at frequencies (that is, colors) predicted by quantum theory.
Any given phosphor has several different quantum levels to which electrons can be excited, each
corresponding to a color associated with the return to an unexcited state.
Further, electrons on some levels are less stable and return to the unexcited state more rapidly
than others.
A phosphor's fluorescence is the light emitted as these very unstable electrons lose their excess
energy while the phosphor is being struck by electrons.
Phosphorescence is the light given off by the return of the relatively more stable excited
electrons to their unexcited state once the electron beam excitation is removed.
Since fluorescence usually lasts just a fraction of a microsecond, most of the light emitted by a
given phosphor is phosphorescence.
Persistence
A phosphor's persistence is defined as the time from the removal of excitation to the moment
when phosphorescence has decayed to 10 percent of the initial light output.
The range of persistence of different phosphors can reach many seconds.
The phosphors used for graphics display devices usually have a persistence of 10 to 60
microseconds.
A phosphor with low persistence is useful for animation, and a high-persistence phosphor is
useful for highly complex static pictures.
Refresh rate
The refresh rate is the number of times per second the image is redrawn to give a feeling of un-
flickering pictures, and it is usually 50 per second.
As the refresh rate decreases, flicker develops because the eye can no longer integrate the
individual light impulses coming from a pixel.
The refresh rate above which a picture stops flickering and fuses into a steady image is called the
critical fusion frequency (CFF).
The factors affecting the CFF are:
i. Persistence: the longer the persistence, the lower the CFF, but the relation between the CFF
and persistence is nonlinear.
ii. Image intensity: increasing the image intensity increases the CFF, with a nonlinear
relationship.
iii. Ambient room light: decreasing the ambient room light increases the CFF, with a nonlinear
relationship.
iv. Wavelengths of the emitted light.
v. The observer.
Resolution
Resolution is defined as the maximum number of points that can be displayed horizontally and
vertically without overlap on a display device.
A monitor's resolution is determined by the number of pixels on the screen, expressed as a
matrix. The more pixels a monitor can display, the higher its resolution and the clearer its
images appear. For example, a resolution of 640 x 480 means that there are 640 pixels
horizontally across the screen and 480 pixels vertically down the screen. The actual resolution
is determined by the video controller, not by the monitor itself; most monitors can operate at
several different resolutions, e.g. 800 x 600, 1024 x 768, 1152 x 864, 1280 x 1024. As the
resolution increases, the image on the screen gets smaller.
Factors affecting the resolution are as follows:
i. Spot profile: the spot intensity has a Gaussian distribution, as depicted in the figure. So two
adjacent spots on the display device appear distinct as long as their separation D2 is
greater than the diameter of the spot D1, measured at the points where each spot has an intensity
of about 60 percent of that at the center of the spot.
Types of Displays
Display Technology
i. Vector Display Technology
Vector display technology was developed in the 1960s and was used as a common display device until
the 1980s. It is also called a random scan, stroke, line drawing, or calligraphic display.
[Figure: vector display architecture. The CPU, system memory, and display processor share a system bus; the display list in system memory holds commands such as MoveTo(10,10) and LineTo(639,479).]
It consists of a central processing unit, a display processor, a monitor, system memory, and
peripheral devices such as a mouse and keyboard.
A display processor is also called a display processing unit (DPU) or graphics controller.
The application program and the graphics subroutine package both reside in the system memory
and execute on the CPU. The graphics subroutine package creates a display list and stores it in the
system memory.
A display list contains point and line plotting commands with end-point coordinates, as well as
character plotting commands.
The DPU interprets the commands in the display list and plots the respective output primitives
such as points, lines, and characters.
As a matter of fact, the DPU sends digital point coordinates to a vector generator that converts
the digital coordinate values to analog voltages for circuits that deflect the electron beam
hitting the CRT's phosphor coating.
Therefore the beam is deflected from endpoint to endpoint as dictated by the arbitrary order of
the commands in the display list, hence the name random scan display. Since the light output
of the phosphor decays in tens or at most hundreds of microseconds, the DPU must cycle through
the display list to refresh the image around 50 times per second to avoid flicker. The portion of
the system memory where the display list resides is called a refresh buffer.
This display technology is used with monochromatic CRTs or beam penetration color CRTs.
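As a rough illustration, the display list and the DPU's refresh cycle described above might be organized as in the following C sketch; the command and helper names (DisplayCommand, deflect_beam_to, draw_vector_to, draw_character) are hypothetical, not taken from any particular system.

typedef enum { MOVE_TO, LINE_TO, CHAR_AT } OpCode;

typedef struct {
    OpCode op;
    int x, y;        /* end-point coordinates                */
    char ch;         /* used only by CHAR_AT commands        */
} DisplayCommand;

/* Cycled through ~50 times per second to refresh the picture;
   deflect_beam_to() and draw_vector_to() stand in for the analog
   vector generator driving the electron beam. */
void refreshCycle(const DisplayCommand list[], int n)
{
    int i;
    for (i = 0; i < n; i++) {
        switch (list[i].op) {
            case MOVE_TO: deflect_beam_to(list[i].x, list[i].y);            break;
            case LINE_TO: draw_vector_to(list[i].x, list[i].y);             break;
            case CHAR_AT: draw_character(list[i].x, list[i].y, list[i].ch); break;
        }
    }
}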
Advantages:
i. It can produce smooth output primitives with higher resolution, unlike the raster display
technology.
ii. It is better than raster display for real-time dynamics such as animation.
[Figure: raster display architecture. The CPU, frame buffer, and video controller share a system bus; the frame buffer holds a bitmap (here a diagonal line stored as 1s in a grid of 0s) that is scanned out to the monitor.]
The quality of the images that a monitor can display is defined by the video card (also called the
video controller or the video adapter) and the monitor. The video controller is an intermediary
device between the CPU and the monitor. It contains the video-dedicated memory and other
circuitry necessary to send information to the monitor for display on the screen.
In most computers, the video card is a separate device that is plugged into the motherboard. In
many newer computers, the video circuitry is built directly into the motherboard, eliminating
the need for a separate card.
The screen changes constantly as a user works—the screen is updated many times each second,
whether anything on the screen actually changes or not.
Frame Buffer
In general, the size of the frame buffer depends upon the total number of bits assigned per pixel
and the total resolution of the screen
Frame Buffer size = total resolution * bits assigned per pixel
So if the total resolution of the screen is 640 * 480 and 8 bits are assigned per pixel then the
total size of the frame buffer will be 640 * 480 * 8
The total number of intensities that can be produced out of a single pixel on the screen depends
upon the total number of bits assigned for that pixel
Total number of intensities that can be produced from a single pixel = 2^(number of bits assigned per pixel)
So a 24-bit video card has the ability to produce 16 million different intensities from a single
pixel on the screen, as 2^24 = 16777216.
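A small C sketch of the two formulas above, using illustrative values (640 x 480 at 8 bits per pixel):

#include <stdio.h>

int main(void)
{
    long width = 640, height = 480, bitsPerPixel = 8;

    long sizeBits    = width * height * bitsPerPixel;  /* frame buffer size in bits */
    long sizeBytes   = sizeBits / 8;                   /* = 307200 bytes            */
    long intensities = 1L << bitsPerPixel;             /* 2^bits = 256 intensities  */

    printf("Frame buffer: %ld bytes, %ld intensities per pixel\n",
           sizeBytes, intensities);
    return 0;
}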
[Figure: a raster display with a color look-up table. Pixel values read from the frame buffer index the look-up registers, whose R, G, and B entries drive three DACs that control the electron guns writing to the screen.]
Raster Images: A raster image is a 2-dimensional array of square (or generally rectangular) cells
called pixels (short for "picture elements"). Such images are sometimes called pixel maps.
The simplest example is an image made up of black and white pixels, each represented by a single bit (0 for
black and 1 for white). This is called a bitmap. For gray-scale (or monochrome) raster images,
each pixel is represented by assigning it a numerical value over some range (e.g., from 0 to 255, ranging from
black to white). There are many possible ways of encoding color images.
Graphics Devices: The standard interactive graphics device is called a raster display. As with a television, the
display consists of a two-dimensional array of pixels. There are two common types of raster displays.
Video displays: consist of a screen with a phosphor coating, that allows each pixel to be illuminated
momentarily when struck by an electron beam. A pixel is either illuminated (white) or not (black). The level of
intensity can be varied to achieve arbitrary gray values. Because the phosphor only holds its color briefly, the
image is repeatedly rescanned, at a rate of at least 30 times per second.
Liquid crystal displays (LCD’s): use an electronic field to alter polarization of crystalline molecules in each
pixel. The light shining through the pixel is already polarized in some direction. By changing the polarization of
the pixel, it is possible to vary the amount of light which shines through, thus controlling its intensity.
Irrespective of the display hardware, the computer program stores the image in a two-dimensional array
of pixel values in RAM (called a frame buffer). The display hardware produces the image line by line (in
raster lines). A hardware device called a video controller constantly reads the frame buffer and produces the
image on the display. The frame buffer is not a device. It is simply a chunk of RAM memory that has been
allocated for this purpose. A program modifies the display by writing into the frame buffer, and thus instantly
altering the image that is displayed. An example of this type of configuration is shown below.
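A minimal sketch of this idea: a program changes the picture simply by writing pixel values into the frame buffer array in RAM, which the video controller then scans out to the screen. The array size and the 8-bit pixel type here are assumptions for illustration.

#define WIDTH  640
#define HEIGHT 480

/* the frame buffer: one byte (8 bits) per pixel */
unsigned char frameBuffer[HEIGHT][WIDTH];

/* writing into the array is all a program has to do; the change appears
   the next time the video controller scans this memory out */
void setPixel(int x, int y, unsigned char value)
{
    if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
        frameBuffer[y][x] = value;
}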
More sophisticated graphics systems, which are becoming increasingly common these days, achieve great speed
by providing separate hardware support, in the form of a display processor (more commonly known as a
graphics accelerator or graphics card to PC users). This relieves the computer’s main processor from much of
the mundane repetitive effort involved in maintaining the frame buffer. A typical display processor will provide
assistance for a number of operations including the following:
Transformations: Rotations and scalings used for moving objects and the viewer’s location.
Clipping: Removing elements that lie outside the viewing window.
Projection: Applying the appropriate perspective transformations.
Shading and Coloring: The color of a pixel may be altered by increasing its brightness. Simple shading
involves smooth blending between some given values. Modern graphics cards support more complex procedural
shading.
Texturing: Coloring objects by “painting” textures onto their surface. Textures may be generated by images or
by procedures.
Hidden-surface elimination: Determines which of the various objects that project to the same pixel is closest
to the viewer and hence is displayed.
Color: The method chosen for representing color depends on the characteristics of the graphics output device
(e.g.,whether it is additive as are video displays or subtractive as are printers). It also depends on the number of
bits per pixel that are provided, called the pixel depth. For example, the most common method currently used in video
and color LCD displays is a 24-bit RGB representation. Each pixel is represented as a mixture of red, green and
blue components, and each of these three colors is represented as an 8-bit quantity (0 for black and 255 for the
brightest color).
In many graphics systems it is common to add a fourth component, sometimes called alpha, denoted A. This
component is used to achieve various special effects, most commonly in describing how opaque a color is. In
some instances 24 bits may be unacceptably large. For example, when downloading images from the web, 24
bits of information for each pixel may be more than what is needed. A common alternative is to use a color
map, also called a color look-up table (LUT). (This is the method used in most gif files, for example.) In a
typical instance, each pixel is represented by an 8-bit quantity in the range from 0 to 255. This number is an
index into a 256-element array, each of whose entries is a 24-bit RGB value. To represent the image, we store
both the LUT and the image itself. The 256 different colors are usually chosen so as to produce the best possible
reproduction of the image. For example, if the image is mostly blue and red, the LUT will contain many more
blue and red shades than others.
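A small C sketch of the look-up-table idea: each 8-bit pixel value is an index into a 256-entry table of 24-bit RGB values (the names and sizes here are illustrative):

typedef struct { unsigned char r, g, b; } RGB;   /* one 24-bit table entry */

RGB lut[256];                    /* the color look-up table (LUT)          */
unsigned char image[480][640];   /* 8 bits per pixel: indices 0..255       */

/* the displayed color of a pixel is found by indexing the LUT */
RGB pixelColor(int x, int y)
{
    return lut[image[y][x]];
}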
A typical photorealistic image contains many more than 256 colors. This can be overcome by a fair amount of
clever trickery to fool the eye into seeing many shades of colors where only a small number of distinct colors
exist. This process is called digital halftoning. Colors are approximated by putting combinations of similar
colors in the same area. The human eye averages them out.
Computer Graphics
Assignment 01
2-5 Suppose an RGB raster system is to be designed using an 8-inch by 10-inch screen
with a resolution of 100 pixels per inch in each direction. If we want to store 6 bits per
pixel in the frame buffer, how much storage (in bytes) do we need for the frame
buffer?
The size of the frame buffer is (8 x 10 x 100 x 100 x 6) / 8 = 600000 bytes
2-6 How long would it take to load a 640 by 480 frame buffer with 12 bits per pixel, if
10^5 bits can be transferred per second? How long would it take to load a 24-bit per
pixel frame buffer with a resolution of 1280 by 1024 using this same transfer rate?
Total number of bits for the frame = 640 x 480 x 12 bits = 3686400 bits
The time needed to load the frame buffer = 3686400 / 10^5 sec = 36.864 sec
Total number of bits for the frame = 1280 x 1024 x 24 bits = 31457280 bits
The time needed to load the frame buffer = 31457280 / 10^5 sec = 314.5728 sec
2-8 Consider two raster systems with resolutions of 640 by 480 and 1280 by 1024.
How many pixels could be accessed per second in each of these systems by a display
controller that refreshes the screen at a rate of 60 frames per second? What is the
access time per pixel in each system?
For the 640 by 480 system, 640 x 480 x 60 = 18432000 pixels are accessed per second, so the access time per pixel is 1 / (640 x 480 x 60) sec ≈ 54 nanoseconds.
For the 1280 by 1024 system, 1280 x 1024 x 60 = 78643200 pixels are accessed per second, so the access time per pixel is 1 / (1280 x 1024 x 60) sec ≈ 12.7 nanoseconds.
2-12 What is the fraction of the total refresh time per frame spent in retrace of the
electron beam for a noninterlaced raster system with a resolution of 1280 by 1024, a
refresh rate of 60 Hz, a horizontal retrace time of 5 microseconds, and a vertical
retrace time of 500 microseconds?
1 sec = 10^6 usec
Refresh rate = 60 Hz, so the time to scan one frame = 1/60 sec = 16.7 msec
The time for horizontal retrace = 1024 x 5 usec = 5120 usec
The time for vertical retrace = 500 usec
Total time spent for retrace = 5120 + 500 = 5620 usec = 5.62 msec
The fraction of the total refresh time per frame spent in retrace = 5.62 / 16.7 = 0.337
2-13 Assuming that a certain full-color (24-bit per pixel) RGB raster system has a
512-by-512 frame buffer, how many distinct color choices (intensity levels) would we
have available? How many different colors could we display at any one time?
Total number of distinct colors (intensity levels) available is 2^24
Total number of colors we could display at any one time is 512 x 512 = 262144
2. Assuming that a certain RGB raster system has a 512 x 512 frame buffer with 12
bits per pixel and a color lookup table with 24 bits for each entry:
1. How many distinct color choices do we have available?
2. How many different colors could we display at any one time?
3. How much storage is spent altogether for the frame buffer and the color lookup table?
Total number of distinct colors available is 2^24
Total number of different colors that could be displayed at any one time is 2^12
The storage spent for the frame buffer is 512 x 512 x 12 bits = 3145728 bits
The storage spent for the color lookup table is 2^12 x 24 bits = 98304 bits
So the total storage spent altogether is 3145728 + 98304 = 3244032 bits
Hardware Concepts
Input devices
Tablet
A tablet is a digitizer. In general, a digitizer is a device which is used to scan over an
object and input a set of discrete coordinate positions.
These positions can then be joined with straight line segments to approximate the shape of the
original object.
A tablet digitizes an object by detecting the position of a movable stylus (a pencil-shaped device)
or a puck (a mouse-like device with cross hairs for sighting positions) held in the user's hand.
A tablet is a flat surface and its size varies from 6 by 6 inches up to 48 by 72 inches or more.
The accuracy of tablets is usually within 0.2 mm.
There are three types of tablets
i. Electrical Tablet
A grid of wires on ¼- to ½-inch centers is embedded in the tablet surface.
Electromagnetic signals generated by electrical pulses applied in sequence to the wires in the
grid induce an electrical signal in a wire coil in the stylus or puck.
The strength of the signal induced by each pulse is used to determine the position of the stylus.
The signal strength is also used to determine roughly how far the stylus is from the tablet.
When the stylus is within ½ inch of the tablet it is taken as "near"; otherwise it is "far"; when
pressed against the surface it is "touching".
When the stylus is near or touching, a cursor is usually shown on the display to provide visual
feedback to the user.
A signal is sent to the computer when the tip of the stylus is pressed against the tablet or when
any button on the puck is pressed.
The information provided by the tablet is repeated 30 to 60 times per second.
Touch Panels
The touch panel allows the user to point at the screen directly with a finger to move the cursor
around the screen or to select the icons.
i. Optical Touch Panel
It uses a series of infrared light emitting diodes (LED) along one vertical edge and along one
horizontal edge of the panel
The opposite vertical and horizontal edges contain photo detectors to form a grid of invisible
infrared light beams over the display area.
Touching the screen breaks one or two vertical and horizontal light beams, thereby indicating
the finger's position.
The cursor is then moved to this position, or the icon at this position is selected.
This is a low-resolution panel which offers 10 to 50 positions in each direction.
Stereoscopic views
- Another method for representing three-dimensional objects is by overlapping two images by
about 60%.
- It does not produce a true 3D image but provides a 3D effect by presenting a different view to
each eye of an observer, so that scenes appear to have depth.
- Two views of a scene are generated from a viewing direction corresponding to each eye.
- It is a component in virtual reality (a computer generated simulation of real or imagined
physical space, the ultimate multimedia experience) systems.
Scanner
- Converts any printed image of an object into electronic form by shining light onto the
image and sensing the intensity of the light's reflection at each point.
- Color scanners use filters to separate the components of color into the primary additive colors
(red, green, blue) at each point.
- R, G, B are the primary additive colors because they can be combined to create any other
color.
- Image scanners translate printed images into an electronic format that can be stored in a
computer's memory.
- Software is then used to manipulate the scanned electronic image.
- Images are enhanced or manipulated by graphics programs like Adobe Photoshop.
Impact printers
Impact printers press formed character faces against an inked ribbon onto the paper.
Character impact printers often have a dot-matrix print head containing a rectangular array of
protruding wire pins, with the number of pins depending on the quality of the printer.
Non-Impact Printers
Non-impact printers use laser techniques, ink-jet sprays, etc. to get images onto paper.
Ink-jet Devices
Ink-jet methods produce output by squirting ink in horizontal rows across a roll of paper
wrapped on a drum. When a heater is activated, a drop of ink is ejected onto the paper.
The print head contains an ink cartridge, which is made up of a number of ink-filled firing
chambers, each attached to a nozzle thinner than a human hair.
When an electric current is passed through a resistor, the resistor heats a thin layer of ink at the
bottom of the chamber, causing the ink to boil and form a vapor bubble that expands and pushes
ink through the nozzle to form a droplet at the tip of the nozzle.
The pressure of the vapor bubble forces the droplet onto the paper.
A color ink-jet printer employs four ink cartridges: one each for cyan, magenta, yellow, and
black.
Ink of the desired color can be placed at any desired point of the page in a single pass.
Laser Devices
These are page printers.
They use a laser beam to produce an image of the page, containing text and graphics, on a
photosensitive drum which is coated with a negatively charged photoconductive material.
In a laser device the laser beam creates a charge distribution on a rotating drum coated with a
photoconductive material such as selenium. Toner is applied to the drum and then transferred to paper.
Plotters
A plotter is a device that draws pictures on paper based on commands from a computer.
Plotters are used to produce precise, good-quality graphics and drawings under computer
control.
They use a motor-driven ink pen or ink jet to draw graphics or drawings.
Drawings can be prepared on paper, vellum, or Mylar (polyester film).
Drum plotters
A drum plotter contains a long cylinder and a pen carriage.
Paper is placed over the drum, and the drum rotates back and forth to give up-and-down
movement.
The pen is mounted horizontally on the carriage and moves left to right or right to left on the
paper to produce drawings.
The pen and drum both move under computer control to produce the desired drawing.
Several pens with different colored inks can be mounted on the carriage for multicolor drawing.
Inkjet plotters
Many plotters use ink jets in place of ink pens.
Liquid Crystal Display (LCD)
An LCD uses liquid crystals between two sheets of material to present information on a screen.
The LCD monitor creates images with a special kind of liquid crystal that is normally transparent
but becomes opaque when charged with electricity.
Depending on how much they twist, some light waves are passed through while other light
waves are blocked. This creates the variety of colors that appear on the screen.
Active Matrix Display
Active matrix displays use thin-film transistor (TFT) technology, which employs as many as
four transistors per pixel.
Passive Matrix Display
The passive matrix LCD relies on transistors for each row and each column of pixels, thus
creating a grid that defines the location of each pixel. The color displayed by a pixel is
determined by the electricity coming from the transistors
A passive-matrix display uses fewer transistors and requires less power than an active-matrix
display.
Users view images on a passive-matrix display best when working directly in front of it.
Another disadvantage is that passive-matrix displays do not refresh the pixels very quickly.
Most passive matrix screens now use dual-scan LCD technology, which scans the pixels twice
as often.
An important measure of LCD monitors is the response time, which is the time in milliseconds
(ms) that it takes to turn a pixel on or off.
Plasma Panel Display
The region between two glass plates is filled with a mixture of gases such as neon and xenon.
A series of vertical conducting ribbons is placed on one glass panel, and a set of horizontal
ribbons is built into the other glass panel.
Firing voltages applied to a pair of horizontal and vertical conductors cause the gas at the intersection of
the two conductors to break down into a glowing plasma of electrons and ions (releasing ultraviolet (UV)
light) to form an image.
By controlling the amount of voltage applied at various points on the grid, each point
(intersection of conductors) acts as a pixel to display an image.
The picture definition is stored in a refresh buffer, and firing voltages are applied to refresh pixel
positions 60 times per second.
Plasma panels offer larger screen sizes and higher display quality than LCDs, but are much more expensive.
Scanners
# Convert any printed image of an object into electronic form by shining light onto the
image and sensing the intensity of the light's reflection at each point.
# Color scanners use filters to separate components of color into primary additive colors
(red, green, blue) at each point.
# R G B are primary additive colors because they can be combined to create any other
color.
# Image scanners translate printed images into electronic format that can be stored into a
computer memory.
# When a scanner first creates an image from a page, the image is stored in the computer's
memory as a bitmap.
# OCR software translates the array of dots into text that the computer can interpret as
number and letters by looking at each character and trying to match the character with
its own assumption about how the image should look like.
Algorithm
For |m| <= 1
i. Read the end points (xa, ya) and (xb, yb). (Assume -1 <= m <= 1.)
ii. Load (x0, y0) into the frame buffer (i.e. plot the first point).
iii. Calculate the constants Δy, Δx, 2Δy and 2Δy - 2Δx, and obtain the first decision parameter
p0 = 2Δy - Δx
iv. At each xk along the line, starting at k = 0, perform the following tests:
If pk < 0 then the next point to plot is (xk + 1, yk) and
pk+1 = pk + 2Δy
else the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2Δy - 2Δx
v. Repeat step iv Δx times
Here first we initialize the decision parameter and set the first pixel. Next, during each iteration, we
increment ‘x’ to the next horizontal position, then use the current value of the decision parameter to
select the bottom or top pixel (increment y) and update the decision parameter and at the end set the
chosen pixel.
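A minimal C sketch of these steps for lines with slope between 0 and 1; setpixel() stands for the frame-buffer write routine assumed elsewhere in these notes.

#include <stdlib.h>

extern void setpixel(int x, int y);   /* assumed frame-buffer write routine */

void bresenhamLine(int xa, int ya, int xb, int yb)
{
    int dx = abs(xb - xa), dy = abs(yb - ya);
    int p = 2 * dy - dx;                       /* p0 = 2*dy - dx             */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y, xEnd;

    /* start from the left end point */
    if (xa > xb) { x = xb; y = yb; xEnd = xa; }
    else         { x = xa; y = ya; xEnd = xb; }

    setpixel(x, y);
    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                        /* stay on the same scan line */
        else {
            y++;
            p += twoDyMinusDx;                 /* step up one scan line      */
        }
        setpixel(x, y);
    }
}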
Advantages
It is a fast incremental algorithm that uses integer arithmetic only and avoids
floating-point computations.
It therefore uses fast operations such as addition/subtraction and bit shifting.
CPU-intensive rounding-off operations are avoided.
Disadvantages
It is used for drawing basic lines; antialiasing is not a part of this algorithm, so it is not suitable
for drawing smooth lines.
Bresenham's Line Drawing Algorithm for Lines with Negative Slope and
Magnitude of the Slope Greater than One (e.g. from (-2,-6) to (-4,-9))
For a line with negative slope and magnitude of the slope greater than or equal to one, the pixel positions
are determined by sampling at unit 'y' intervals, i.e. yk+1 = yk - 1, with the starting pixel at (x0, y0) taken
from the right-hand end point.
For any kth step, assuming the position (xk, yk) has been selected at the previous step, we determine the next
position (xk+1, yk+1) as either (xk, yk - 1) or (xk - 1, yk - 1).
At yk - 1, label the horizontal pixel separations from the ideal line path as d1 and d2; the 'x' coordinate
on the ideal line at yk - 1 is x = ((yk - 1) - c)/m.
The distance of the right pixel from the ideal location is d1 = x - xk = ((yk - 1) - c)/m - xk
The distance of the ideal location from the left pixel is d2 = xk - 1 - x = xk - 1 - ((yk - 1) - c)/m
Thus the difference between the separations of the two pixel positions from the actual line path is
d1 - d2 = 2((yk - 1) - c)/m - 2xk + 1
Substituting m = Δy/Δx and multiplying through by Δy, we get
Δy(d1 - d2) = 2Δx·yk - 2Δy·xk + Δy - 2Δx - 2cΔx
Pk = Δy(d1 - d2) = 2Δx·yk - 2Δy·xk + b .... ( i )   where b = Δy - 2Δx - 2cΔx and Pk is the
decision parameter at the kth step
At the (k+1)th step,
Pk+1 = 2Δx·yk+1 - 2Δy·xk+1 + b .... ( ii )
Now subtracting ( i ) from ( ii ),
Pk+1 = Pk + 2Δx. (yk+1 – yk ) - 2Δy. (xk+1 – xk )
Since the slope of the line is greater than one, we sample in decreasing ‘y’ direction i.e. yk+1 – yk = -1 so,
Pk+1 = Pk - 2Δx – 2Δy (xk+1 – xk ) …. ( iii )
Case 1:
If Pk < 0 then the pixel at 'xk' on scan line yk - 1 is closer to the line path, so xk+1 = xk,
and from equation (iii):
Pk+1 = Pk - 2Δx
Case 2:
If Pk >= 0 then the pixel at 'xk - 1' on scan line yk - 1 is closer to the line path, so xk+1 = xk - 1,
and from equation (iii):
Pk+1 = Pk - 2Δx + 2Δy
1. Input the radius r and circle center (xc, yc), and obtain the first point on the circumference of a
circle centered on the origin as (x0, y0) = (0, r)
2. Calculate the initial value of the decision parameter as
P0 = 5/4 – r
3. At each xk position, starting at k = 0, perform the following test:
If Pk < 0, the next point along the circle centered on (0,0) is (xk+1, yk) and Pk+1 = Pk + 2xk+1 + 1
Otherwise, the next point along the circle is (xk + 1, yk - 1) and Pk+1 = Pk + 2xk+1 - 2yk+1+ 1
where 2x k+1 = 2xk + 2 and 2yk+1 = 2yk - 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the
coordinate values:
x = x + xc, y = y + yc
6. Repeat steps 3 through 5 until x >= y.
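A minimal C sketch of the midpoint circle algorithm above, using the integer form p0 = 1 - r of the initial decision parameter; setpixel() is the assumed frame-buffer write routine.

extern void setpixel(int x, int y);   /* assumed frame-buffer write routine */

/* plot the symmetric points in all eight octants */
static void circlePlotPoints(int xc, int yc, int x, int y)
{
    setpixel(xc + x, yc + y);  setpixel(xc - x, yc + y);
    setpixel(xc + x, yc - y);  setpixel(xc - x, yc - y);
    setpixel(xc + y, yc + x);  setpixel(xc - y, yc + x);
    setpixel(xc + y, yc - x);  setpixel(xc - y, yc - x);
}

void midpointCircle(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                    /* integer form of p0 = 5/4 - r        */

    circlePlotPoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;           /* pk+1 = pk + 2*xk+1 + 1              */
        else {
            y--;
            p += 2 * x + 1 - 2 * y;   /* pk+1 = pk + 2*xk+1 - 2*yk+1 + 1     */
        }
        circlePlotPoints(xc, yc, x, y);
    }
}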
Line Drawing
Point plotting is accomplished by converting a single coordinate position supplied by an application program into
the appropriate operation for the output device in use.
The CRT electron beam is turned on to illuminate the screen phosphor at the selected location.
In a random scan system, point plotting commands are stored in the display list, and the coordinate values in these
instructions are converted to deflection voltages that position the electron beam at the screen location to be
plotted during each refresh cycle.
In the case of a black and white raster scan system, a point is plotted by setting the bit value corresponding to the
specified screen position within the frame buffer to 1.
For drawing lines, we need to calculate intermediate positions along the line path between
two end points and round them to pixel positions, e.g. 10.45 is rounded off to 10 (this causes staircases or jaggies to be formed).
To load an intensity value into the frame buffer at position (x, y), use setpixel(x, y, intensity).
To retrieve the current frame buffer intensity value for a specified location, use getpixel(x, y).
iii. For Lines with slope < 1 (Moving from Right to Left)
Perform a unit decrement in the x direction, Δx = -1 (since |Δx| > |Δy|), i.e. xk+1 - xk = -1, and compute
successive y values as
yk+1 - yk = -m, or yk+1 = yk - m
The y value computed must be rounded off to the nearest whole number.
iv. For Lines with slope > 1 (Moving from Right to Left)
Perform a unit decrement in the y direction, Δy = -1 (since |Δy| > |Δx|), i.e. yk+1 - yk = -1, and compute
successive x values as
xk+1 - xk = -1/m, or xk+1 = xk - 1/m
The x value computed must be rounded off to the nearest whole number.
For Lines with Negative Slope
v. For Lines with |m| <= 1 (Moving from Left to Right)
Perform a unit increment in the x direction, Δx = 1 (since |Δx| > |Δy|), i.e. xk+1 - xk = 1, and compute
successive y values as
yk+1 - yk = m, or yk+1 = yk + m
The y value computed must be rounded off to the nearest whole number.
vi. For Lines with |m| > 1 (Moving from Left to Right)
Perform a unit decrement in the y direction, Δy = -1 (since |Δy| > |Δx|), i.e. yk+1 - yk = -1, and compute
successive x values as
xk+1 - xk = -1/m, or xk+1 = xk - 1/m
The x value computed must be rounded off to the nearest whole number.
vii. For Lines with |m| < 1 (Moving from Right to Left)
Perform a unit decrement in the x direction, Δx = -1 (since |Δx| > |Δy|), i.e. xk+1 - xk = -1, and compute
successive y values as
yk+1 - yk = -m, or yk+1 = yk - m
The y value computed must be rounded off to the nearest whole number.
viii. For Lines with |m| > 1 (Moving from Right to Left)
Perform a unit increment in the y direction, Δy = 1 (since |Δy| > |Δx|), i.e. yk+1 - yk = 1, and compute
successive x values as
xk+1 - xk = 1/m, or xk+1 = xk + 1/m
The x value computed must be rounded off to the nearest whole number.
This algorithm is based on floating point arithmetic so it is slower than Bresenham’s line drawing
algorithm for drawing lines as Bresenham’s line drawing algorithm is based on integer arithmetic
approach.
Algorithm:
void lineDDA (int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya, steps, k;
    float xIncrement, yIncrement, x = xa, y = ya;

    /* take the larger of |dx| and |dy| as the number of steps */
    if (abs (dx) > abs (dy)) steps = abs (dx);
    else steps = abs (dy);

    xIncrement = dx / (float) steps;
    yIncrement = dy / (float) steps;

    setpixel (ROUND(x), ROUND(y));      /* ROUND rounds to the nearest integer */
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setpixel (ROUND(x), ROUND(y));
    }
}
Ellipse
Definition: An ellipse is defined as the set of points such that the sum of the distances from two fixed
points (the foci) is the same for all points. If the distances to the two fixed points from any point P(x, y) on the ellipse
are d1 and d2, then the general equation of an ellipse is d1 + d2 = constant.
Expressing the distances d1 and d2 in terms of the focal coordinates F1 = (x1, y1) and F2 = (x2, y2), we have
√[(x - x1)² + (y - y1)²] + √[(x - x2)² + (y - y2)²] = constant
The midpoint ellipse method is applied throughout the first quadrant in two parts (regions), according to the slope
of the ellipse.
[Figure: a point p(x, y) on the ellipse at distances d1 and d2 from the foci F1 and F2.]
The equation of an ellipse centered at the origin with semi-axes rx and ry is given by
x²/rx² + y²/ry² = 1
or Fellipse(x, y) = ry²x² + rx²y² - rx²ry²
Case 1:
If P1k < 0 then the midpoint is inside the ellipse, so the pixel on scan line 'yk' is closer to the ellipse
boundary and yk+1 = yk.
The increment will be 2ry²xk+1 + ry², i.e. from equation (iii),
P1k+1 = P1k + 2ry²xk+1 + ry² ---------- (a)
where xk+1 = xk + 1, or 2ry²xk+1 = 2ry²xk + 2ry²
Case 2:
If P1k >= 0 then the midpoint is outside or on the boundary of the ellipse, so we select the pixel on
scan line 'yk - 1' and yk+1 = yk - 1.
The increment will be 2ry²xk+1 - 2rx²yk+1, i.e. from equation (iii),
P1k+1 = P1k + 2ry²xk+1 - 2rx²yk+1 + ry² ---------- (b)
where 2rx²yk+1 = 2rx²yk - 2rx² or 2ry²xk+1 = 2ry²xk + 2ry²
Initial decision parameter for Region 1, P10:
The starting position is (0, ry).
The next pixel to plot is either (1, ry) or (1, ry - 1), so the midpoint coordinate position is (1, ry - ½).
Fellipse(1, ry - ½) = ry² + rx²(ry - ½)² - rx²ry²
Thus,
P10 = ry² + ¼rx² - rx²ry
Region 2:
Sample at unit steps in the 'y' direction; the midpoint is now taken between horizontal pixels at each step.
Assuming (xk, yk) has been plotted, the next pixel to plot is (xk+1, yk+1), where xk+1 is either xk or xk + 1
and yk+1 is yk - 1,
i.e. we choose either (xk, yk - 1) or (xk + 1, yk - 1).
So the midpoint coordinate position is (xk + ½, yk - 1).
Fellipse(xk + ½, yk - 1)
or P2k = ry²(xk + ½)² + rx²(yk - 1)² - rx²ry² ---------------------------------------- ( iv )
Now, at the next sampling position, the next pixel to plot will be either (xk+1, yk+1 - 1) or (xk+1 + 1, yk+1 - 1).
Thus,
Fellipse(xk+1 + ½, yk+1 - 1)
or P2k+1 = ry²(xk+1 + ½)² + rx²(yk+1 - 1)² - rx²ry²
= ry²(xk+1 + ½)² + rx²[(yk - 1) - 1]² - rx²ry² ---------------------------------------- ( v )
Subtracting eq (iv) from (v),
P2k+1 = P2k - 2rx²(yk - 1) + rx² + ry²[(xk+1 + ½)² - (xk + ½)²] ---------- ( vi )
where xk+1 is either xk or xk + 1 depending on the sign of P2k.
Case 1:
If P2k > 0 then the midpoint is outside the boundary of the ellipse, so we select the pixel at 'xk',
i.e. xk+1 = xk. From equation (vi),
P2k+1 = P2k - 2rx²(yk - 1) + rx²
= P2k - 2rx²yk+1 + rx² ---------- (c)
where yk+1 = yk - 1, or 2rx²yk+1 = 2rx²yk - 2rx²
Case 2:
If P2k <= 0 then the midpoint is inside or on the boundary of the ellipse, so we select the pixel at 'xk + 1',
i.e. xk+1 = xk + 1. From equation (vi),
P2k+1 = P2k - 2rx²(yk - 1) + rx² + ry²[(xk+1 + ½)² - (xk + ½)²]
= P2k - 2rx²(yk - 1) + rx² + ry²[((xk + 1) + ½)² - (xk + ½)²]
= P2k - 2rx²(yk - 1) + rx² + ry²[(xk + 3/2)² - (xk + ½)²]
= P2k - 2rx²(yk - 1) + rx² + ry²[xk² + 3xk + 9/4 - xk² - xk - 1/4]
= P2k - 2rx²(yk - 1) + rx² + ry²[2xk + 2]
= P2k - 2rx²yk+1 + rx² + 2ry²xk+1 ---------- ( d )
where 2rx²yk+1 = 2rx²yk - 2rx² or 2ry²xk+1 = 2ry²xk + 2ry²
For region 2, the initial position (x0, y0) is taken as the last position selected in region 1, and thus
the initial decision parameter in region 2 is
P20 = Fellipse(x0 + ½, y0 - 1)
= ry²(x0 + ½)² + rx²(y0 - 1)² - rx²ry²
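A possible C sketch of the complete midpoint ellipse algorithm built from the region 1 and region 2 cases derived above; setpixel() is the assumed frame-buffer write routine, and the decision parameters are kept in floating point for clarity.

extern void setpixel(int x, int y);   /* assumed frame-buffer write routine */

/* plot the symmetric points in all four quadrants */
static void ellipsePlotPoints(int xc, int yc, int x, int y)
{
    setpixel(xc + x, yc + y);  setpixel(xc - x, yc + y);
    setpixel(xc + x, yc - y);  setpixel(xc - x, yc - y);
}

void midpointEllipse(int xc, int yc, int rx, int ry)
{
    double rx2 = (double) rx * rx, ry2 = (double) ry * ry;
    int x = 0, y = ry;
    double px = 0;                    /* px = 2*ry^2*x */
    double py = 2 * rx2 * y;          /* py = 2*rx^2*y */
    double p;

    ellipsePlotPoints(xc, yc, x, y);

    /* Region 1: step in x while the slope magnitude is less than 1 */
    p = ry2 - rx2 * ry + 0.25 * rx2;              /* P1_0 = ry^2 + rx^2/4 - rx^2*ry */
    while (px < py) {
        x++;
        px += 2 * ry2;
        if (p < 0)
            p += ry2 + px;                        /* case (a) */
        else {
            y--;
            py -= 2 * rx2;
            p += ry2 + px - py;                   /* case (b) */
        }
        ellipsePlotPoints(xc, yc, x, y);
    }

    /* Region 2: step in y, starting from the last position of region 1 */
    p = ry2 * (x + 0.5) * (x + 0.5) + rx2 * (y - 1.0) * (y - 1.0) - rx2 * ry2;
    while (y > 0) {
        y--;
        py -= 2 * rx2;
        if (p > 0)
            p += rx2 - py;                        /* case (c) */
        else {
            x++;
            px += 2 * ry2;
            p += rx2 - py + px;                   /* case (d) */
        }
        ellipsePlotPoints(xc, yc, x, y);
    }
}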
FILLED-AREA PRIMITIVES
In some graphics packages objects can be filled with a solid color or a pattern; the standard output
primitive is the solid-color or patterned polygon area.
There are two basic approaches to area filling:
i. Scan-line fill: determine the overlap intervals for scan lines that cross the area. This is typically useful for filling polygons, circles, and ellipses.
ii. Seed fill: start from a given interior position and paint outwards from this point until we encounter the specified boundary conditions. This is useful for filling more complex boundaries and in interactive painting systems.
In scan-line fill we move along each scan line (from left to right) that intersects the primitive and fill in the pixels that lie inside,
setting each pixel lying on the scan line between the left edge and the right edge (each span from x min to x max) to the same pixel value.
for (y = ymin; y <= ymax; y++)          /* each scan line of the rectangle */
    for (x = xmin; x <= xmax; x++)      /* each pixel on the scan line     */
        writePixel(x, y, value);
For each scan line crossing a polygon, the area-fill algorithm locates the intersection points of the scan line with the
polygon edges.
These intersection points are then sorted from left to right, and the corresponding frame-buffer positions between each
intersection pair are set to the specified fill color.
For example, four pixel intersection positions with the polygon boundaries might define two stretches of interior pixels, from x = 10 to x = 14
and from x = 18 to x = 24.
For a scan line passing through a vertex with two edges intersecting at that position, adding two points to the list of
intersections for the scan line gives a solution.
Scan line y intersects five polygon edges. Scan line y', however, intersects an even number of edges although it also
passes through a vertex.
For scan line y, the two intersecting edges sharing a vertex are on opposite sides of the scan line. But for scan line y', the
two intersecting edges are both above the scan line
One way to resolve the question as to whether we should count a vertex as one intersection or two is to shorten some
polygon edges to split those vertices that should be counted as one intersection
Boundary-Fill Algorithm
Starts at a point inside a region and paints the interior outward toward the boundary.
If the boundary is specified in a single color, the fill algorithm proceeds outward pixel by pixel until the boundary color is
encountered.
The four-connected approach considers the neighboring pixels to be the ones that are to the
right, left, bottom, and top of the seed pixel.
Flood-Fill Algorithm
A flood fill is used for filling an area that is not defined within a single color boundary.
Such areas can be filled by replacing a specified interior color instead of searching for a boundary color value.
This approach does not require the boundary to be specified, but the interior pixels must be of the same color.
Algorithm:
Start from a specified interior point (x, y) and reassign all pixel values that are
currently set to a given interior color with the desired fill color.
If the area we want to paint has more than one interior color, first reassign pixel values
so that all interior points have the same color.
We can then use the 8-connected or 4-connected approach to move on until all interior points have been
repainted.
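A minimal recursive sketch of the 4-connected boundary fill described above; getpixel() and setpixel() are the frame-buffer access routines assumed earlier in these notes. A practical implementation would normally use an explicit stack, since deep recursion can overflow for large regions.

extern int  getpixel(int x, int y);               /* assumed frame-buffer read  */
extern void setpixel(int x, int y, int color);    /* assumed frame-buffer write */

void boundaryFill4(int x, int y, int fill, int boundary)
{
    int current = getpixel(x, y);
    if (current != boundary && current != fill) {
        setpixel(x, y, fill);
        boundaryFill4(x + 1, y, fill, boundary);  /* right  */
        boundaryFill4(x - 1, y, fill, boundary);  /* left   */
        boundaryFill4(x, y + 1, fill, boundary);  /* top    */
        boundaryFill4(x, y - 1, fill, boundary);  /* bottom */
    }
}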
A vector has a single direction and length and may be denoted by [Dx, Dy].
Vectors tell us how far and in what direction to move, but not where to start, e.g. a
command for a pen to move so far from its current position in a given direction.
The position of points is controlled by manipulating the matrix that defines the points.
A straight line is transformed by transforming its end points and then redrawing the line
between the transformed end points.
A curve is transformed by transforming its control points, such as its center point in the
case of a circle, and then redrawing the curve using the transformed control points.
Clipping in Raster World
Clipping is a procedure that identifies those portions of a picture that are either inside or outside a
specified region.
The region against which an object is to be clipped is called a clip window.
Depending on application it can be polygons or even curve surfaces.
Applications
i. Extracting parts of defined scene for viewing
ii. Identifying visible surfaces in three dimension views
iii. Drawing, painting operations that allow parts of a picture to be selected for copying, moving,
erasing or duplicating etc.
Clipping displays only those parts of a picture that are within the window area and discards everything outside the window.
World-coordinate clipping removes those primitives outside the window from further consideration, thus
eliminating the processing necessary to transfer those primitives to device space.
Viewport clipping requires the transformation to device coordinates to be performed for all objects, including
those outside the window area.
Point Clipping
Assuming the clip window is a rectangle, the lower left corner of the window is defined by (xwmin, ywmin)
and the upper right corner of the window is defined by (xwmax, ywmax). A point p = (x, y) is visible for
display if the following inequalities are satisfied:
xwmin ≤ x ≤ xwmax
ywmin ≤ y ≤ ywmax
Point clipping is applied to scenes involving explosions or sea foam that are modeled with points distributed in some
region of the scene.
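A one-line C test implementing the two inequalities above (the parameter names mirror the window limits used in these notes):

int clipPoint(float x, float y,
              float xwmin, float ywmin, float xwmax, float ywmax)
{
    /* the point is visible only if it lies inside the clip window */
    return (xwmin <= x && x <= xwmax) && (ywmin <= y && y <= ywmax);
}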
Cohen-Sutherland Line Clipping Algorithm
It is based on a coding scheme and makes clever use of bit operations to perform the inside/outside tests efficiently.
For each end point, a 4-bit binary region code is used:
The lowest-order bit (bit 1) is set to 1 if the end point is to the left of the window, otherwise it is set to 0.
Bit 2 is set to 1 if the end point is to the right of the window, otherwise it is set to 0.
Bit 3 is set to 1 if the end point is below the window, otherwise it is set to 0.
Bit 4 is set to 1 if the end point is above the window, otherwise it is set to 0.
By numbering the bit positions in the region code as 1 to 4 from right to left, the coordinate regions can be
correlated with the bit positions as
Bit 4    Bit 3    Bit 2    Bit 1
above    below    right    left
A value of 1 in any bit position indicates that the point is in that relative position; otherwise the bit position is
set to 0.
# if point is within clipping rectangle, region code is 0000.
# if point is below and to the left of rectangle then region code is 0101.
The region codes are shown
1001 1000 1010
0001 0000 0010
0101 0100 0110
Algorithm:
Step 1: Establish the region codes for all line end points:
Bit 1 is set to 1 if x < xwmin, otherwise it is set to 0.
Bit 2 is set to 1 if x > xwmax, otherwise it is set to 0.
Bit 3 is set to 1 if y < ywmin, otherwise it is set to 0.
Bit 4 is set to 1 if y > ywmax, otherwise it is set to 0.
Step 2: Determine which lines are completely inside the window and which are completely outside ,
using the following tests:
a. If both end points of the line have region codes 0000 the line is completely inside the window
b. If the logical AND operation of the region codes of the two end points is NOT 0000 then the
line is completely outside (same bit position of the two end points have 1)
Step 3: If both the tests in Step 2 fail, then the line is neither completely inside nor completely outside, so we
need to find its intersection with the boundaries of the window:
slope (m) = (y2 - y1)/(x2 - x1)
a. If bit 1 is 1 then the line intersects the left boundary and yi = y1 + m * (x - x1), where x = xwmin
b. If bit 2 is 1 then the line intersects the right boundary and yi = y1 + m * (x - x1), where x = xwmax
c. If bit 3 is 1 then the line intersects the bottom boundary and xi = x1 + (y - y1)/m, where y = ywmin
d. If bit 4 is 1 then the line intersects the upper boundary and xi = x1 + (y - y1)/m, where y = ywmax
Here, xi and yi are the x and y coordinates of the intersection points for that line.
Step 4: Repeat Steps 1 to 3 until the line is completely accepted or rejected.
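A small C sketch of Steps 1 and 2: computing the 4-bit region code for an end point and performing the trivial accept/reject tests. The names and bit masks are illustrative but follow the bit assignment above.

#define LEFT   1   /* bit 1 */
#define RIGHT  2   /* bit 2 */
#define BOTTOM 4   /* bit 3 */
#define TOP    8   /* bit 4 */

int regionCode(float x, float y,
               float xwmin, float ywmin, float xwmax, float ywmax)
{
    int code = 0;
    if (x < xwmin) code |= LEFT;
    if (x > xwmax) code |= RIGHT;
    if (y < ywmin) code |= BOTTOM;
    if (y > ywmax) code |= TOP;
    return code;
}

/* trivial accept: both codes are 0000 */
int triviallyAccepted(int code1, int code2) { return (code1 | code2) == 0; }

/* trivial reject: the logical AND of the two codes is not 0000 */
int triviallyRejected(int code1, int code2) { return (code1 & code2) != 0; }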
Homogenous coordinate
Consider the effect of a general 2 by 2 transformation applied to the origin: the origin is always mapped to
itself, so a 2 by 2 matrix alone cannot translate the origin (or any other point) in the 2-dimensional plane.
Translation can be accomplished by using homogeneous coordinates and a 3 by 3 matrix, so that
x* = ax + by + m
y* = cx + dy + n
We say that two sets of homogeneous coordinates (x, y, h) and (x*, y*, h*) represent the
same point if and only if one is a multiple of the other, i.e. (2, 3, 6) and (4, 6, 12) are the same point
represented by different coordinate triples.
[ 2, 3, 1] for h = 1
[ 4, 6, 2] for h = 2
[-2,-3,-1] for h = -1
        a  b  m
[T] =   c  d  n
        0  0  1

Translation:
  x*      1  0  tx     x       x + tx
  y*  =   0  1  ty  .  y   =   y + ty
  1       0  0  1      1         1

Rotation (by angle θ):
  x*      cosθ  -sinθ  0     x       x cosθ - y sinθ
  y*  =   sinθ   cosθ  0  .  y   =   x sinθ + y cosθ
  1        0      0    1     1              1

Scaling:
  x*      sx  0   0     x       x . sx
  y*  =   0   sy  0  .  y   =   y . sy
  1       0   0   1     1         1
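A short C sketch of applying a 3 x 3 homogeneous transformation matrix (with h = 1) to a 2D point, as in the matrices above; the Point2D type is just an illustrative helper.

typedef struct { float x, y; } Point2D;

Point2D transformPoint(const float T[3][3], Point2D p)
{
    Point2D q;
    q.x = T[0][0] * p.x + T[0][1] * p.y + T[0][2];   /* x* = a*x + b*y + m */
    q.y = T[1][0] * p.x + T[1][1] * p.y + T[1][2];   /* y* = c*x + d*y + n */
    return q;
}

/* example: a translation by (tx, ty) = (3, 5)
   float T[3][3] = { {1, 0, 3}, {0, 1, 5}, {0, 0, 1} };  */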
Polygon Clipping
A polygon can be clipped by processing the polygon boundary as a whole against each clip
window edge. This can be accomplished by processing all polygon vertices against each clip
rectangle boundary (left, right, bottom, and top) in turn.
Beginning with the initial set of polygon vertices, we first clip the polygon against the left rectangle
boundary to produce a new sequence of vertices. The new set of vertices is then
successively passed to a right boundary clipper, a bottom boundary clipper, and a top boundary
clipper. At each step, a new sequence of output vertices is generated and passed to the next
window boundary clipper.
There are four possible cases when processing vertices in sequence around the perimeter of a polygon. As
each pair of adjacent polygon vertices is passed to a window boundary clipper, we make the following
tests:
Case 1: If the first vertex is outside the window boundary and the second vertex is inside, both the
intersection point of the polygon edge with the window boundary and the second vertex are added to the output
vertex list.
Case 2: If both input vertices are inside the window boundary, only the second vertex is added to the output
vertex list.
Case 3: If the first vertex is inside the window boundary and the second vertex is outside, only the edge
intersection with the window boundary is added to the output vertex list.
Case 4: If both input vertices are outside the window boundary, nothing is added to the output list.
Once all vertices have been processed for one clip window boundary, the output list of vertices is clipped
against the next window boundary.
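A possible C sketch of one stage of this process, clipping the vertex list against a single boundary (here the left edge x = xwmin) using the four cases above; the Vertex type and function names are illustrative. The same routine is then repeated for the right, bottom, and top boundaries.

typedef struct { float x, y; } Vertex;

static int insideLeft(Vertex v, float xwmin) { return v.x >= xwmin; }

/* intersection of edge v1-v2 with the boundary x = xwmin */
static Vertex intersectLeft(Vertex v1, Vertex v2, float xwmin)
{
    Vertex p;
    p.x = xwmin;
    p.y = v1.y + (v2.y - v1.y) * (xwmin - v1.x) / (v2.x - v1.x);
    return p;
}

int clipLeft(const Vertex in[], int nIn, Vertex out[], float xwmin)
{
    int i, nOut = 0;
    for (i = 0; i < nIn; i++) {
        Vertex v1 = in[i];                 /* first vertex of the edge  */
        Vertex v2 = in[(i + 1) % nIn];     /* second vertex of the edge */
        if (insideLeft(v2, xwmin)) {
            if (!insideLeft(v1, xwmin))                     /* Case 1: out -> in */
                out[nOut++] = intersectLeft(v1, v2, xwmin);
            out[nOut++] = v2;                               /* Cases 1 and 2     */
        } else if (insideLeft(v1, xwmin)) {                 /* Case 3: in -> out */
            out[nOut++] = intersectLeft(v1, v2, xwmin);
        }                                                   /* Case 4: nothing   */
    }
    return nOut;   /* number of vertices in the clipped polygon */
}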
Example
[Figure: a polygon with vertices 1 to 6 clipped against a window. Before clipping, some vertices lie outside the window; after clipping, new vertices 1', 2', 3', 4', and 5' are generated where the polygon edges cross the window boundary, and the vertices outside the window are discarded.]
window boundary
First, we construct the scene in World Coordinates from objects modeled in their individual coordinate systems
called Modeling Coordinate System
Next, to obtain a particular orientation for the window, we can set up a two-dimensional Viewing-Coordinate
System in the world-coordinate plane, and define a window.
In the viewing-coordinate system, the viewing coordinate reference frame is used to provide a method for
setting up arbitrary orientations for rectangular windows. Once the viewing reference frame is established,
descriptions from world coordinates are transferred to viewing coordinates. So clipping and 2D
transformations take place in bringing objects from World Coordinate System to Viewing Coordinate System.
Then a viewport in normalized coordinates (in the range from 0 to 1) is defined, and the viewing-coordinate
description of the scene is mapped to normalized coordinates.
At the final step, the parts of the picture that are outside the viewport are clipped, and the contents of the
viewport are transferred to device coordinates.
By changing the position of the viewport, we can view objects at different positions on the display area of an
output device. Also, by varying the size of viewports, we can change the size and proportions of displayed
objects. We achieve zooming effects by successively mapping different-sized windows on a fixed-size
viewport.
3D Viewing Transformation
At the first step, a scene is constructed by transforming object descriptions from modeling coordinates to
world coordinates.
Next, a view mapping converts the world descriptions to viewing coordinates.
At the projection stage, the viewing coordinates are transformed to projection coordinates, which effectively
converts the view volume into a rectangular parallelepiped.
Then, the parallelepiped is mapped into the unit cube, a normalized view volume called the normalized
projection coordinate system.
The mapping to normalized projection coordinates is accomplished by transforming points within the
rectangular parallelepiped into a position within a specified three-dimensional viewport, which occupies part or
all of the unit cube.
Finally, at the workstation stage, normalized projection coordinates are converted to device coordinates for
display. The normalized view volume is the region defined by the planes x = 0, x = 1, y = 0, y = 1, z = 0, z = 1.
We can express the three-dimensional transformation matrix for these operations in the form
Factors Dx Dy , and Dz are the ratios of the dimensions of the viewport and regular parallelepiped view volume
in the x, y, and z directions where the view-volume boundaries are established by the window limits (xwmin ,
xwmax, , ywmin , ywmax) and the positions zfront and zback of the front and back planes.
Viewport boundaries are set with the coordinate values xvmin , xvmax, , yvmin , yvmax, zvmin and zvmax. The additive
translation factors Kx, Ky, and Kz in the transformation are
Window /View port
Viewports refer to rectangular areas inside the window that display graphical data.
Viewports can be of different sizes, but are always smaller than the size of the window.
For practical applications we need a transformation that translates and scales the window contents to a
specified rectangular area on the screen.
The choice of window decides what we want to see; the choice of viewport decides where it is displayed on the
screen.
The display hardware divides the screen into a number of pixels arranged in a grid, with each pixel associated
with its x, y coordinates.
For the VGA graphics card the size of display grid is 640 x 480.
It may sometimes be desirable to select a part of an object or drawing for display in order to obtain sufficient
detail of the object on the display.
For display, one has to convert world coordinates into screen coordinates.
This transformation is called the viewing transformation.
In general, the viewing transformation consists of operations such as scaling, translation, rotation, etc.
Construct world-coordinate scene using modeling-coordinate transformations
 -> Convert world coordinates to viewing coordinates
 -> Map viewing coordinates to normalized coordinates using window / viewport specifications
 -> Map normalized viewport to device coordinates
A point at position (xw, yw) in the window is mapped into position (xv, yv) in the viewport.
Sequence of Transformations
i. Perform scaling transformation using fixed point position(xwmin , ywmin) that scales the window
area to the size of the view port
ii. Translate the scaled window area to the position of the view port
[Figure: the window with corners (xwmin, ywmin) and (xwmax, ywmax) is mapped to the viewport with corners
(xvmin, yvmin) and (xvmax, yvmax).]
Window and View ports
A rectangular area specified in world coordinates is called a window.
A rectangular area on the display device to which a window is mapped is called a view port.
The window defines what is to be viewed; the view port defines where it is to be displayed.
Often windows and view ports are rectangles in standard position with rectangle edges parallel to coordinate axes
The mapping of a part of world coordinate scene to device coordinate is referred to as viewing transformation.
The overall transformation which performs these steps is called the viewing transformation. Let the window
coordinates be (xwmin, ywmin) and (xwmax, ywmax), and the viewport coordinates be (xvmin, yvmin) and (xvmax, yvmax).
The component matrices are

        1  0  -xwmin            sx  0   0            1  0  xvmin
 Tw =   0  1  -ywmin    Swv =   0   sy  0     Tv =   0  1  yvmin
        0  0   1                0   0   1            0  0  1

where sx = (xvmax - xvmin) / (xwmax - xwmin) and sy = (yvmax - yvmin) / (ywmax - ywmin); Tw translates the window
corner to the origin, Swv scales the window to the size of the viewport, and Tv translates the result to the
viewport position.
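As a check on these matrices, the sketch below applies the composite window-to-viewport mapping directly to a point; the function and parameter names are illustrative only.

/* Map a window point (xw, yw) to the viewport point (xv, yv) */
void window_to_viewport(double xw, double yw,
                        double xwmin, double ywmin, double xwmax, double ywmax,
                        double xvmin, double yvmin, double xvmax, double yvmax,
                        double *xv, double *yv) {
    double sx = (xvmax - xvmin) / (xwmax - xwmin);   /* scale factor in x */
    double sy = (yvmax - yvmin) / (ywmax - ywmin);   /* scale factor in y */
    *xv = xvmin + (xw - xwmin) * sx;   /* translate to origin, scale, translate to viewport */
    *yv = yvmin + (yw - ywmin) * sy;
}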
The part of the object which lies inside the view volume will be displayed and the part that lies outside will be
clipped
For a parallel projection, the view volume is a box (rectangular parallelepiped); in the case of a perspective
projection it is a truncated pyramidal volume called a frustum of vision.
The view volume has 6 sides: Left, Right , Bottom, Top, Near and Far
Cohen-Sutherland's region-code approach can be extended to 3D clipping as well.
In 2D, a point is checked to see whether it is inside the visible window/region or not, but in 3D clipping a point
is compared against a plane.
i. Parallel Projection
If both end points have the region code 000000, the line is completely visible.
If the logical AND of the two end-point region codes is not 000000, i.e. the same bit position in both end
points has the value 1, then the line is completely rejected (invisible); otherwise it is a case of partial visibility,
so the intersections with the planes must be computed.
For a line with end points P1 (x1, y1, z1) and P2 (x2, y2, z2), the parametric equations can be expressed as
 x = x1 + u (x2 - x1) ,  y = y1 + u (y2 - y1) ,  z = z1 + u (z2 - z1) ,   0 <= u <= 1
If we are testing a line against the front plane of the viewport, then z = zvmin and
 u = (zvmin - z1) / (z2 - z1)
which, substituted back, gives the x and y coordinates of the intersection point.
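A small sketch of this intersection computation is given below, under the assumption that the line actually crosses the plane z = zvmin (so z2 is not equal to z1); the structure and function names are illustrative.

typedef struct { double x, y, z; } Point3;

/* Intersection of the segment P1-P2 with the front plane z = zvmin */
Point3 intersect_front(Point3 p1, Point3 p2, double zvmin) {
    Point3 r;
    double u = (zvmin - p1.z) / (p2.z - p1.z);   /* parameter value at the plane */
    r.x = p1.x + u * (p2.x - p1.x);
    r.y = p1.y + u * (p2.y - p1.y);
    r.z = zvmin;
    return r;
}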
Thus, instead of representing a point as (x, y, z), we represent it as (x, y, z, H), where two such
quadruples represent the same point if one is a non-zero multiple of the other; the quadruple (0, 0, 0, 0) is
not allowed.
A standard representation of a point (x , y ,z ,H) with H not zero is given by (x/H , y/H, z/H, 1).
Translation:
A point is translated from position P=(x,y,z) to position P’ = (x’,y’,z’) with the matrix operation
x’ 1 0 0 tx x
y’ = 0 1 0 ty . y
z’ 0 0 1 tz z
1 0 0 0 1 1
or P’ = T.P
Parameters tx , ty ,tz specify translation distances for the coordinate directions x ,y and z.
Scaling:
Scaling changes size of an object and repositions the object relative to the coordinate origin.
If the transformation parameters are not all equal, the figure gets distorted,
so we preserve the original shape of an object with uniform scaling (sx = sy = sz).
Matrix expression for scaling transformation of a position P = (x,y,z) relative to the coordinate origin can be
written as :
 x'       sx  0   0   0     x
 y'   =   0   sy  0   0  .  y
 z'       0   0   sz  0     z
 1        0   0   0   1     1
or P’ = S . P
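Translation and scaling (and the rotations below) all reduce to multiplying a 4 x 4 matrix by the homogeneous column vector (x, y, z, 1). A minimal sketch of that multiplication follows; the function name and example values are illustrative.

/* P' = M . P for a point in homogeneous coordinates (x, y, z, 1) */
void transform_point(const double m[4][4], const double p[4], double out[4]) {
    int i, j;
    for (i = 0; i < 4; i++) {
        out[i] = 0.0;
        for (j = 0; j < 4; j++)
            out[i] += m[i][j] * p[j];   /* row i of M dotted with P */
    }
}

/* Example: translation by (tx, ty, tz)                                     */
/*   double T[4][4] = {{1,0,0,tx},{0,1,0,ty},{0,0,1,tz},{0,0,0,1}};         */
/*   double P[4] = {x, y, z, 1}, Pnew[4];                                    */
/*   transform_point(T, P, Pnew);   gives Pnew = (x+tx, y+ty, z+tz, 1)       */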
Reflection:
Reflections with respect to a plane are equivalent to 180° rotations in four-dimensional space.
1 0 0 0
RFz = 0 1 0 0
0 0 -1 0
0 0 0 1
This transformation changes the sign of the z coordinates, leaving the x and y coordinate
values unchanged.
Transformation matrices for inverting x and y values are defined similarly, as reflections relative to the yz
plane and xz plane.
Shearing:
Shearing transformations are used to modify object shapes.
E.g. shears relative to the z axis:
1 0 a 0
SHz = 0 1 b 0
0 0 1 0
0 0 0 1
It alters the x and y coordinate values by an amount that is proportional to the z value while leaving the z
coordinate unchanged.
Rotation:
Rotations about axes that coincide with or are parallel to the coordinate axes are easy to handle. For rotation about the z axis:
x’ cosθ -sinθ 0 0 x
y’ = sinθ cosθ 0 0 . y
z’ 0 0 1 0 z
1 0 0 0 1 1
or,
P’ = Rz (θ) . P
Cyclic permutation of the coordinate parameters x, y and z is used to get the transformation equations for
rotations about the other two coordinate axes:
 x -> y -> z -> x
Substituting this permutation in (i) for an x-axis rotation we get
 y' = y cosθ - z sinθ    z' = y sinθ + z cosθ    x' = x
x’ 1 0 0 0 x
y’ = 0 cosθ -sinθ 0 . y
z’ 0 sinθ cosθ 0 z
1 0 0 0 1 1
or,
P’ = Rx (θ) . P
Substituting the permutation in (i) for a y-axis rotation we get
 z' = z cosθ - x sinθ    x' = z sinθ + x cosθ    y' = y
The 3D y-axis rotation equations are expressed in homogeneous-coordinate form as

 x'        cosθ   0   sinθ   0     x
 y'   =    0      1   0      0  .  y
 z'       -sinθ   0   cosθ   0     z
 1         0      0   0      1     1

or, P' = Ry (θ) . P
Rotation about an Arbitrary Axis
To rotate the arbitrary axis (with direction cosines cx, cy, cz) about the x axis into the xz plane, we use the
projection of the unit axis vector onto the yz plane, which has length
 d = sqrt( cy^2 + cz^2 )
so,
 cos α = cz / d    sin α = cy / d
so the transformation matrix for rotation about the x axis is:

           1   0       0       0        1   0      0      0
 Rx(α) =   0   cos α  -sin α   0   =    0   cz/d  -cy/d   0
           0   sin α   cos α   0        0   cy/d   cz/d   0
           0   0       0       1        0   0      0      1
The translation matrix T brings a point (x0, y0, z0) on the rotation axis to the coordinate origin:

      1  0  0  -x0
      0  1  0  -y0
 T =  0  0  1  -z0
      0  0  0   1

After the rotations Rx(α) and Ry(β) align the arbitrary axis with the z axis, the rotation itself is given by the
z-axis rotation matrix Rz(θ), so the transformation matrix for rotation about an arbitrary axis can be expressed
as the composition of these seven individual transformations:
 R(θ) = T^-1 . Rx^-1(α) . Ry^-1(β) . Rz(θ) . Ry(β) . Rx(α) . T
If the values of the direction cosines (cx, cy, cz) are not known, they can be obtained from a second point on
the axis (x1, y1, z1) by normalizing the vector from the first point to the second.
The vector along the axis from (x0, y0, z0) to (x1, y1, z1) is
 [V] = [ (x1 - x0)  (y1 - y0)  (z1 - z0) ]
Normalized, it yields the direction cosines
 cx = (x1 - x0)/|V| ,  cy = (y1 - y0)/|V| ,  cz = (z1 - z0)/|V| ,
where |V| = sqrt( (x1 - x0)^2 + (y1 - y0)^2 + (z1 - z0)^2 ).
Reflection about an Arbitrary Plane
Often it is necessary to reflect an object through a plane other than one of the coordinate planes. This is
obtained with the help of a series of transformations (a composition):
i. translate a known point P that lies in the reflection plane to the origin of the coordinate system
ii. rotate the normal vector to the reflection plane at the origin until it is coincident with the z axis
this makes the reflection plane the z = 0 coordinate plane
iii. after applying the above transformations to the object, reflect the object through the z = 0 coordinate
plane, i.e.
1 0 0 0
RFz = 0 1 0 0
0 0 -1 0
0 0 0 1
iv. Perform the inverses of the transformations given above to restore the object to its original position.
So the general transformation matrix is:
 M = T^-1 . Rx^-1(α) . Ry^-1(β) . RFz . Ry(β) . Rx(α) . T
NOTE:
- See class notes for rotation about arbitrary axis and rotation about arbitrary
plane
In 3D we specify a view volume in the world (only those objects within the view volume will appear in the
display on the output device; others are clipped from the display), a projection onto the projection plane, and a
viewport on the view surface.
So objects in the 3D world are clipped against the 3D view volume and are then projected.
The contents of the projection of the view volume onto the projection plane, called the window, are then
transformed (mapped) onto the viewport for display.
Projections
Transform points in coordinate system of dimension ‘n’ into points in a coordinate system of dimension less
than ‘n’.
These are planar geometric projection as the projection is onto a plane rather than some curved surface and
uses straight rather than curved projectors.
There are 2 types of projections, perspective and parallel. The distinction is in the relation of the center of
projection to the projection plane: if the distance between them is finite, the projection is perspective; if it is
infinite, the projection is parallel.
Perspective
Visual effect of perspective projection is similar to that of photographic system and human visual system.
Size of perspective projection of an object varies inversely with distance of that object from the center of
projection
Although objects tend to look realistic, a perspective projection is not particularly useful for recording the
exact shape and measurements of objects.
The perspective projection of any set of parallel lines that are not parallel to the projection plane converges to a
vanishing point.
If the set of lines is parallel to one of the three principal axes, the vanishing point is called an axis (principal)
vanishing point.
e.g. if the projection plane cuts only the z axis and is normal to it, only the z axis has a principal vanishing point,
since lines parallel to the x or y axes are also parallel to the projection plane and have no vanishing points.
In the figure, lines parallel to x and y do not converge; only lines parallel to the z axis converge.
Parallel
Coordinate positions are transformed to the view plane along parallel lines
Preserves relative proportions of objects, so that accurate views of the various sides of an object are obtained,
but does not give a realistic representation of the 3D object.
Used to produce the front, side and top views of an object. Front, side and rear
orthographic projections of an object are called elevations
Views that display more than one face of an object are called axonometric orthographic
projections. Most commonly used axonometric projection is the isometric projection.
Transformation equations
If the view plane is placed at position zvp along the zv axis, then any point (x, y, z) is
transformed to projection coordinates as
 xp = x ,  yp = y
The z value is preserved for the depth information needed for visible-surface detection.
Oblique Projection
In an oblique projection, coordinate positions are projected to the view plane along parallel lines that are not
perpendicular to the view plane. A point (x, y, z) is projected to position (xp, yp) on the view plane; the line
from its orthographic projection point (x, y) to (xp, yp) has length L and is at an angle Θ with the horizontal
direction in the projection plane.
Expressing the projection coordinates in terms of x, y, L and Θ:
 xp = x + L cosΘ
 yp = y + L sinΘ
L depends on the angle α (between the oblique projection line and the line of length L in the view plane) and
on the z coordinate of the point to be projected:
 tan α = z / L , so L = z / tan α = z L1 , where L1 = 1/tan α is the value of L when z = 1.
1 0 L1cosΘ 0
0 1 L1sinΘ 0
Mparallel = 0 0 0 0
0 0 0 1
[Figure: a smooth curve over control points P0(x0, y0), P2(x2, y2), P3(x3, y3), shown twice: once as the exact
curve and once approximated by five line segments computed at parameter values u0, u1, ..., u5.]
The above figure shows a smooth curve comprising a large number of very small line segments. To understand
how such a curve is drawn, we deal with the curve shown above, which is an approximation of the curve with
five line segments only.
The approach below is used to draw a curve for any number of control points
Suppose P0,P1,P2,P3 are four control points
Let nSeg be the number of line segments used to approximate the curve. For each i = 0 to nSeg,
 u = i / nSeg , so 0 <= u <= 1 (giving parameter values u0, u1, ..., u_nSeg)
For n + 1 control points, the x coordinate of the curve point is
 x(u) = Σ (k = 0 to n) xk BEZk,n(u)
which for four control points (n = 3) expands to
 x(u) = x0 BEZ0,3(u) + x1 BEZ1,3(u) + x2 BEZ2,3(u) + x3 BEZ3,3(u)
similarly
 y(u) = Σ (k = 0 to n) yk BEZk,n(u)
 y(u) = y0 BEZ0,3(u) + y1 BEZ1,3(u) + y2 BEZ2,3(u) + y3 BEZ3,3(u)
or, in matrix form with the control points written as a column,
                                                                 P0
                                                                 P1
 Q(u) = [ (1-3u+3u^2-u^3)  (3u-6u^2+3u^3)  (3u^2-3u^3)  u^3 ]    P2
                                                                 P3
or
                              -1   3  -3   1     P0
 Q(u) = [ u^3  u^2  u  1 ]     3  -6   3   0     P1
                              -3   3   0   0     P2
                               1   0   0   0     P3
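A minimal sketch of this evaluation for four control points, with the cubic Bernstein polynomials written out directly, is shown below; the structure and function names are illustrative.

typedef struct { double x, y; } Point2;

/* Point on the cubic Bezier curve defined by p[0..3] at parameter u in [0,1] */
Point2 bezier_cubic(const Point2 p[4], double u) {
    double b0 = (1 - u) * (1 - u) * (1 - u);   /* BEZ 0,3 (u) */
    double b1 = 3 * u * (1 - u) * (1 - u);     /* BEZ 1,3 (u) */
    double b2 = 3 * u * u * (1 - u);           /* BEZ 2,3 (u) */
    double b3 = u * u * u;                     /* BEZ 3,3 (u) */
    Point2 q;
    q.x = b0 * p[0].x + b1 * p[1].x + b2 * p[2].x + b3 * p[3].x;
    q.y = b0 * p[0].y + b1 * p[1].y + b2 * p[2].y + b3 * p[3].y;
    return q;
}

/* The curve is drawn by joining the points obtained for u = 0, 1/nSeg, 2/nSeg, ..., 1 */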
[Figure: a Bezier curve and its control polygon, with control points P0, P1, P3 labelled.]
2. Four Bezier polynomials are used in the construction of curve to fit four control points
3. It always passes thru the end points
4. Closed curves can be generated by specifying the first and last control points at the same position
[Figure: a closed Bezier curve generated by specifying the first and last of the control points P0 ... P4 at the
same position.]
5. Specifying multiple control points at a single position gives more weight to that position
6. Complicated curves are formed by piecing several sections of lower degrees together
7. The tangent to the curve at an end point is along the line joining the end point to the adjacent control point
Bezier Surfaces
Two sets of orthogonal Bezier curves can be used to design an object surface by specifying an input
mesh of control points. The parametric vector function for the Bezier surface is formed as the Cartesian
product of Bezier blending functions:
Bezier surfaces are defined by simple generalization of the curve formulation. Here, tensor product
approach is used with two directions of parameterization ‘u’ and ‘v’.
Any point on the surface can be located to given values of parametric pair by
 P(u,v) = Σ (j = 0 to m) Σ (k = 0 to n)  Pj,k BEZj,m(u) BEZk,n(v) ,   0 ≤ u, v ≤ 1
As in the case of Bezier curves, the Pj,k define the control vertices, and BEZj,m(u) and BEZk,n(v) are the
Bernstein blending functions in the u and v directions.
The Bézier functions specify the weighting of a particular control point; they are the Bernstein coefficients. The
definition of the Bézier functions is
 BEZj,m(u) = C(m, j) u^j (1 - u)^(m - j)
 BEZk,n(v) = C(n, k) v^k (1 - v)^(n - k)
where C(m, j) and C(n, k) represent the binomial coefficients,
 C(m, j) = m! / ( j! (m - j)! )
 C(n, k) = n! / ( k! (n - k)! )
When u = 0, the function BEZj,m(u) is one for j = 0 and zero for all other values of j.
When we combine two orthogonal parameters, we find a Bézier curve along each edge of the surface, as
defined by the points along that edge.
Bézier surfaces are useful for interactive design and were first applied to car body design.
The degree of the blending functions does not have to be the same in the two parametric directions; it could be
cubic in u and quadratic in v.
- The surface is contained within the convex hull of the control points
- The corners of the surface and the corner control vertices are coincident
Two sets of orthogonal Bezier curves can be used to design an object surface by specifying an input
mesh of control points.
The parametric vector function for the Bezier surface is formed as the Cartesian product of Bezier blending
functions, with Pj,k specifying the location of the (m + 1) by (n + 1) control points.
Bezier surfaces constructed for m = 3, n = 3 and m = 4, n = 4:
Dashed lines connect the control points
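A minimal sketch of evaluating one surface point from an (m + 1) by (n + 1) control mesh is given below, using a small helper for the Bernstein blending functions; the names and the fixed maximum mesh size are assumptions for illustration only.

#include <math.h>

#define MAXC 8   /* assumed maximum number of control points per direction */

typedef struct { double x, y, z; } Point3;

static double binom(int n, int k) {            /* binomial coefficient C(n, k) */
    double r = 1.0; int i;
    for (i = 1; i <= k; i++) r = r * (n - k + i) / i;
    return r;
}

static double bez(int k, int n, double t) {    /* BEZ k,n (t) = C(n,k) t^k (1-t)^(n-k) */
    return binom(n, k) * pow(t, k) * pow(1 - t, n - k);
}

/* Point on the Bezier surface with control mesh P[j][k], 0 <= j <= m, 0 <= k <= n */
Point3 bezier_surface(Point3 P[MAXC][MAXC], int m, int n, double u, double v) {
    Point3 q = {0.0, 0.0, 0.0};
    int j, k;
    for (j = 0; j <= m; j++)
        for (k = 0; k <= n; k++) {
            double w = bez(j, m, u) * bez(k, n, v);   /* weight of control point Pj,k */
            q.x += w * P[j][k].x;
            q.y += w * P[j][k].y;
            q.z += w * P[j][k].z;
        }
    return q;
}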
FRACTAL-GEOMETRY METHODS
Natural objects, such as mountains and clouds, do not have smooth surfaces or regular shapes; instead they have
fragmented features, and Euclidean methods do not realistically model these objects.
Natural objects can be realistically described with fractal-geometry methods, where procedures rather than
equations are used to model objects.
In computer graphics, fractal methods are used to generate displays of natural objects and visualizations of
various mathematical and physical systems.
We describe a fractal object with a procedure that specifies a repeated operation for producing the detail in the
object subparts. Natural objects are represented with procedures that theoretically repeat an infinite number of
times. Graphics displays of natural objects are, of course, generated with a finite number of steps.
If we zoom in on a continuous Euclidean shape, no matter how complicated, we can eventually get the zoomed-
in view to smooth out.
But if we zoom in on a fractal object, we continue to see as much detail in the magnification as we did in the
original view.
Zooming in on a graphics display of a fractal object is obtained by selecting a smaller window and repeating
the fractal procedures to generate the detail in the new window.
A consequence of the infinite detail of a fractal object is that it has no definite size. As we consider more and
more detail, the size of an object tends to infinity, but the coordinate extents of the object remain bound within a
finite region of space.
The amount of variation in the object detail is described with a number called the fractal dimension.
In graphics applications, fractal representations are used to model terrain, clouds, water, trees and other plants,
feathers, fur, and various surface textures, and just to make pretty patterns.
Fractal-Generation Procedures
A fractal object is generated by repeatedly applying a specified transformation function to points within a
region of space.
If P0 = (x0, y0, z0) is a selected initial point, each iteration of a transformation function F generates successive
levels of detail with the calculations
 P1 = F(P0),  P2 = F(P1),  P3 = F(P2),  ...
In general, the transformation function can be applied to a specified point set, or we could apply the
transformation function to an initial set of primitives, such as straight lines, curves, color areas, surfaces, and
solid objects.
Also, we can use either deterministic or random generation procedures at each iteration. The transformation
function may be defined in terms of geometric transformations (scaling, translation, rotation), or it can be set up
with nonlinear coordinate transformations and decision parameters.
Although fractal objects, by definition, contain infinite detail, we apply the transformation function a finite
number of times. Therefore, the objects we display actually have finite dimensions.
A procedural representation approaches a "true" fractal as the number of transformations is increased to produce
more and more detail.
The amount of detail included in the final graphical display of an object depends on the number of iterations
performed and the resolution of the display system
We cannot display detail variations that are smaller than the size of a pixel.
To see more of the object detail, we zoom in on selected sections and repeat the transformation function
iterations.
Classification of Fractals
i. Self-similar fractals
Self-similar fractals have parts that are scaled-down versions of the entire object.
Starting with an initial shape, the object subparts are constructed by applying a scaling parameter s to the overall
shape.
The same scaling factors can be used for all subparts, or different scaling factors can be used for different
scaled-down parts of the object.
If random variations are applied to the scaled-down subparts, the fractal is said to be statistically self-similar.
The parts then have the same statistical properties.
Statistically self-similar fractals are commonly used to model trees, shrubs, and other plants.
ii. Self-affine fractals
Self-affine fractals have parts that are formed with different scaling parameters sx, sy, sz in different coordinate
directions.
Random variations can also be used to obtain statistically self-affine fractals.
Terrain, water, and clouds are typically modeled with statistically self-affine fractal construction methods.
iii. Invariant fractal sets
These are formed with nonlinear transformations. This class of fractals includes self-squaring fractals, such as the
Mandelbrot set, which are formed with squaring functions in complex space, and self-inverse fractals, formed
with inversion procedures.
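As an illustration of a squaring function in complex space, the sketch below tests whether a point c belongs to the Mandelbrot set by iterating z = z^2 + c from z = 0. The iteration limit and escape radius are conventional choices for display purposes, not fixed by the definition.

/* Returns the number of iterations before |z| exceeds 2, or max_iter if it never does;
   points that survive max_iter iterations are taken to lie in the Mandelbrot set. */
int mandelbrot(double cx, double cy, int max_iter) {
    double zx = 0.0, zy = 0.0;
    int i;
    for (i = 0; i < max_iter; i++) {
        double rx = zx * zx - zy * zy + cx;   /* real part of z^2 + c      */
        double ry = 2.0 * zx * zy + cy;       /* imaginary part of z^2 + c */
        zx = rx;
        zy = ry;
        if (zx * zx + zy * zy > 4.0)          /* escape radius 2 */
            return i;
    }
    return max_iter;
}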
Fractal Dimension
The detail variation in a fractal object can be described with a number D, called the fractal dimension, which is
a measure of the roughness, or fragmentation, of the object.
Iterative procedures can be set up to generate fractal objects using a given value for the fractal dimension D.
An expression for the fractal dimension of a self-similar fractal, constructed with a single scalar factor s, is
obtained by analogy with the subdivision of a Euclidean object.
Suppose an object is composed of clay or elastic. If it is deformed into a line then its topological dimension D t
is 1, if it is deformed into a plane or a disk then the topological Dimension is 2 and if it is deformed into a ball
or a cube then its topological dimension is 3
The relationships between the scaling factor s and the number of subparts n for subdivision of a unit straight-
line segment, a square, and a cube can be shown.
If we take a line segment of length L and divide it into n equal pieces, each piece has length l = L/n,
so the scaling factor is s = 1/n.
If it is broken into two pieces, s = 1/2: the unit line segment is divided into two equal-length subparts.
For each of these objects, the relationship between the number of subparts and the scaling factor is n . s^D = 1. In
analogy with Euclidean objects, the fractal dimension D for self-similar objects can be obtained by solving
 n . s^D = 1 ,  i.e.  D = ln n / ln (1/s)
The fractal dimension gives the measure of the roughness or fragmentation of objects and is always greater than
the corresponding topological dimension
For a self-similar fractal constructed with different scaling factors for the different parts, the fractal similarity
dimension is obtained from the implicit relationship
 Σ (k = 1 to n) sk^D = 1
where sk is the scaling factor for subpart number k.
To geometrically construct a deterministic (nonrandom) self-similar fractal, we start with a given geometric
shape, called the initiator. Subparts of the initiator are then replaced with a pattern, called the generator.
Koch Curve
1. Start with a straight line segment (the initiator).
2. Divide it into thirds (i.e. scaling factor = 1/3) and replace the center third by the two adjacent sides of an
equilateral triangle.
3. There are now 4 equal-length segments, each 1/3 of the original length, so the new curve has 4/3 of the
original length.
7. Repeat this indefinitely; the length increases by a factor of 4/3 every time. The curve becomes infinitely long
but is folded into lots of tiny wiggles.
8. Its topological dimension is 1, and its fractal dimension can be calculated as follows:
We have to assemble N = 4 such curves to make the original curve, and the scaling factor is S = 3, as each
segment has 1/3 of the original segment length, so
 D = log 4 / log 3 ≈ 1.2619
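A minimal recursive sketch of the Koch construction follows: each segment is replaced by four segments of one-third the length, and the recursion stops at a chosen depth. The drawing routine draw_line is an assumption, standing in for whatever line primitive is available.

#include <math.h>

/* Assumed to be provided elsewhere: draws a straight segment from (x1,y1) to (x2,y2) */
void draw_line(double x1, double y1, double x2, double y2);

/* Replace the segment (x1,y1)-(x2,y2) by the Koch generator, to the given depth */
void koch(double x1, double y1, double x2, double y2, int depth) {
    if (depth == 0) {
        draw_line(x1, y1, x2, y2);
        return;
    }
    double dx = (x2 - x1) / 3.0, dy = (y2 - y1) / 3.0;
    double ax = x1 + dx,     ay = y1 + dy;       /* 1/3 point of the segment */
    double bx = x1 + 2 * dx, by = y1 + 2 * dy;   /* 2/3 point of the segment */
    /* apex of the equilateral triangle erected on the middle third */
    double px = ax + dx * 0.5 - dy * sqrt(3.0) / 2.0;
    double py = ay + dy * 0.5 + dx * sqrt(3.0) / 2.0;
    koch(x1, y1, ax, ay, depth - 1);
    koch(ax, ay, px, py, depth - 1);
    koch(px, py, bx, by, depth - 1);
    koch(bx, by, x2, y2, depth - 1);
}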
Peano Curve
It is also called space filling curve and is used for filling two dimensional object e.g. a square
1. Sub-divide a square into 4 quadrants and draw the curve which connects the center points of each
2. Further subdivide each of the quadrants and connect the centers of each of these finer divisions before
moving to the next major quadrant
3. The third approximation subdivides again; it connects the centers of the finest level before stepping to
the next level of detail.
The above process is continued indefinitely, depending upon the degree of roughness of the curve required.
- With each subdivision the length increases by a factor of 4 and since there is no limit to subdivision
there is no limit to the length
- The curve constructed is topologically equivalent to the line Dt = 1 but it is so twisted and folded that it
exactly fills up a square
- The Fractal dimension of the curve:
For the square it takes 4 curves of half scale to build the full sized object so the Dimension is
given by
D = ( log 4) / ( log (2)) So, the Fractal Dimension is 2 and the Topological Dimension is 1
One way to introduce some randomness into the geometric construction of a self-similar fractal is to choose a
generator randomly at each step from a set of predefined shapes. Another way to generate random self-similar
objects is to compute coordinate displacements randomly.
A random snowflake pattern can be created by selecting a random, midpoint displacement distance at each step.
Highly realistic representations for terrain and other natural objects can be obtained using affine fractal methods
that model object features as fractional Brownian motion.
This is an extension of standard Brownian motion, a form of "random walk" that describes the erratic, zigzag
movement of particles in a gas or other fluid.
To be able to solve (iii) the twelve unknown coefficients aij (algebraic coefficients) must be specified
From the known end point coordinates of each segment, six of the twelve needed equations are obtained.
The other six are found by using tangent vectors at the two ends of each segment
The direction of the tangent vectors establishes the slopes (direction cosines) of the curve at the end points.
[Figure: a curve segment between end points P1 and P2, with the tangent vector drawn at P1.]
This procedure for defining a cubic curve using end points and tangent vector is one form of hermite
interpolation
Each cubic curve segment is parameterized from 0 to 1 so that known end points correspond to the limit
values of the parametric variable t, that is P(0) and P(1)
Substituting t = 0 and t = 1, the relationship between the two end-point vectors and the algebraic coefficients
is found:
 P(0) = a0    P(1) = a3 + a2 + a1 + a0
To find the tangent vectors equation ii must be differentiated with respect to t
P’(t) = 3a3t2 + 2a2t + a1
The tangent vectors at the two end points are found by substituting t = 0 and t = 1 in this equation
P’(0) = a1 P’(1) = 3a3 + 2a2 + a1
The algebraic coefficients ‘ai‘ in equation (ii) can now be written explicitly in terms of boundary
conditions – endpoints and tangent vectors are
a0= P(0) a1= P’(0)
a2= -3 P (0) + 3 P (1) -2 P’(0) - P’(1) a3= 2 P (0) - 2 P (1) + P’(0) + P’(1)
substituting these values of ‘ai‘ in equation (ii) and rearranging the terms yields
P(t) = (2t3 - 3t2 + 1) P(0) + (-2t3 + 3t2) P(1) + (t3 - 2t2 + t) P’(0)+ (t3 - t2 ) P’(1)
The values of P(0), P(1), P’(0), P’(1) are called geometric coefficients and represent the known vector
quantities in the above equation
The polynomial coefficients of these vector quantities are commonly known as blending functions
By varying parameter t in these blending function from 0 to 1 several points on curve segments can be found
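A minimal sketch of evaluating a point on one Hermite segment from the two end points and the two tangent vectors, using the blending functions above, is given below; the structure and function names are illustrative.

typedef struct { double x, y, z; } Point3;

/* Point on a Hermite segment at parameter t in [0,1],
   given end points p0 = P(0), p1 = P(1) and tangents t0 = P'(0), t1 = P'(1) */
Point3 hermite(Point3 p0, Point3 p1, Point3 t0, Point3 t1, double t) {
    double h0 =  2*t*t*t - 3*t*t + 1;   /* blends P(0)  */
    double h1 = -2*t*t*t + 3*t*t;       /* blends P(1)  */
    double h2 =  t*t*t - 2*t*t + t;     /* blends P'(0) */
    double h3 =  t*t*t - t*t;           /* blends P'(1) */
    Point3 q;
    q.x = h0*p0.x + h1*p1.x + h2*t0.x + h3*t1.x;
    q.y = h0*p0.y + h1*p1.y + h2*t0.y + h3*t1.y;
    q.z = h0*p0.z + h1*p1.z + h2*t0.z + h3*t1.z;
    return q;
}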
Parameter ‘u’ takes values from 0 to 1, and coordinate position (x', y', z') represents any point along the projection line.
When u = 0, we are at position P = (x,y, z)
At the other end of the line, u = 1 and we have the projection reference point coordinates (0, 0, zprp). On the view plane, z' = zvp, and we can solve the z' equation for parameter u at this position along the projection line:
 u = (zvp - z) / (zprp - z)
Substituting this value of u into the equations for x' and y', we obtain the perspective transformation equations
 xp = x ( (zprp - zvp) / (zprp - z) ) = x ( dp / (zprp - z) )
 yp = y ( (zprp - zvp) / (zprp - z) ) = y ( dp / (zprp - z) )
where dp = zprp - zvp is the distance of the view plane from the projection reference point.
Using a three-dimensional homogeneous-coordinate representation, we can write the perspective projection transformation in matrix form as
and the projection coordinates on the view plane are calculated from the homogeneous coordinates as
where the original z-coordinate value would be retained in projection coordinates for visible-surface and other depth processing.
In general, the projection reference point does not have to be along the zv axis.
There are a number of special cases for the perspective transformation equations.
If the view plane is taken to be the uv plane, then zvp = 0 and the projection coordinates are
 xp = x ( zprp / (zprp - z) ) = x ( 1 / (1 - z/zprp) ) ,  yp = y ( zprp / (zprp - z) ) = y ( 1 / (1 - z/zprp) )
And, in some graphics packages, the projection reference point is always taken to be at the viewing-coordinate origin. In this case, zprp = 0 and the projection coordinates on the viewing plane are
 xp = x ( zvp / z ) ,  yp = y ( zvp / z )
When a three-dimensional object is projected onto a view plane using perspective transformation equations, any set of parallel lines in the object that
are not parallel to the plane are projected into converging lines.
Parallel Lines that are parallel to the view plane will be projected as parallel lines.
The point at which a set of projected parallel lines appears to converge is called a vanishing point.
Each such set of projected parallel lines will have a separate vanishing point; and in general, a scene can have any number of vanishing points,
depending on how many sets of parallel lines there are in the scene.
The vanishing point for any set of lines that are parallel to one of the principal axes of an object is referred to as a principal vanishing point.
The number of principal vanishing points (one, two, or three) is controlled by the orientation of the projection plane, and perspective projections
are accordingly classified as one-point, two-point, or three-point projections.
The number of principal vanishing points in a projection is determined by the number of principal axes intersecting the view plane.
Perspective Projection (Standard)
[Figure: standard perspective projection. A point P(x, y, z) is projected along the line to the center of projection
c(0, 0, -D) onto the view plane z = 0, giving P'(x', y', 0); the point A(x, 0, z) projects to A'(x', 0, 0), B(0, y, z)
lies in the yz plane, and O(0, 0, 0) is the origin.]
Here the center of projection is c(0, 0, -D) on the z axis, the reference point is taken as the origin of the world
coordinate space Wc, and the normal vector N of the view plane is aligned with the z axis.
So the view plane vp is the xy plane and the center of projection is c(0, 0, -D). From similar triangles in the figure,
 x / x' = (z + D) / D
or
 x' = D x / (z + D)
Similarly,
 y / y' = (z + D) / D
or
 y' = D y / (z + D)
and z' = 0
Now, in homogeneous coordinates,

          X'                D X                 D  0  0  0     X
 Pper =   Y'  =  1/(z+D)    D Y   =  1/(z+D)    0  D  0  0  .  Y
          Z'                 0                  0  0  0  0     Z
          1                 z+D                 0  0  1  D     1
A unit cube is projected onto the xy plane. Draw the projected image using the standard perspective
transformation, where the center of projection is (0, 0, -20).
Here,
Center of projection = (0, 0, -20), i.e. D = 20
20 0 0 0
Persp = 0 20 0 0
0 0 0 0
0 0 1 20
The homogeneous coordinates of the eight cube vertices A ... H are the columns of

      0 1 1 0 0 0 1 1
 V =  0 0 1 1 1 0 0 1
      0 0 0 0 1 1 1 1
      1 1 1 1 1 1 1 1

 V' = Persp * V

      20  0  0  0     0 1 1 0 0 0 1 1
   =   0 20  0  0  .  0 0 1 1 1 0 0 1
       0  0  0  0     0 0 0 0 1 1 1 1
       0  0  1 20     1 1 1 1 1 1 1 1

       0 20 20  0  0  0 20 20
   =   0  0 20 20 20  0  0 20
       0  0  0  0  0  0  0  0
      20 20 20 20 21 21 21 21

Hence, dividing each column by its homogeneous coordinate (z + D):
For the first four vertices (z = 0): 1/(z + D) = 1/(0 + 20) = 1/20
 A' = (0, 0, 0)   B' = (1, 0, 0)   C' = (1, 1, 0)   D' = (0, 1, 0)
For the last four vertices (z = 1): 1/(z + D) = 1/(1 + 20) = 1/21
 E' = (0, 20/21, 0)   F' = (0, 0, 0)   G' = (20/21, 0, 0)   H' = (20/21, 20/21, 0)
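A minimal sketch of this standard perspective projection applied to a single vertex, with center of projection (0, 0, -D) and view plane z = 0, is shown below; it reproduces the hand computation above for D = 20. The function name is illustrative.

/* Project (x, y, z) onto the z = 0 plane with center of projection (0, 0, -D) */
void perspective_project(double x, double y, double z, double D,
                         double *xp, double *yp) {
    double w = z + D;      /* homogeneous coordinate produced by the projection matrix */
    *xp = D * x / w;
    *yp = D * y / w;
    /* the projected z coordinate is 0 on the view plane */
}

/* e.g. the cube vertex (1, 1, 1) with D = 20 projects to (20/21, 20/21, 0) */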
3D Object Representation (Polygon Surfaces: Planar)
Graphics scenes can contain trees, flowers, clouds, rocks, water, rubber, paper, bricks, etc.
Polygon and quadric surfaces provide precise descriptions for simple Euclidean objects such as polyhedrons and
ellipsoids.
Polygon Surfaces
A 3-D graphics object is a set of surface polygons that enclose the object interior.
A polygon mesh is a set of connected, polygonally bounded planar surfaces.
The equation of a plane is Ax + By + Cz + D = 0.
A polygon mesh is a collection of edges, vertices and polygons connected such that each edge is shared by at
most two polygons.
Polygon Tables
Polygon data tables can be organized into two groups: geometrical and attribute tables
Geometric data tables contain vertex coordinates and parameters to identify the spatial orientation of polygon
surfaces
Attribute information for an object includes parameters specifying the degree of transparency of object and
its surface reflectivity and texture characteristics.
Plane Equations
The equation for a plane surface can be expressed as
Ax + By + Cz + D =0 …….(i)
Where, (x,y,z) is any point on the plane- coefficients ABCD are constants describing the spatial
properties of the plane
For solving A, B, C, D, consider three successive polygon vertices (x1, y1, z1), (x2, y2, z2), (x3, y3, z3).
So equation (i) is modified to
 (A/D) xk + (B/D) yk + (C/D) zk = -1 ,   k = 1, 2, 3   .......(ii)
Expanding Determinants,
A = y1 ( z2 - z3 ) + y2 ( z3 – z1 ) + y3 ( z1 – z2 )
B = z1 ( x2 - x3 ) + z2 ( x3 – x1 ) + z3 ( x1 – x2 )
C = x1 ( y2 - y3 ) + x2 ( y3 – y1 ) + x3 ( y1 – y2 )
D = -x1 (y2z3 – y3z2 ) - x2 (y3z1 – y1z3 ) - x3 (y1z2 – y2z1 )
The vector N, normal to the surface of a plane described by equation Ax + By + Cz + D =0 has Cartesian
Components (A,B,C)
Plane equations are used to identify the position of spatial points relative to the plane surfaces of an object. For
any point (x, y, z) not on a plane with parameters A, B, C, D, we have
 Ax + By + Cz + D ≠ 0
We can identify the point as either inside or outside the plane surface according to the sign (+ or -) of
Ax + By + Cz + D.
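A minimal sketch that computes A, B, C, D from three successive vertices using the expressions above, and then classifies a test point by the sign of Ax + By + Cz + D, is given below; the structure and function names are illustrative.

typedef struct { double x, y, z; } Vertex;
typedef struct { double A, B, C, D; } Plane;

/* Plane through three successive polygon vertices v1, v2, v3 */
Plane plane_from_vertices(Vertex v1, Vertex v2, Vertex v3) {
    Plane p;
    p.A = v1.y*(v2.z - v3.z) + v2.y*(v3.z - v1.z) + v3.y*(v1.z - v2.z);
    p.B = v1.z*(v2.x - v3.x) + v2.z*(v3.x - v1.x) + v3.z*(v1.x - v2.x);
    p.C = v1.x*(v2.y - v3.y) + v2.x*(v3.y - v1.y) + v3.x*(v1.y - v2.y);
    p.D = -v1.x*(v2.y*v3.z - v3.y*v2.z)
          -v2.x*(v3.y*v1.z - v1.y*v3.z)
          -v3.x*(v1.y*v2.z - v2.y*v1.z);
    return p;
}

/* Returns > 0, < 0 or 0 depending on which side of the plane the point q lies on */
double side_of_plane(Plane p, Vertex q) {
    return p.A*q.x + p.B*q.y + p.C*q.z + p.D;
}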
Polygon Meshes
Some graphics packages (PHIGS programmer’s Hierarchical Interactive Graphics Standard) provide several
polygon functions for modeling objects.
One type of polygon mesh is the triangle strip.
When a polygon is specified with more than three vertices, the vertices may not all be exactly coplanar.
This can be due to numerical error or to errors in selecting coordinate positions for the vertices.
Remedies:
i. Simply divide the polygons into triangles
ii. Approximate the plane parameters A, B, C from the projections of the polygon onto the coordinate planes
(i.e. approximate A from the projection onto the yz plane, B from the xz plane, C from the xy plane, etc.)
High quality graphics systems typically model objects with polygon meshes and set up a database of geometric
and attribute information to facilitate processing of the polygon facets.
Spline representations are examples of generating curves and surfaces. These are commonly used to design
new object shapes , to digitize drawings and to describe animation paths.
Curve-fitting methods are also used to display graphs of data values by fitting specified curve functions to a
discrete data set, using regression techniques such as the least-squares method.
Features of Open GL:
Texture Mapping: The ability to apply an image to a graphics surface. This technique is used to
rapidly generate realistic images without having to specify an excessive amount of detail
regarding pixel coordinates, textures, etc.
Z-Buffering: The ability to calculate the distance of each surface point from the viewer's location. This makes
it easy for the program to automatically remove surfaces or parts of surfaces that are hidden from view.
Double Buffering: Support for smooth animation using double buffering. A smooth animation
sequence is achieved by drawing into the back buffer while displaying the front buffer and then
swapping the buffers when ready to display the next animation sequence
Lighting Effects: The ability to calculate the effects on the lightness of a surface’s color when
different lighting models are applied to the surface from one or more light sources
Smooth Shading: The ability to calculate the shading effect that occurs when light hits a surface
at an angle and results in subtle color differences across the surface. This effect is important for
making a model look realistic.
Material Properties: The ability to specify the material properties of a surface. These properties
modify the lighting effects on the surface by specifying such things as the dullness or shininess of the
surface.
Alpha Blending: The ability to specify alpha or opacity value in addition to the regular red ,
green, blue values. The alpha component is used to specify opacity, allowing the full range from
completely transparent to totally opaque. When used in combination with z buffer, Alpha
blending gives the effect of being able to see through objects
Developer’s Advantage:
Industry Standard: the OpenGL Architecture Review Board, an independent consortium, guides
the OpenGL specification. OpenGL is the only truly open, vendor-neutral, multiplatform graphics
standard with broad industry support.
Stability: Updates to the OpenGL specification are carefully controlled, and updates are announced
in time for developers to adopt changes. Backward-compatibility requirements ensure the
viability of existing applications.
Portability: applications produce consistent visual results on any OpenGL API-compliant
hardware, regardless of OS or windowing system. So once a program is written for one platform,
it can be ported to other platforms as well.
Evolving: New hardware innovations are accessible through the API via the OpenGL extension
mechanism. Innovations are phased in to enable developers and hardware vendors to incorporate
new features into their product release cycles.
Scalability: OpenGL applications can be scaled to any class of machine, everything from
consumer electronics to PCs, workstations, and supercomputers.
Ease of Use: Efficient OpenGL routines typically result in applications with fewer lines of code
than programs created with other graphics libraries or packages. The OpenGL driver encapsulates the
information about the underlying hardware, so the application programmer does not need to be
concerned with designing for specific hardware features.
History
OpenGL was first created as an open and reproducible alternative to Iris GL which had been
the proprietary graphics API on Silicon Graphics workstations.
Although OpenGL was initially similar in some respects to IrisGL, the lack of a formal
specification and conformance tests made IrisGL unsuitable for broader adoption. Mark Segal
and Kurt Akeley authored the OpenGL 1.0 specification, which tried to formalize the definition
of a useful graphics API and made cross-platform, non-SGI third-party implementation and support
viable. One notable omission from version 1.0 of the API was texture objects.
IrisGL had definition and bind stages for all sorts of objects including materials, lights, textures
and texture environments. OpenGL avoided these objects in favor of incremental state changes
with the idea that collective changes could be encapsulated in display lists. This has remained the
philosophy with the exception that texture objects (glBindTexture) with no distinct definition
stage are a key part of the API.
OpenGL has been through a number of revisions which have predominantly been incremental
additions where extensions to the core API have gradually been incorporated into the main body
of the API. For example OpenGL 1.1 added the glBindTexture extension to the core API.
OpenGL 2.0 incorporates the significant addition of the OpenGL Shading Language (also called
GLSL), a C like language with which the transformation and fragment shading stages of the
pipeline can be programmed.
OpenGL 3.0 adds the concept of deprecation: marking certain features as subject to removal in
later versions. GL 3.1 removed most deprecated features, and GL 3.2 created the notion of core
and compatibility OpenGL contexts.
Official versions of OpenGL released to date are 1.0, 1.1, 1.2, 1.2.1, 1.3, 1.4, 1.5, 2.0, 2.1, 3.0,
3.1, 3.2, 3.3, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5.
[Figure: OpenGL implementation architecture. An application issues OpenGL calls that are handled either by a
hardware-specific OpenGL driver or by a generic software OpenGL driver; both sit above the display
management system and the graphics hardware.]
Model-View-Controller Architecture
Interactive program design involves breaking the program into three parts:
The Model: all the data that are unique to the program reside there. This might be game
state, the contents of a text file, or tables in a database.
The View of the Model: it is a way of displaying some or all of the data to user. Objects
in the game state might be drawn in 3D, text in the file might be drawn to the screen with
formatting, or queries on the database might be wrapped in HTML and sent to a web
browser.
The Controller: the methods for user to manipulate the model or the view. Mouse and
keystrokes might change the game state, select text, or fill in and submit a form for a new
query.
Object-oriented programming was developed, in part, to aid with modularising the components
of Model-View-Controller programs. Most user-interface APIs are written in an object-oriented
language.
The Controller is usually a set of functions you write that respond to event signals sent to your
program via the windowing API.
GL Related Libraries
OpenGL Utility Library (GLU): Contains several routines that use lower-level OpenGL
commands to perform tasks such as setting up matrices for specific viewing orientations and
projections, e.g. gluPerspective(...);
OpenGL Utility Toolkit (GLUT): A window system independent toolkit written by Mark
Kilgard to hide the complexities of differing system APIs
OpenGL contains only rendering commands and no commands for opening windows or
reading events from keyboard or mouse
GLUT provides several routines for opening windows, detecting input, and creating complicated
3D objects like a sphere, torus, or teapot.
GLUT routines use the prefix glut e.g. glutCreateWindow(…); glutInit(…);
glutMouseFunc(…..); glutKeyboardFunc(…..);
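For context, a minimal sketch of how these GLUT routines are combined into a complete program is shown below, assuming init(), display() and reshape() callbacks like the ones listed later in these notes; the window title and size are arbitrary choices.

#include <GL/glut.h>

void init(void);                 /* assumed defined elsewhere (see the snippets below) */
void display(void);
void reshape(int w, int h);

int main(int argc, char **argv) {
    glutInit(&argc, argv);                        /* initialize GLUT              */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);  /* double-buffered RGB window   */
    glutInitWindowSize(500, 500);
    glutCreateWindow("example");                  /* arbitrary window title       */
    init();                                       /* application-defined setup    */
    glutDisplayFunc(display);                     /* register the callbacks       */
    glutReshapeFunc(reshape);
    glutMainLoop();                               /* enter the GLUT event loop    */
    return 0;
}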
//Create a rectangle
void display(){
glClearColor(0.0,0.0,0.0,0.0);           /* black background                         */
glShadeModel(GL_FLAT);
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix();                          /* save the current modelview matrix        */
glRotatef(spins,0.0,0.0,1.0);            /* spins: global angle updated elsewhere     */
glColor3f(1.0,1.0,0.8);
glRectf(-25.0,-25.0,25.0,25.0);          /* filled rectangle centered at the origin   */
glPopMatrix();
glutSwapBuffers();                       /* show the back buffer (double buffering)   */
}
void display(void){
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1,1,1);
glFlush();
}
void show(){
/* rasters: a GLubyte array holding the bitmap pattern, defined elsewhere */
glRasterPos2i(20,20);                    /* set the current raster position               */
glBitmap(10,12,0,0,11,0,rasters);        /* draw the 10 x 12 bitmap, advance 11 pixels    */
glBitmap(10,12,0,0,11,0,rasters);
glBitmap(10,12,0,0,11,0,rasters);
glFlush();
}
void reshape(int w, int h){
glViewport(0,0,(GLsizei)w, (GLsizei)h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, w,0 , h, -1.0,1.0);
glMatrixMode(GL_MODELVIEW);
}
void init(){
glClearColor(0.0,0.0,0.0,0.0);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_DEPTH_TEST);
}
void display(){
GLfloat position[]={0.0,0.0,1.5,1.0};    /* light position in homogeneous coordinates     */
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glTranslatef(0.0,0.0,-5.0);              /* move the scene into view                      */
glPushMatrix();
glRotated((GLdouble)spin,1.0,0.0,1.0);   /* spin: global angle set in the mouse callback  */
glLightfv(GL_LIGHT0,GL_POSITION,position);
glTranslated(0.0,0.0,1.5);
glDisable(GL_LIGHTING);
glColor3f(0.0,1.0,1.0);
glutWireCube(0.1);                       /* small cube marks the light position           */
glEnable(GL_LIGHTING);
glPopMatrix();
glutSolidTorus(0.275,0.85,8,15);         /* the lit object                                */
glPopMatrix();
glFlush();
}
void mouse(int button, int state, int x, int y){
switch(button){
case GLUT_LEFT_BUTTON:
if (state == GLUT_DOWN){                 /* test the button state, not the button id */
spin = (spin + 30) % 360;
glutPostRedisplay();                     /* request a redraw with the new angle      */
}
break;
default:
break;
}
}
glLightfv(GL_LIGHT0,GL_POSITION,position):
- Set light-source parameters.
void glLightfv(
GLenum light,
GLenum pname,
const GLfloat *params
);
light
The identifier of a light
GL_LIGHTi where 0 ≤ i < GL_MAX_LIGHTS
pname
A single-valued light-source parameter for light
params
The value to which parameter pname of light source light will be set.
This interface contains basic functions for dynamic, interactive 2D and 3D graphics on a wide
variety of graphics equipment.
The models are stored in a graphical database known as centralized structure store (CSS).
The fundamental entity of data is a structure element and these are grouped together into units
called structures.
The two abstract concepts (input and output) are the building blocks of an abstract
workstation.
A PHIGS workstation represents a unit consisting of zero or one display surfaces and zero or
more input devices such as keyboard, tablet , mouse and light pen.
The workstation presents these devices to the application program as a configuration of abstract
devices, thereby shielding the hardware peculiarities.
MI – supports input of graphical and application data from external storage to the
application.
For every type of workstation present in a PHIGS implementation, there exists a generic
workstation description table which describes the standard capabilities and characteristics of
the workstation.
When the workstation is opened, a new specific workstation description table is created
for that workstation, with information obtained from the device itself and possibly from other
implementation-dependent sources.
The content of the specific workstation description table may change at any time while the
workstation is open.
The application program can inquire which generic capabilities are available before the
workstation is open.
The specific capabilities may be inquired while the workstation is open by first inquiring the
workstation type of an open workstation to obtain the workstation type of the specific
workstation description table, and then using this workstation type as a parameter to the inquiry
functions which query the workstation description table. This information may be used by the
application program to adapt its behavior accordingly
The current state of each open workstation is held in its workstation state list (WSSL). State values of the WSSL
may be set by the "set functions" and may be inquired by the "inquire functions".
2. Structure Entity
A structure element is the fundamental entity of data. Structure elements are used to represent
application specified graphics data for output primitives, attribute selections, modeling
transformations and clipping , invocation of other structures, and to represent application data.
Graphical Output
Pictures generated by PHIGS are built up of basic pieces called output primitives. Output
primitives are generated from structure elements by structure traversal.
FILL AREA – a single polygonal area which may be hollow or filled with a uniform color, pattern,
or hatch style
FILL AREA SET – a set of polygonal areas which may be filled, similar to FILL AREA
For individual specification of aspects, there is a separate attribute for each aspect. These
attributes are workstation independent.
Bundled aspects are selected by a bundle index into a bundle table, each entry of which contains
the non-geometric aspects of a primitive. The non-geometric aspects are workstation dependent in
that each workstation has its own set of bundle tables (stored in the workstation state list); the
values in a particular bundle may be different on different workstations.
Attributes of the first type control the geometric aspects of primitives. These are aspects that
affect the shape or size of the entire primitive (e.g. CHARACTER HEIGHT for TEXT).
Hence they are sometimes referred to as geometric attributes. Geometric attributes are
workstation independent, and if they represent coordinate data they are expressed in modeling
coordinates.
Attributes of the second type control the non-geometric aspects of primitives. These are aspects
that affect a primitive's appearance (for example, line type for POLYLINE, or color index for all
primitives except CELL ARRAY) or the shape or size of the component parts of the primitive (for
example, marker size scale factor for POLYMARKER).
The non-geometric aspects of a primitive may be specified either via a bundle or individually; the
geometric aspects only individually.
PHIGS supports the storage and manipulation of data in the CSS. The CSS contains graphical and
application data organized into units called structures, which may be related to each other
hierarchically to form structure networks.
So a structure may contain invocations of other structures contained in the CSS. The invocation of
a structure is achieved using the execute-structure element; such an invocation is known as a
structure reference. Structures cannot be referenced recursively.
A structure comes into existence in any of the following situations:
- When a reference to a nonexistent structure is inserted into a structure in the CSS
- When the structure is opened for the first time (function OPEN STRUCTURE)
- When the structure is posted for display on a workstation (POST STRUCTURE)
- When the structure is referenced in any function changing the structure identifier
- When a structure not existing in the CSS is retrieved from an archive (RETRIEVE STRUCTURE)
- When a structure not existing in the CSS is emptied (EMPTY STRUCTURE)
To display a network, the structure elements have to be extracted from the CSS and processed;
that process is called the traversal process.
The traversal process interprets each structure element in the structure network sequentially,
starting at the first element of the top of the network.
A traversal state list is associated with each traversal process. Values in this state list may be
accessed when the structure elements are interpreted.
Structure editing
Each element within a structure can be accessed and modified individually with editing
functions, to insert new elements, replace elements with new structure elements, delete structure
elements, navigate within the structure, and inquire structure element content.
When a structure is identified for editing, an element pointer is established which points at the last
element in the structure. Functions for positioning the element pointer:
SET ELEMENT POINTER – set to an absolute position
SET ELEMENT POINTER AT LABEL – set to the position of the specified label structure element
The edit mode, defined by the function SET EDIT MODE, defines whether new elements replace the
element pointed to by the element pointer or are added after the element pointed to by the
element pointer.
DELETE STRUCTURE NETWORK – delete the indicated structure and all its descendants
ELEMENT SEARCH – search within a single structure for an element of a particular element
type
OPEN ARCHIVE FILE, CLOSE ARCHIVE FILE – initiate or terminate access to an archive file
Structures represent parts of a hierarchical model of a scene. Each of these parts has its
own space, represented by a modeling coordinate system.
The relative positioning of the separate parts is achieved by having a single world coordinate
space onto which all the defined modeling coordinate systems are mapped by a composite
modeling transformation during the traversal process.
The world coordinate space can be regarded as a workstation-independent abstract viewing
space.
The workstation-dependent stage then performs a transformation on the geometrical information
contained in output primitives, attributes and logical input values.
World coordinates: used to define a uniform coordinate system for all abstract workstations
View reference coordinates: used to define a view
Normalized projection coordinates: used to facilitate the assembly of different views
Device coordinates: one coordinate system per workstation, representing its display space
Output primitives and attributes are mapped from world coordinates to view reference coordinates
by the view orientation transformation, from view reference coordinates to normalized
projection coordinates by the view mapping transformation, and from normalized projection
coordinates to device coordinates by the workstation transformation. Hidden-line removal
and clipping are also done during the mapping process.
5. Graphical input
An application program gets graphical input from an operator by controlling the activity of one
or more logical input devices.
6. Language Interfaces
For integration into a language, PHIGS is embedded in a language-dependent layer containing
the language conventions, e.g. parameter and name assignment.
In the layer model, each layer may call the functions of the adjoining lower layers. In
general, the application program uses the application-oriented layer, the language-dependent
layer, other application-dependent layers, and operating system resources. There are standards
for the language-dependent layers for the languages FORTRAN, PASCAL and C.
PHIGS itself does not support shading of pictures or modeling with non-uniform rational B-spline
curves and surfaces (NURBS); these capabilities are provided by the PHIGS PLUS extension.
Parameters of the light sources are described, and application programs may set them up.
The predefined light sources are ambient, directional, positional, and spot light sources.
FILL AREA SET3 WITH DATA-creates a 3D fill area set structure element that includes color
and shading data
QUADRILATERAL MESH3 WITH DATA-creates a 3D quadrilateral mesh primitive with color and
shading data
TRIANGLE STRIP3 WITH DATA-creates a 3D triangle strip primitive with color and shading
data
NON UNIFORM B-SPLINE CURVE (NURBS) – creates a structure element containing the definition of
a non-uniform B-Spline curve
Significance of PHIGS
Portability of programs
PHIGS is a computer- and device-independent graphics system. Application programs
that utilize PHIGS can easily be transported between host processors and graphics
devices.
PHIGS manages the storage and display of 2D and 3D graphical data, and creates and
maintains a hierarchical database.
Applications using PHIGS have well-defined inputs and outputs, which minimizes errors.
GKS
The main objective of the Graphical Kernel System, GKS, is the production and manipulation of pictures in
a computer or graphical device independent way.
2. A formalization of the expository material by way of abstracting the ideas into discrete functional
descriptions. These functional descriptions contain such information as descriptions of input and output
parameters, precise descriptions of the effect each function should have, references into the expository
material, and a description of error conditions. The functional descriptions in this section are language
independent.
3. Language bindings. These bindings are an implementation of these abstract functions in a specific
computer language such as Fortran or Ada or C.
In GKS, pictures are considered to be constructed from a number of basic building blocks. These basic
building blocks are of a number of types each of which can be used to describe a different component of a
picture.
The five main primitives in GKS are:
Polyline: which draws a sequence of connected line segments.
Polymarker: which marks a sequence of points with the same symbol.
Fill area: which displays a specified area.
Text: which draws a string of characters.
Cell array: which displays an image composed of a variety of colours or grey scales.
Associated with each primitive is a set of parameters which is used to define particular instances of that
primitive.
For example, the parameters of the Text primitive are the string or characters to be drawn and the starting
position of that string. Thus:
TEXT(X, Y, 'ABC') will draw the characters ABC at the position (X, Y).
Although the parameters enable the form of the primitives to be specified, additional data are necessary to
describe the actual appearance (or aspects) of the primitives.
For example, GKS needs to know the height of a character string and the angle at which it is to be drawn.
These additional data are known as attributes.
GKS standardizes a reasonably complete set of functions for displaying 2D images. It contains functions for
drawing lines, markers, filled areas, text, and a function for representing raster like images in a device-
independent manner.
In addition to these basic functions, GKS contains many functions for changing the appearance of the output
primitives, such as changing colors, changing line thicknesses, changing marker types and sizes etc. The
functions for changing the appearance of the fundamental drawing functions (output primitives) are called
attribute setting functions.
Polylines
The GKS function for drawing line segments is called Polyline. The polyline function takes an array of X-Y
coordinates and draws line segments connecting them. The attributes that control the appearance of a polyline
are:
Linetype, which controls whether the polyline is drawn as a solid, dashed, dotted, or dash-dotted line.
Linewidth scale factor, which controls how thick the line is.
Polyline color index, which controls what color the line is.
The main line drawing primitive of GKS is the polyline which is generated by calling the function:
POLYLINE(N, XPTS, YPTS)
where XPTS and YPTS are arrays giving the N points (XPTS(1), YPTS(1)) to (XPTS(N), YPTS(N)). The
polyline generated consists of N - 1 line segments joining adjacent points starting with the first point and ending
with the last.
Polymarkers
The GKS polymarker function allows drawing marker symbols centered at coordinate points specified.
The attributes that control the appearance of polymarkers are:
Marker, which specifies one of five standardized symmetric characters to be used for the marker.
The five characters are dot, plus, asterisk, circle, and cross.
Marker size scale factor, which controls how large each marker is (except for the dot marker).
Polymarker color index, which specifies what color the marker is.
GKS provides the polymarker primitive, which marks a set of points instead of drawing lines through them.
Text
The GKS text function allows drawing a text string at a specified coordinate position.
The attributes that control the appearance of text are:
Text font and precision, which specifies what text font should be used for the characters and how precisely
their representation should adhere to the settings of the other text attributes.
Character expansion factor, which controls the height-to-width ratio of each plotted character.
Character spacing, which specifies how much additional white space should be inserted between characters
in a string.
Text color index, which specifies what color the text string should be.
Character height, which specifies how large the characters should be.
Character up vector, which specifies at what angle the text should be drawn.
Text path, which specifies in what direction the text should be written (right, left, up, or down).
Text alignment, which specifies vertical and horizontal centering options for the text string.
TEXT(X, Y, STRING) where (X, Y) is the text position and STRING is a string of characters.
The character height attribute determines the height of the characters in the string. Since a character in a font
will have a designed aspect ratio, the character height also determines the character width. The character height
is set by the function:
SET CHARACTER HEIGHT(H)
where H is the character height.
The character up vector is perhaps the most important text attribute. Its main purpose is to determine the
orientation of the characters. However, it also sets a reference direction which is used in the determination of
text path and text alignment. The character up vector specifies the up direction of the individual characters, and
in doing so it also specifies the orientation of the character string: by default, the characters are placed along the
line perpendicular to the character up vector. The up direction is set by the function:
SET CHARACTER UP VECTOR(X, Y)
Fill Area
The GKS fill area function allows specifying a polygonal shape of an area to be filled with various interior
styles.
The attributes that control the appearance of fill areas are:
Fill area interior style, which specifies how the polygonal area should be filled: with solid colors or various
hatch patterns, or with nothing, that is, a line is drawn to connect the points of the polygon, so to get only a
border.
Fill area style index. If the fill area style is hatch, this index specifies which hatch pattern is to be used:
horizontal lines; vertical lines; left slant lines; right slant lines; horizontal and vertical lines; or left slant and
right slant lines.
Fill area color index, which specifies the color of the fill patterns or solid areas.
At the same time, there are now many devices which have the concept of an area which may be filled in some
way. These vary from intelligent pen plotters which can cross-hatch an area to raster displays which can
completely fill an area with a single colour or in some cases fill an area by repeating a pattern.
GKS provides a fill area function to satisfy the application needs, which can use the varying device capabilities.
Defining an area is a fairly simple extension of defining a polyline. An array of points is specified which defines
the boundary of the area. If the area is not closed (i.e. the first point is not the same as the last point), the
boundary is the polyline defined by the points, extended to join the last point to the first point. A fill area
may be generated by invoking the function:
FILL AREA(N, XPTS, YPTS)
where XPTS and YPTS are arrays giving the N boundary points.
Cell Array
The GKS cell array function displays raster like images in a device-independent manner.
The cell array function takes the two corner points of a specified rectangle, a number of divisions (M) in the X
direction and a number of divisions (N) in the Y direction. It then partitions the rectangle into M x N
subrectangles called cells. Each cell is assigned a color, and the final cell array is created by coloring each
individual cell with its assigned color.