Computer Graphics Chapter 1
Unit 1: Introduction
By
Gaurav Bhattarai
Introduction
• Computer graphics started in the early 1960s with the work of Ivan Sutherland (MIT).
• Computer graphics is a field that is concerned with all aspects of producing
pictures or images using a computer.
• It includes the creation, storage, and manipulation of images of objects.
• It is related to the generation and the representation of graphics by a computer
using specialized graphic hardware and software.
• The graphics can be photographs, drawings, movies, or simulations etc.
• It can also be described in terms of its main components:
Data structures:
• Data structures are used to store object attributes such as coordinate values, color, depth, etc.
• Data structures that are particularly suitable for computer graphics include the octree, the quadtree, meshes, etc.
Graphics algorithms:
• Methods and procedures for picture generation and transformation.
• Graphics algorithms include scan-line algorithms (for drawing points, lines, circles, ellipses, etc.), clipping algorithms, fill algorithms, hidden-surface removal algorithms, etc.
Graphics Languages:
• Higher-level languages and graphics libraries/APIs for generating graphics objects and pictures, such as C, C++, Java, DirectX, QuickDraw, Display PDF and OpenGL.
Software tools:
• Operating system
• Compiler
• Editor
• Debuggers
• Graphics libraries:
• Functions/routines to draw lines, circles, etc.
• OpenGL.
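To make the idea of a graphics-library routine concrete, here is a minimal sketch in the classic Turbo C/BGI style that the later examples in these notes (putpixel, drawpoly) also use; the driver path "C:\\TC\\BGI" and the exact setup calls are assumptions, not part of the notes.

#include <graphics.h>   /* Turbo C / BGI graphics library */
#include <conio.h>

int main(void)
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "C:\\TC\\BGI");   /* initialize the graphics system (path is an assumption) */

    line(100, 100, 300, 200);             /* library routine: draw a line   */
    circle(200, 150, 50);                 /* library routine: draw a circle */

    getch();                              /* wait for a key press           */
    closegraph();                         /* shut down the graphics system  */
    return 0;
}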
Graphics System
There are five major elements in graphics system:
1. Input devices
2. Processor (display controller)
3. Memory
4. Frame buffer
5. Output devices
The image definition is stored inside the frame buffer. The display controller passes the contents of the frame buffer to the monitor.
Types of Computer Graphics
Bit Mapped or Raster Graphics
• Bit mapped graphics are graphics that are stored in the form of a bitmap in the
frame buffer.
• They are a sequence of bits that is drawn onto the screen.
• We can create bit-mapped graphics using a painting program, but we cannot edit the individual components of a bitmapped image once we have drawn them.
• Enlarging a raster image reduces the clarity of the image.
Classification of Computer Graphics
On the basis of Interaction
1. Interactive Computer Graphics
• In interactive computer graphics, the user has some control over the picture, i.e. the user can make changes to the produced image. One example is the ping-pong game.
• Computer graphics today is largely interactive, i.e. the user controls the contents,
structure, and appearance of images of the objects by using input devices, such as
keyboard, mouse, or touch-sensitive panel on the screen.
On the basis of type of Objects
1. Two-Dimensional Computer Graphics (2D)
• Generation of 2D objects such as lines, triangles, circles, rectangles, gray-scale
images, etc.
Application of Computer Graphics
Computer Aided Design (CAD)
• One of the major uses of computer graphics is in design processes.
• CG is used to design components and systems of mechanical, electrical,
electrochemical, and electronic devices, including structures such as buildings,
automobile bodies, airplane and ship hulls, very large scale integrated (VLSI) chips,
optical systems and telephone and computer networks.
• These designs are frequently used to test the structural, electrical, and thermal properties of the systems.
• Architects use computer graphics to lay out floor plans that show the positioning of rooms, doors, windows, stairs, shelves and other building features. Electrical designers then try out arrangements for wiring, electrical outlets and other systems to determine space utilization in a building.
Presentation Graphics
• Another major application area of computer graphics is the Presentation Graphics.
• Presentation Graphics is commonly used to summarize financial, statistical,
mathematical, scientific and economic data for research reports, managerial
reports and other types of reports.
• Typical examples are bar charts, line graphs, surface graphs, pie charts and other
displays showing relationship between multiple variables.
• 3D graphics are often used simply for effect; they can provide a more diagrammatic or more attractive presentation of data relationships.
Entertainment
• Computer graphics methods are now commonly used in making motion pictures,
music videos and TV shows.
• Disney movies such as The Lion King and Beauty and the Beast, and other movies like Jurassic Park, The Lost World, etc. are among the best examples of the application of computer graphics in the field of entertainment.
• Computer graphics is also used in gaming applications such as Prince of Persia, Far Cry, etc.
Image Processing
• Images can be created using a simple paint program or can be fed into a computer by scanning. These pictures/images often need to be changed to improve their quality.
• For image/pattern recognition systems, images need to be converted into a specified format so that the system can recognize the meaning of the picture.
• Currently, computer graphics is widely used for image processing.
Visualization
• Visualization is the process of visually representing data. Graphical computer systems are used to visualize large amounts of information.
• Generating computer graphics for scientific, engineering, and medical data sets is termed scientific visualization, whereas business visualization deals with non-scientific data sets such as those obtained in economics.
• Some methods generate very large amounts of data, and analyzing the properties of the whole data set directly is very difficult. Visualization simplifies the analysis of data by using graphical representations, and it makes it easier to understand the trends and patterns inherent in huge data sets.
Office Automation and Electronic Publishing
• Computer graphics has facilitated office automation and electronic publishing, also popularly known as desktop publishing, giving organizations more power to print meaningful materials in-house.
• Office automation and electronic publishing can produce both traditional printed
(Hardcopy) documents and electronic (softcopy) documents that contain text, tables,
graphs, and other forms of drawn or scanned-in graphics.
Cartography
• Cartography is a subject, which deals with the making of maps and charts.
• Computer graphics is used to produce both accurate and schematic representations
of geographical and other natural phenomena from measurement data.
• Examples include geographic maps, oceanographic charts, weather maps, contour maps and population-density maps. Surfer is one such graphics package, which is extensively used for cartography.
Computer Graphics vs Image Processing
Computer graphics synthesizes pictures from mathematical models of objects, whereas image processing starts with existing pictures and applies techniques to modify, enhance or analyze them.
Scan Conversion
• It is the process of representing graphics objects as a collection of pixels. The graphics objects are continuous, whereas the pixels used are discrete. Each pixel can be in either an on or an off state.
• The circuitry of the video display device of the computer is capable of converting binary values (0, 1) into pixel-off and pixel-on information: 0 is represented by pixel off and 1 by pixel on. Using this ability, a graphics computer represents pictures as a set of discrete dots.
• Any model of graphics can be reproduced with a dense enough matrix of dots or points. Most human beings think of graphics objects as points, lines, circles and ellipses. Many algorithms have been developed for generating such graphical objects.
Input Devices
Mouse, Touch Screen, Light Pen, Data Glove, Tablet (Digitizer), Bar Code Reader
Input devices are used to feed data or information into a computer system. They provide the input to the computer upon which reactions or outputs are generated. Data-input devices like keyboards are used to provide additional data to the computer, whereas pointing and selection devices like the mouse, light pens and touch panels are used to provide visual and positional input to the application.
Keyboard
A keyboard is a primary serial input device. For each key press, a keyboard sends a specific code (American Standard Code for Information Interchange, ASCII) to the computer. Keyboards can also encode combinations of key presses and special keys such as function keys. A keyboard is useful for expert users in giving exact commands to the computer. Cursor-control keys and function keys allow common operations to be performed in a single keystroke.
Optical Mouse
An optical mouse uses an optical sensor to detect the movement across a special
mouse pad that has a grid of horizontal and vertical lines. The optical sensor detects
movement across the lines in the grid.
Z-Mouse
A Z-mouse includes three buttons, a thumb wheel on the side, a trackball on the top, and a standard mouse ball underneath. This design provides six degrees of freedom for selecting an object from its spatial position. With it we can pick up an object, rotate it, and move it in any direction. It is used in virtual-reality and CAD systems.
Trackball and Space ball
A trackball is a ball that can be rotated with the fingers or palm of the hand to
produce screen cursor movement.
• It is a two dimensional positioning device.
• Potentiometers attached to the ball measure the amount and direction of rotation. Trackballs are often mounted on keyboards.
A spaceball is a three-dimensional positioning and pointing device that provides six degrees of freedom.
• Strain gauges are used to measure the amount of pressure applied to the space ball
to provide the input.
• It is used in virtual-reality systems, modeling, animation, CAD, and other
applications.
Data glove
A data glove is constructed with a series of sensors that can detect hand and finger motions. The transmitting and receiving antennas can be structured as a set of three mutually perpendicular coils, forming a three-dimensional Cartesian coordinate system. Electromagnetic coupling between the three pairs of coils is used to provide information about the position and orientation of the hand.
Digitizers
A common device for drawing, painting, or interactively selecting coordinate positions on an object is the digitizer. It is used to scan over a drawing or object and to input a set of discrete coordinate positions. It can be used for both 2D and 3D graphics. A graphics tablet is an example.
Light Pens
Light pens are pencil-shaped devices used to select screen positions by detecting the light coming from points on the CRT screen. They are sensitive to the short burst of light emitted from the phosphor coating at the instant the electron beam strikes a particular point. Other light sources, such as background light in the room, are usually not detected by a light pen.
Joystick
A joystick is a device that moves in all directions and controls the movement of the
screen cursor. It consists of a small, vertical lever mounted on a base that is used to
steer the screen cursor. Joysticks are mainly used for computer games, controlling
industrial robots, and for other applications such as flight simulators, training
simulators etc.
Bar Code Reader
A bar code is a machine-readable code in the form of a pattern of parallel vertical lines. Bar codes are commonly used for labeling goods in supermarkets, numbering books in libraries, etc. These codes are sensed and read by a photoelectric device called a bar code reader, which reads the code by means of reflected light. The information read by the bar code reader is fed into the computer, which recognizes the information from the thickness and spacing of the bars.
Touch-Panel
A touch panel is a touch-sensitive surface that is used for pointing directly at the screen. The panel can be touched by a finger or another object such as a stylus. Transparent touch panels are integrated with computer monitors for the manipulation of the displayed information. A basic touch panel senses the voltage drop when a user touches the panel; from the location where the voltage dropped it calculates the touch position.
Output Devices
Cathode Ray Tube
The primary output device in a graphical system is the video monitor. The main
element of a video monitor is the Cathode Ray Tube (CRT), shown in the following
illustration. The cathode ray tube (CRT) is an evacuated tube containing one or more
electron guns (a source of electron) and a phosphor coated screen used to view
images.
Main Components of CRT
An electron gun
• The primary components of an electron gun in a CRT are the heated metal cathode
and a control grid.
• Heated metal cathode: Heat is supplied to the cathode by passing a current through a coil of wire, called the filament, inside the cylindrical cathode structure.
• Control grid: The intensity of the electron beam is controlled by setting the voltage levels on the control grid, which is a metal cylinder that fits over the cathode.
Deflection System
• It is used to control the vertical and horizontal scanning of the electron beam.
Working of CRT
• The electron gun emits a beam of electrons (cathode rays).
• The electron beam passes through focusing and deflection systems that direct it
towards specified positions on the phosphor-coated screen.
• When the beam hits the screen, the phosphor emits a small spot of light at each
position contacted by the electron beam. The glowing positions are used to represent
the picture in the screen.
• The amount of light emitted by the phosphor coating depends on the number of
electrons striking the screen. The brightness of the display is controlled by varying the
voltage on the control grid.
• The light emitted by the phosphor decays very rapidly with time, so the picture must be redrawn repeatedly by quickly directing the electron beam back over the same screen points (refreshing).
Properties of CRT
1. Persistence
• Persistence is one of the major properties of the phosphor used in CRTs.
• It means how long phosphors continue to emit light after the electron beam is
removed.
• Persistence is defined as the time it takes the emitted light from the screen to decay to
one-tenth of its original intensity.
• Lower persistence phosphors require higher refresh rates to maintain a picture on the
screen. A phosphor with lower persistence is useful for animation and a higher–
persistence phosphor is useful for displaying highly complex static picture.
• Graphics monitors are usually constructed with a persistence in the range of 10 to 60 microseconds.
2. Resolution
• The maximum number of points (pixel) that can be displayed without overlap on a
CRT is referred to as the resolution.
• It is usually denoted as width × height, with the units in pixels.
• It is also defined as maximum number of points displayed horizontally and
vertically without overlap on a display screen.
• It represents number of dots per inch (dpi/pixel per inch) that can be plotted
horizontally and vertically.
• Resolution of 1280 x 720 means that there are 1280 columns and 720 rows.
• Resolution of 1024 x 768 means the width is 1024 pixels and the height is 768
pixels.
3. Aspect Ratio
• Another property of video monitors is the aspect ratio.
• This number gives the ratio of the width to the height of an image or screen, i.e., the ratio of the number of horizontal pixels to the number of vertical pixel lines.
• Aspect ratio = Width / Height.
4. Refresh Rate
• Light emitted by a phosphor fades very rapidly, so to keep the drawn picture glowing constantly, the picture must be redrawn repeatedly by quickly directing the electron beam back over the same points. This process is called the refresh operation.
• The number of times per second the image is redrawn to give a feeling of non-flickering pictures is called the refresh rate. Refresh rates are described in units of cycles per second, or Hertz (Hz), where a cycle corresponds to one frame.
• If the refresh rate decreases, flicker develops.
• The refresh rate above which flickering stops and the image appears steady is called the critical fusion frequency (CFF).
Types of refresh CRT
1. Raster-Scan Displays
• The most common type of graphics monitor employing a CRT is the raster-scan
display.
• In raster scan approach, the viewing screen is divided into a large number of
discrete phosphor picture elements, called pixels.
• A row of pixels is called a scan line. The matrix of pixels, or collection of scan lines, constitutes the raster.
• As the electron beam moves across each row, the beam intensity is turned on and off to create a pattern of illuminated spots.
• Picture definition is stored in a memory area called the frame buffer or refresh buffer. The frame buffer holds the intensity values for all screen points.
• In monochromatic CRT’s (i.e., black-and-white system) with one bit per pixel, the
frame buffer is commonly called a bitmap. For systems with multiple bits per pixel,
the frame buffer is often referred to as a pixmap.
• Stored intensity values are then retrieved by the display processor from the frame
buffer and “painted” on the screen, pixel-by-pixel, one row (scan line) at a time.
Fig: Raster scan display system
Two types of Raster-scan systems:
a. Interlaced Raster-Scan System
• On some raster-scan systems (and in TV sets), each frame is displayed in two passes
using an interlaced refresh procedure.
• In interlaced scan, each frame is displayed in two passes. First pass for even scan
lines and then after the vertical re-trace, the beam sweeps out the remaining scan
lines i.e., odd scan lines (shown in fig). Interlacing is primarily used with slower
refreshing rates. This is an effective technique for avoiding screen flickering.
• Here, at first, all points on the even-numbered (solid) scan lines are displayed; and
then all points along the odd-numbered (dashed) lines are displayed.
b. Non-interlaced Raster-Scan System or Progressive Scanning
• Follows non-interlaced refresh procedure.
• In non-interlaced refresh procedure, electron beam sweeps over entire scan lines
in a frame from top to bottom in one pass.
• That is, content displays both the even and odd scan lines (the entire video frame)
at the same time.
Fig: A raster-scan system displays an object as a set of discrete points across each
scan line.
• Refreshing on raster scan display is carried out at the rate of 60 to 80 frames per
second. Sometimes, refresh rates are described in units of cycles per second, or
Hertz (Hz), where a cycle corresponds to one frame.
• Examples: Monitors, Home television, printers.
Application
• For the realistic display of scenes containing subtle shading and color patterns.
2. Random-Scan
• Also called Calligraphic displays and Vector display system
• In a random-scan display unit, the electron beam is directed only to the parts of the screen where a picture is to be drawn.
• Random-scan monitors draw a picture one line at a time and for this reason are
also referred to as vector displays (or stroke-writing or calligraphic displays).
• Random scan system uses an electron beam which operates like a pencil to create
a line image on the CRT.
• Picture definition is stored as a set of line drawing instructions in an area of
memory called the refresh display file (Display list or display file).
• To display a picture, the system cycles through the set of commands (line drawing)
in the display file. After all commands have been processed, the system cycles back
to the first line command in the list.
• The component line can be drawn or refreshed by a random scan display system in
any specified order (shown in figure).
Fig: A random-scan system draws the component lines of an object in any order
specified.
• The refresh rate of vector display depends upon the no. of lines to be displayed for
any image.
• Random-scan systems are designed for line-drawing applications and cannot display realistic shaded scenes. Since the CRT beam directly follows the line path, a vector display system produces smooth lines.
• Generally, refreshing on a random-scan display is carried out at the rate of 30 to 60 frames per second.
• Random-scan systems are used in line-drawing applications.
• Vector displays are generally used to produce graphics with higher resolution.
Architecture of Random-Scan (Vector) System
• A vector display system consists of an additional processing unit along with the CPU, called the display processor or graphics controller.
• Picture definition is stored as a set of line drawing commands in the memory,
called a display list. A display list (or display file) contains a series of graphics
commands that define an output image.
Difference between Random Scan and Raster Scan
• Resolution: The resolution of random scan is higher than that of raster scan.
• Editing: In random scan, any alteration is easy; in raster scan, alteration is not so easy.
• Interlacing: Interlacing is not used in random scan, while raster scan uses interlacing.
• Rendering: Random scan uses mathematical functions for image/picture rendering and is suitable for applications requiring polygon drawings; raster scan uses pixels and is suitable for creating realistic scenes.
• Electron beam: In random scan, the beam is directed only to the part of the screen where the picture is to be drawn, one line at a time; in raster scan, the beam is directed over the whole screen, from top to bottom, one row at a time.
• Picture definition: Random scan stores the picture definition as a set of line commands in the refresh (display) file; raster scan stores it as a set of pixel intensity values in the frame buffer.
• Refresh rate: In random scan, the refresh rate depends on the number of lines to be displayed (about 30 to 60 times per second); in raster scan, it is 60 to 80 frames per second and is independent of picture complexity.
• Solid patterns: In random scan, solid patterns are tough to fill; in raster scan, they are easy to fill.
1. A system with 24 bits per pixel and resolution of 1024 by 1024. Calculate the size of
frame buffer (in Megabytes).
Solution:
Frame size in bits = 24 × 1024 × 1024 bits
Frame size in bytes = 24 × 1024 × 1024 / 8 bytes (since 8 bits = 1 byte)
Frame size in kilobytes = 24 × 1024 × 1024 / (8 × 1024) KB (since 1024 bytes = 1 KB)
So, frame size in megabytes = 24 × 1024 × 1024 / (8 × 1024 × 1024) MB (since 1024 KB = 1 MB)
= 3 MB.
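As a quick cross-check of the arithmetic above, here is a small C program (not part of the notes; the resolution and bit depth are the values from this problem) that computes the frame-buffer size:

#include <stdio.h>

int main(void)
{
    long width = 1024, height = 1024;                 /* resolution          */
    int  bpp   = 24;                                  /* bits per pixel      */

    double bits = (double)width * height * bpp;
    double mb   = bits / (8.0 * 1024.0 * 1024.0);     /* bits -> bytes -> MB */

    printf("Frame buffer size = %.2f MB\n", mb);      /* prints 3.00 MB      */
    return 0;
}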
2. How many kilobytes does a frame buffer need for a 600 x 400 pixel display?
Solution:
Resolution is 600 x 400.
Suppose, n bits are required to store 1 pixel.
Then, the size of frame buffer = Resolution * bits per pixel
= (600 * 400) * n bits
= 240000 n bits
= (240000 n)/(1024 × 8) KB (as 1 KB = 1024 bytes and 1 byte = 8 bits)
= 29.30 n kilobytes.
3. Find out the aspect ratio of the raster system using 8 x 10 inches screen and 100
pixel/inch.
Solution:
We know that,
Aspect ratio = Width / Height
= (8 x 100) / (10 x 100)
=4/5
So, aspect ratio = 4 : 5.
4. Consider a RGB raster system is to be designed using 8 inch by 10 inch screen with
a resolution of 100 pixels per inch in each direction. If we want to store 8 bits per
pixel in the frame buffer, how much storage do we need for the frame buffer?
Solution:
Size of screen = 8 inch × 10 inch
Pixel per inch (Resolution) = 100
Total no. of pixel = (8*100)*(10*100) = 800000 pixels
Per pixel storage = 8 bit
Total storage required in frame buffer = 800000*8 bits = 6400000 bits
= 6400000/8 bytes
= 800000 bytes
5. Consider raster system with resolution 1280 ×1024.
a) How much pixel could be accessed per second by the video controller that
refreshes the screen at the rate of 60 frames / second?
b) What is the access time per pixel?
Solution:
a) No. of pixel accessed per second = 1280*1024*60 = 78643200 pixels
b) Since 78643200 pixels are accessed in 1 second
Access time per pixel = 1/78643200
= 12.7 nanoseconds.
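The same calculation can be scripted; the following sketch (not from the notes) reproduces the numbers of this problem for any resolution and refresh rate:

#include <stdio.h>

int main(void)
{
    long width = 1280, height = 1024;                /* resolution            */
    int  refresh_rate = 60;                          /* frames per second     */

    double pixels_per_sec = (double)width * height * refresh_rate;
    double access_time_ns = 1e9 / pixels_per_sec;    /* nanoseconds per pixel */

    printf("Pixels accessed per second: %.0f\n", pixels_per_sec);  /* 78643200   */
    printf("Access time per pixel: %.1f ns\n", access_time_ns);    /* about 12.7 */
    return 0;
}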
6. Consider a raster scan system having 12 inch by 12 inch with a resolution of 100
pixels per inch in each direction. If display controller of this system refresh the screen
at the rate of 50 frames per second, how many pixels could be accessed per second
and what is the access time per pixel of the system.
Solution:
Size of screen = 12 inch × 12 inch
Resolution = 100 pixels per inch
Total no. of pixels in one frame = (12*100)*(12*100)
Refresh rate = 50 frames per second
i.e. 50 frames can be accessed in 1 sec.
Total no. of pixel accessed in 1 sec = 50*(12*100)*(12*100) = 72000000 pixels
Again,
50 frames can be accessed in 1 sec.
1 frame can be accessed in 1/50 sec,
i.e. (12*100*12*100) pixels can be accessed in 1/50 sec.
Then, 1 pixel can be accessed in 1/(50*12*100*12*100) sec
= 10^9/(50*12*100*12*100) ns
= 13.88 ns.
7. What is the time required to display a pixel on the monitor of size 1024 × 768 with
refresh rate of 60 Hz?
Solution:
Refresh rate = 60 Hz, i.e. 60 frames per second
Total no. of pixels in one frame = 1024*768 = 786432 pixels
Time required to display one pixel = 1/(60 × 786432) sec ≈ 21.2 ns.
8. How much time is spent scanning across each row of pixels during screen refresh
on a raster system with resolution 1280 ×1024 and refresh rate of 60 frames per
second?
Solution:
Resolution = 1280 × 1024
i.e. one frame contains 1024 scan line and each scan line consists of 1280 pixels.
Refresh rate = 60 frames per second
i.e. 60 frames take 1 second.
1 frame takes 1/60 second,
i.e. 1024 scan lines take 1/60 second, i.e. 0.0166 sec.
1 scan line takes 0.0166/1024 ≈ 1.63 × 10^-5 sec ≈ 16.3 microseconds.
9. Calculate the total memory required to store a 10 minute video in a SVGA system
with 24 bit true color and 25 fps.
Solution:
The SVGA system allows resolution = 800 × 600
Refresh rate = 25 fps, i.e. 25 frames are displayed every second
Length of video = 10 minutes = 10 × 60 = 600 seconds
Total number of frames = 25 × 600 = 15000 frames
Memory per frame = 800 × 600 × 24 bits = 11520000 bits
Total memory required = 15000 × 11520000 bits = 172800000000 bits
= 172800000000 / (8 × 1024 × 1024) MB
≈ 20599 MB ≈ 20.1 GB
10. If a pixel is accessed from the frame buffer with an average access time of 300 ns, will this rate produce a flicker-free display for a screen size of 640 × 480?
Solution:
Size of screen = 640 × 480
Total no. of pixels = 640*480=307200
Average access time of one pixel = 300 ns
Total time required to access entire pixels of image in the screen = 307200*300
= 92160000 ns
= 92160000/109 sec
= 0.09216 sec
i.e. one refresh cycle takes 0.09216 sec, giving a refresh rate of about 1/0.09216 ≈ 10.85 frames per second.
Since the minimum refresh rate for a flicker-free image is about 60 frames per second, the monitor will produce a flickering effect.
Architecture of Raster- Scan System
• Raster graphics systems typically consist of several processing units. The CPU is the main processing unit of the computer system.
• Besides the CPU, the graphics system contains a special-purpose processor called the video controller or display controller. It is used to control the operation of the display device.
• In addition, to the video controller, raster scan systems can have other processors
as co-processors which are called graphic controller or display processors or
graphics processor unit (GPU).
• The image is generated by the CPU/GPU and written (loaded) into the frame buffer. The image in the frame buffer is read out by the video controller and displayed on the screen.
Fig: Raster scan display system architecture
• Here, frame buffer may be anywhere in system memory, the video controller
accesses the frame buffer via the system bus.
Fig: Common raster display system architecture
• A common architecture in which a fixed area of the system memory is reserved for
the frame buffer, and the video controller is given access to the frame buffer
memory.
• The video controller cycles through the frame buffer, one scan line at a time. The
content of frame buffer is used to control the CRT beam's intensity or color.
Working Mechanism of Video Controller
a. Simple Raster Display System (all graphics processing is done by the CPU)
Disadvantages
• Slow, since graphics functions such as retrieving screen coordinates and scan conversion (digitizing a picture definition into a set of pixel intensities) are performed by the CPU.
• In this type of architecture, CPU digitizes a picture definition given in an application
program into a set of pixel intensity values for storage in the frame buffer. This
digitization process is called scan conversion.
b. Raster Display System with Peripheral Display Processor:
• The raster scan with a peripheral display processor is a common architecture that
avoids the disadvantage of simple raster scan system.
• Raster system architecture with a peripheral display processor is shown in figure
below.
➢ Depending on the pattern of the phosphor coating, two types of raster-scan color CRT are commonly used with the shadow-mask method: the delta-delta arrangement and the inline arrangement.
Fig: Operation of a delta–delta, shadow-mask CRT. Three electron guns, aligned with
the triangular color-dot patterns on the screen, are directed to each dot triangle by a
shadow mask.
Inline Color CRT
• This CRT uses a stripe pattern instead of the delta pattern.
• Three strips, one for each of the R, G and B colors, are used for a single pixel position along a scan line, hence the name inline.
• Three beams simultaneously expose three inline phosphor dots along a scan line.
Direct-View Storage Tubes (DVST)
• This is an alternative method for maintaining a screen image.
• It stores the picture information inside the CRT instead of refreshing the screen.
• A direct-view storage tube (DVST) stores the picture information as a charge
distribution just behind the phosphor-coated screen.
• Two electron guns are used in a DVST. One, the primary gun, is used to store the
picture pattern; the second, the flood gun, maintains the picture display.
Pros:
• Since no refreshing is needed, complex pictures can be displayed in high-resolution
without flicker.
Cons:
• Ordinarily do not display color
Flat Panel Displays
• Flat-panel display refers to a class of video devices that have reduced volume, weight and power requirements compared to a CRT.
• These emerging display technologies tend to replace CRT monitors. Current uses of flat-panel displays include TV monitors, calculators, pocket video games, laptops, airline displays, advertisement boards, etc.
• Two categories of flat-panel displays:
1. Emissive displays:
➢ Convert electrical energy into light.
➢ Example: Plasma panels, electroluminescent displays and light-emitting
diodes.
2. Non-emissive displays:
➢ Use optical effects to convert sunlight or light from other sources into graphics
patterns.
➢ Example: liquid-crystal displays.
Plasma Panels
• Also called gas-discharge displays. Plasma panels are constructed by filling the region between two glass plates with a mixture of gases, usually including neon.
• A series of vertical conducting ribbons is placed on one glass panel, and a set of
horizontal ribbons is built into other glass panel.
• Firing voltage is applied to a pair of horizontal and vertical conductors. It causes the
gas at the intersection of the two conductors to break down into glowing pattern.
• Picture definition is stored in the refresh buffer.
Liquid Crystal Displays (LCD)
• Liquid-crystal material is placed between two glass plates, and the intersection of two conductors defines a pixel position. Picture definition is stored in the refresh buffer, and the screen is refreshed at the rate of 60 frames per second. Liquid-crystal displays (LCDs) are commonly used in small systems, such as calculators and portable laptop computers.
Graphics Software
There are two general categories of graphics software
1. General programming packages:
• Provides extensive set of graphics functions for high level languages
(FORTRAN, C etc).
• Basic functions include those for generating picture components (straight
lines, polygons, circles, and other figures), setting color and intensity values,
selecting views, and applying transformations.
• Example: GL(Graphics Library)
2. Special-purpose application packages:
• Designed for nonprogrammers, so that users can generate displays without
worrying about how graphics operations work.
• The interface to the graphics routines in such packages allows users to
communicate with the programs in their own terms.
• Example: artist's painting programs and various business, medical, and CAD
systems.
Software standards
The primary goal of standardized graphics software is portability. When packages are designed with standard graphics functions, software can be moved easily from one hardware system to another and used in different implementations and applications.
International and national standards planning organizations in many countries have
cooperated in an effort to develop a generally accepted standard for computer
graphics. After considerable effort, this work led to following standards:
1. GKS (Graphical Kernel System):
This system was adopted as the first graphics software standard by the
International Standards Organization (ISO) and American National Standards
Institute (ANSI). Although GKS was originally designed as a two-dimensional
graphics package, a three dimensional GKS extension was subsequently
developed.
2. PHIGS (Programmer’s Hierarchical Interactive Graphics Standard):
Extension to GKS, Increased Capabilities for object modeling, color specifications,
surface rendering and picture manipulations are provided. Subsequently, an
extension of PHIGS, called PHIGS+, was developed to provide three-dimensional
surface-shading capabilities not available in PHIGS.
Although PHIGS presents a specification for basic graphics functions, it does not
provide a standard methodology for a graphics interface to output devices (i.e. still
machine dependent). Nor does it specify methods for storing and transmitting
pictures. Separate standards have been developed for these areas:
• CGI (Computer Graphics interface): Standardization for device interface
• CGM (Computer Graphics Metafile): Standards for archiving and transporting
pictures
Coordinate Representations
Normally, graphics packages require coordinate specifications to be given with respect
to Cartesian reference frames. Each object for a scene can be defined in a separate
modeling Cartesian coordinate system, which is then mapped to world coordinates to
construct the scene. From world coordinates, objects are transferred to normalized
device coordinates, then to the final display device coordinates. The transformations
from modeling coordinates to normalized device coordinate are independent of
particular devices that might be used in an application. Device drivers are then used to
convert normalized coordinates to integer device coordinates.
An initial modeling-coordinate position (xmc, ymc) is transferred
to a device-coordinate position (xdc, ydc) with the sequence:
(xmc, ymc) → (xwc, ywc) → (xnc, ync) → (xdc, ydc)
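As an illustration of this pipeline, here is a hedged C sketch (the window limits, viewport size and function name are hypothetical, not from the notes) that maps a world-coordinate point to normalized device coordinates and then to integer device coordinates:

typedef struct { double x, y; } Point2D;

/* world coordinates -> normalized device coordinates (0..1) -> device coordinates */
Point2D world_to_device(Point2D wc,
                        double xw_min, double xw_max,    /* world-coordinate window */
                        double yw_min, double yw_max,
                        int dev_width, int dev_height)   /* device resolution       */
{
    Point2D nc, dc;
    nc.x = (wc.x - xw_min) / (xw_max - xw_min);          /* normalize to [0, 1]     */
    nc.y = (wc.y - yw_min) / (yw_max - yw_min);
    dc.x = (int)(nc.x * (dev_width  - 1) + 0.5);         /* scale to integer pixels */
    dc.y = (int)(nc.y * (dev_height - 1) + 0.5);
    return dc;
}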
PHIGS Workstations
Generally, the term workstation refers to a computer system with a combination of
input and output devices that is designed for a single user. Some graphics system,
such as PHIGS and GKS, use the concept of a “workstation” to specify devices or
software that are to be used for input or output in a particular application. A
workstation identifier in these systems can refer to a file; a single device, such as a
raster monitor; or a combination of devices, such as a monitor, keyboard and mouse.
Multiple workstations can be open to provide input or to receive output in a graphics
application.
Color Models
A color model is a method for explaining the properties or behavior of color within
some particular context. No single color model can explain all aspects of color, so we
make use of different models to help describe the different perceived characteristics
of color.
RGB (Red, Green, Blue) color model
• The RGB color model is used in color CRT monitors.
• In this model, red, green and blue light are added together; adding all three at full intensity gives the resultant color white.
• Each color point within the bounds of the RGB color cube is represented as the triple (R, G, B), where the values of R, G and B are assigned in the range from 0 to 1.
• On the color wheel, the R, G and B primaries are placed 120 degrees apart.
• All other colors are generated from these three primary colors.
CMY (Cyan, Magenta, Yellow) color model
• The CMY color model is used in color printing devices.
• In this model, cyan, magenta and yellow are combined; combining all three gives the resultant color black.
CMYK (Cyan, Magenta, Yellow, Black) color model
• For the printing and art industry, the CMY model alone is not enough, so a fourth primary color, K (black), is added to the CMY model.
• CMYK (subtractive color model) is the standard color model used in offset printing
for full-color document. Because such printing uses inks of these four basic colors,
it is often called four-color printing.
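The relationships between the RGB, CMY and CMYK models described above can be expressed directly in code. The following is a small sketch (the function and struct names are illustrative, not from the notes) of the standard RGB-to-CMYK conversion:

typedef struct { double c, m, y, k; } CMYK;

/* convert an RGB triple (each component in [0, 1]) to CMYK */
CMYK rgb_to_cmyk(double r, double g, double b)
{
    CMYK out;
    double c = 1.0 - r, m = 1.0 - g, y = 1.0 - b;   /* CMY is the complement of RGB */
    double k = c;                                   /* K = min(C, M, Y)             */
    if (m < k) k = m;
    if (y < k) k = y;
    out.k = k;
    if (k >= 1.0) {                  /* pure black: avoid division by zero  */
        out.c = out.m = out.y = 0.0;
    } else {                         /* remove the black component from CMY */
        out.c = (c - k) / (1.0 - k);
        out.m = (m - k) / (1.0 - k);
        out.y = (y - k) / (1.0 - k);
    }
    return out;
}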
Color Cube
Hexacone
Output Primitives
• The basic building blocks for pictures are referred to as output primitives.
• Output primitives are the geometric structures that are used to describe the shapes and colors of objects.
• They include character strings, and geometric entities such as points, straight lines,
curved lines, polygons, circles, etc.
• Points and straight line segments are the most basic components of a picture.
• In raster scan systems, a picture is completely specified by the set of intensities for
the pixel positions in the display. The process that converts picture definition into a
set of pixel intensity values is called scan conversion. This is done by display
processor.
Points and Lines
• With a raster-scan system, a point can be plotted by simply turning on the electron beam at that position; for example:
putpixel(20, 20, RED)
• And a random-scan system stores the point plotting instructions in the display list
file.
LDXA 100 Load data value 100 into the X register.
LDYAP 450 Draw point at(100, 450)
Line Drawing Algorithms
Three most widely used line drawing algorithms:
1. Direct use of Line Equation
2. Digital Differential Analyzer Algorithm(DDA)
3. Bresenham’s Line Drawing Algorithm(BLA/BSA)
Direct use of Line Equation
Fig: Line path between two endpoints (x1, y1) and (x2, y2).
Suppose (x1, y1) and (x2, y2) are the two end points of a line segment, as shown in the figure.
Then, the equation of the line is
y = mx + b ………………….(i)
where the slope is
m = (y2 - y1)/(x2 - x1) ………………….(ii)
These equations form the basis for determining intermediate pixel positions. Calculate m from equation (ii) and test for the following three cases:
Case I: If |m| < 1, we sample at unit x intervals, i.e. Δx = 1:
xk+1 = xk ± 1
yk+1 = yk ± m
Case II: If |m| > 1, we sample at unit y intervals, i.e. Δy = 1:
yk+1 = yk ± 1
xk+1 = xk ± 1/m
Case III: If |m| = 1, then Δx = 1 and Δy = 1, so x and y each change by ±1 at every step.
Algorithm (DDA):
Step 1: Input the line endpoints and store the left endpoint in (x1, y1) and the right endpoint in (x2, y2).
Step 2: Calculate dx = x2 – x1 and dy = y2 – y1.
Step 3: if (abs(dx) > abs(dy))
            steplength = abs(dx)
        else
            steplength = abs(dy)
Step 4: Calculate the increments: xIncrement = dx / steplength, yIncrement = dy / steplength.
Step 5: Set x = x1, y = y1 and plot the pixel at (round(x), round(y)).
Step 6: Set x = x + xIncrement, y = y + yIncrement and plot the pixel at (round(x), round(y)).
Step 7: Repeat Step 6 steplength times.
Step 8: End
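A compact C version of the DDA steps above (a sketch assuming the putpixel routine and WHITE color constant of the Turbo C graphics library used elsewhere in these notes):

#include <stdlib.h>    /* abs             */
#include <graphics.h>  /* putpixel, WHITE */

void dda_line(int x1, int y1, int x2, int y2)
{
    int   dx = x2 - x1, dy = y2 - y1;
    int   steps = (abs(dx) > abs(dy)) ? abs(dx) : abs(dy);   /* steplength */
    float xInc = dx / (float)steps;                          /* xIncrement */
    float yInc = dy / (float)steps;                          /* yIncrement */
    float x = x1, y = y1;
    int   k;

    putpixel((int)(x + 0.5), (int)(y + 0.5), WHITE);         /* plot first pixel */
    for (k = 0; k < steps; k++) {
        x += xInc;
        y += yInc;
        putpixel((int)(x + 0.5), (int)(y + 0.5), WHITE);     /* round and plot   */
    }
}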
Pros & Cons of DDA:
• It is a simple and fast method: pixel positions are obtained by incremental additions, avoiding the multiplication in y = mx + b.
• However, it involves continuous rounding of floating-point values, and the accumulated round-off error can cause the calculated pixel positions to drift away from the actual line path.
• Furthermore, the floating-point and rounding operations are time consuming.
Example:
Digitize the line with endpoints (1, 5) and (7, 2) using DDA algorithm.
Solution:
Here,
dx = 7-1=6
dy = 2-5 = -3
So,
steplength = 6 (since abs(dx)>abs(dy)).
Therefore,
xIncrement = dx/steplength = 6/6 = 1,
yIncrement = dy/steplength = -3/6 = -1/2 = -0.5
Based on these values the intermediate pixel calculation is shown in the table below.
Bresenham’s Line Drawing Algorithm (BSA)
Deriving the Bresenham Line Drawing Algorithm (for |m| < 1)
Assume the pixel (xk, yk) has been plotted. At the next sample position xk + 1, the two candidate pixels are (xk + 1, yk) and (xk + 1, yk + 1), and the y coordinate on the mathematical line is:
y = m(xk + 1) + b
The vertical separations of the candidate pixels from the line at xk + 1 are denoted d_lower and d_upper:
d_lower = y - yk = m(xk + 1) + b - yk
d_upper = (yk + 1) - y = yk + 1 - m(xk + 1) - b
so that
d_lower - d_upper = 2m(xk + 1) - 2yk + 2b - 1
Substituting m = ∆y/∆x, where ∆x and ∆y are the differences between the endpoints, and multiplying through by ∆x:
∆x(d_lower - d_upper) = ∆x(2(∆y/∆x)(xk + 1) - 2yk + 2b - 1)
= 2∆y·xk - 2∆x·yk + 2∆y + ∆x(2b - 1)
= 2∆y·xk - 2∆x·yk + c
where c = 2∆y + ∆x(2b - 1) is a constant. So, at the kth step along the line, the decision parameter pk is:
pk = ∆x(d_lower - d_upper) = 2∆y·xk - 2∆x·yk + c
The sign of the decision parameter pk and that of (d_lower – d_upper) is the same.
If pk is negative, we select the lower pixel; otherwise, we select the upper pixel.
Because coordinate changes along the x-axis occur in unit steps, we can handle
everything with integer calculations.
The decision parameter at step k+1 is:
pk+1 = 2∆y·xk+1 - 2∆x·yk+1 + c
Subtracting pk and noting that xk+1 = xk + 1 gives the recurrence
pk+1 = pk + 2∆y - 2∆x(yk+1 - yk)
where (yk+1 - yk) is either 0 or 1, depending on the sign of pk. Hence:
If pk < 0 (i.e. d_lower < d_upper), the next point to plot is (xk + 1, yk) and
pk+1 = pk + 2∆y
If pk ≥ 0 (i.e. d_upper ≤ d_lower), the next point to plot is (xk + 1, yk + 1) and
pk+1 = pk + 2∆y - 2∆x
The initial decision parameter p0 is evaluated at (x0, y0) as:
p0 = 2∆y - ∆x
Algorithm
Step 1:
Calculate ΔX and ΔY from the given input.
These parameters are calculated as-
• ΔX = Xn – X0
• ΔY =Yn – Y0
Step 2:
Calculate the decision parameter Pk.
It is calculated as-
Pk = 2ΔY – ΔX
Step 3:
Suppose the current point is (Xk, Yk) and the next point is (Xk+1, Yk+1).
Find the next point depending on the value of decision parameter Pk.
Following are the two cases:
Case 1: if Pk<0
Pk+1= Pk+2ΔY
Xk+1= Xk+1
Yk+1= Yk
Case 2: if Pk>=0
Pk+1= Pk+2ΔY-2ΔX
Xk+1= Xk+1
Yk+1= Yk+1
Step 4:
Keep repeating Step 3 until the end point is reached or the number of iterations equals (ΔX – 1).
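The steps above translate into the following C sketch for the |m| < 1 case handled in these notes (other octants can be obtained by swapping or negating coordinates); putpixel and WHITE are the Turbo C graphics routines used elsewhere in the slides:

void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0;
    int dy = y1 - y0;              /* assumes 0 <= dy <= dx (slope between 0 and 1) */
    int p  = 2 * dy - dx;          /* initial decision parameter P0                 */
    int x  = x0, y = y0;

    putpixel(x, y, WHITE);         /* plot the starting endpoint */
    while (x < x1) {
        x++;
        if (p < 0) {
            p += 2 * dy;           /* Case 1: keep the same row       */
        } else {
            y++;
            p += 2 * dy - 2 * dx;  /* Case 2: step up to the next row */
        }
        putpixel(x, y, WHITE);
    }
}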
Example:
Calculate the points between the starting coordinates (9, 18) and ending coordinates
(14, 22).
Solution:
Given:
Starting coordinates = (X0, Y0) = (9, 18)
Ending coordinates = (Xn, Yn) = (14, 22)
ΔX = Xn – X0 = 14 – 9 = 5 and ΔY = Yn – Y0 = 22 – 18 = 4
Initial decision parameter P0 = 2ΔY – ΔX = 2 × 4 – 5 = 3
As Pk >= 0,
Pk+1 = Pk + 2ΔY – 2ΔX = 3 + (2 x 4) – (2 x 5) = 1
Xk+1 = Xk + 1 = 9 + 1 = 10
Yk+1 = Yk + 1 = 18 + 1 = 19
Similarly, above step is executed until the end point is reached or number of iterations
equals to 4 times.
(Number of iterations = ΔX – 1
=5–1
= 4)
Circle
• A circle is defined as a set of points that are all at a given distance ‘r’ from the
center position (xc, yc).
• The general circle equation can be written as;
(x – xc)² + (y – yc)² = r².
Properties of circle
Symmetry in quadrants
The shape of the circle is similar in each quadrant. Thus by calculating points in one
quadrant, we can calculate points in other three quadrants.
Symmetry in octants
• The shape of the circle is similar in each octant. Thus by calculating points in one
octant we can calculate points in other seven octants. If the point (x, y) is on the
circle, then we can trivially compute seven other points on the circle.
• Therefore, we need to compute only one 45° segment to determine the circle, as
shown in figure below.
• By taking advantage of circle symmetry in octants, we can generate all pixel
positions around a circle by calculating only the points within the sector from x = 0
to y = x.
Methods to draw circle
• Direct method
• Trigonometric method
Trigonometric method
• Use the circle equation in polar (parametric) form:
x = xc + r cosθ
y = yc + r sinθ
• To draw a circle using this polar-coordinate approach, just increment the angle θ from 0 to 2π and compute the (x, y) position corresponding to each incremented angle.
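A short sketch of this trigonometric method in C (the angular step dtheta = 1/r is a common choice so that successive points are about one pixel apart; it is an assumption, not stated in the notes):

#include <math.h>
#include <graphics.h>   /* putpixel, WHITE */

void circle_polar(int xc, int yc, int r)
{
    double theta;
    double dtheta = 1.0 / r;                /* angular step; assumed value */
    double two_pi = 2.0 * 3.14159265358979;

    for (theta = 0.0; theta < two_pi; theta += dtheta) {
        int x = xc + (int)(r * cos(theta) + 0.5);   /* x = xc + r cos(theta) */
        int y = yc + (int)(r * sin(theta) + 0.5);   /* y = yc + r sin(theta) */
        putpixel(x, y, WHITE);
    }
}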
Mid point circle Algorithm:
• In mid point circle algorithm, we sample at unit intervals and determine the closest
pixel position to the specified circle path at each step.
• For a given radius r, and screen center position (xc, yc) , we can set up our algorithm
to calculate pixel positions around a circle path centered at (0,0) and then each
calculated pixel position (x, y) is moved to its proper position by adding xc to x and
yc to y.
i.e. x = x + xc,
y = y + yc
• The relative position of any point (x, y) with respect to the circle can be summarized by checking the sign of the circle function fcircle(x, y) = x² + y² – r²: the value is negative if the point is inside the circle boundary, zero if it is on the boundary, and positive if it is outside.
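The following is a C sketch of the midpoint circle algorithm summarized above, using the initial decision parameter p0 = 1 − r from the worked example below, the standard update rules, and the eight-way symmetry property; putpixel is the Turbo C routine used elsewhere in these notes.

void plot_circle_points(int xc, int yc, int x, int y, int color)
{
    /* eight-way symmetry: plot the point in all eight octants */
    putpixel(xc + x, yc + y, color);  putpixel(xc - x, yc + y, color);
    putpixel(xc + x, yc - y, color);  putpixel(xc - x, yc - y, color);
    putpixel(xc + y, yc + x, color);  putpixel(xc - y, yc + x, color);
    putpixel(xc + y, yc - x, color);  putpixel(xc - y, yc - x, color);
}

void midpoint_circle(int xc, int yc, int r, int color)
{
    int x = 0, y = r;
    int p = 1 - r;                      /* initial decision parameter p0 */

    plot_circle_points(xc, yc, x, y, color);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;             /* midpoint inside the circle: keep y */
        else {
            y--;
            p += 2 * x + 1 - 2 * y;     /* midpoint outside: also decrement y */
        }
        plot_circle_points(xc, yc, x, y, color);
    }
}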
Example:
Digitize a circle (x-2)² + (y-3)² = 25
Solution:
Center C(xc, yc) = (2, 3)
Radius (r) = 5
1st pixel (0, 5)
p0 = 1-r
= 1-5
= -4
Successive decision parameter, and pixel calculations are shown in the table below.
Now, we need to determine symmetry points, and actual points when circle center is
at (2, 3).
Ellipse
The basic algorithm for drawing an ellipse is the same as for the circle: compute the x and y positions on the boundary of the ellipse, here from the equation of the ellipse directly.
The equation of an ellipse centered at the origin (0, 0), with semi-axes a (along x) and b (along y), is:
x²/a² + y²/b² = 1
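Since the derivation slides are not reproduced here, the following hedged C sketch shows the standard two-region midpoint ellipse algorithm that the worked example below follows (region 1 starts from (0, b) with p1_0 = b² − a²b + a²/4); putpixel is the Turbo C routine used elsewhere in these notes.

void plot_ellipse_points(int xc, int yc, int x, int y, int color)
{
    /* four-way symmetry of the ellipse */
    putpixel(xc + x, yc + y, color);  putpixel(xc - x, yc + y, color);
    putpixel(xc + x, yc - y, color);  putpixel(xc - x, yc - y, color);
}

void midpoint_ellipse(int xc, int yc, int a, int b, int color)
{
    long   a2 = (long)a * a, b2 = (long)b * b;
    long   x = 0, y = b;
    long   px = 0, py = 2 * a2 * y;          /* px = 2*b^2*x,  py = 2*a^2*y */
    double p  = b2 - a2 * b + 0.25 * a2;     /* region-1 parameter p1_0     */

    plot_ellipse_points(xc, yc, x, y, color);
    while (px < py) {                        /* region 1: |slope| < 1       */
        x++;  px += 2 * b2;
        if (p < 0)           p += b2 + px;
        else { y--; py -= 2 * a2; p += b2 + px - py; }
        plot_ellipse_points(xc, yc, x, y, color);
    }
    p = b2 * (x + 0.5) * (x + 0.5) + a2 * (y - 1.0) * (y - 1.0) - a2 * b2;   /* p2_0 */
    while (y > 0) {                          /* region 2: |slope| >= 1      */
        y--;  py -= 2 * a2;
        if (p > 0)           p += a2 - py;
        else { x++; px += 2 * b2; p += a2 - py + px; }
        plot_ellipse_points(xc, yc, x, y, color);
    }
}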
Example:
Digitize an ellipse (x-2)²/64 + (y+5)²/36 = 1 using the midpoint algorithm.
Solution:
Center(h, k) = (2,-5), a = 8, b = 6.
Let us assume that ellipse is centered on the coordinate origin(0, 0).
For region R1,
First pixel is (0, 6).
Now initial decision parameter,
p10 = b² – a²b + a²/4
= 36 – 64 × 6 + 64/4
= –332
Successive decision parameter values and pixel positions can be calculated as follows:
The remaining positions along the ellipse path in the first quadrant are then calculated
as follows:
Thus we have obtained points along the ellipse path in the first quadrant. By using the
property of symmetry we can determine the symmetry points in other 3 quadrants.
Finally, for each (x, y) we need to perform;
x=x+2
y=y–5
Filled Area Primitives
• Filling is the process of “coloring in” a fixed area or region.
• A polyline is a chain of connected line segments. It is specified by giving the vertices
(nodes) P0, P1, P2... and so on.
• The first vertex is called the initial or starting point and the last vertex is called the
final or terminal point, as shown in the figure(a).
• When the starting point and the terminal point of a polyline are the same, i.e. when the polyline is closed, it is called a polygon. This is illustrated in figure (b).
• A triangle is the simplest form of polygon, having three sides and three vertices.
Types of Polygons
• The classification of polygons is based on where the line segment joining any two
points within the polygon is going to lie. There are two types of polygons:
1. Convex
• If the line segment joining any two interior points of the polygon lies completely
inside the polygon then it is called convex polygon.
2. Concave
• The polygons that are not convex are called concave polygons i.e., if the line
segment joining any two interior points of the polygon is not completely inside the
polygon then it is called concave polygon.
Representation of Polygons
• There are various methods to represent polygons in a graphics system.
• Some graphic devices support polygon drawing primitive approach.
• They can directly draw the polygon shapes.
• In C (Turbo C graphics library), the drawpoly(int n, int *points) function is used to draw polygons, i.e. triangles, rectangles, pentagons, hexagons, etc. Here, n is the number of points; for a closed polygon the first vertex is repeated at the end of the array.
• Example:
int points[] = { 320, 150, 420, 300, 250, 300, 320, 150 };
drawpoly(4, points);
Polygon Filling
• Polygon filling is the process of highlighting all the pixels which lie inside the
polygon with any color other than background color.
• In other words, it is the process of coloring the area of a polygon. An area or a
region is defined as a collection of pixels.
• Polygons are standard output primitives for area filling in general graphics packages, since polygons are easier to process and fill because they have linear boundaries.
• There are two basic approaches used to fill the polygon:
➢ One way to fill a polygon is to start from a given "seed" point known to be inside the polygon and highlight outward from this point, i.e. color neighboring pixels until we encounter the boundary pixels. This approach is called seed fill because the color flows from the seed pixel until reaching the polygon boundary. It is suitable for complex objects.
➢ Another approach to fill the polygon is to apply the inside test i.e. to check
whether the pixel is inside the polygon or outside the polygon and then
highlight pixels which lie inside the polygon. This approach is known as scan-
line algorithm. It avoids the need for a seed pixel but it requires some
additional computation. It is suitable for simple objects, polygons, circles etc.
Seed Fill Algorithm
Principle:
• "Start at a sample pixel called as a seed pixel from the area, fill the specified
color value until the polygon boundary".
• The seed fill algorithm is further classified as boundary fill algorithm and flood fill
algorithm.
Boundary Fill Algorithm
• This is a recursive algorithm that begins with a starting interior pixel called a seed and continues painting towards the boundary. The algorithm checks whether the current pixel is a boundary pixel or has already been filled. If not, it fills the pixel and makes a recursive call to itself using each neighboring pixel as a new seed.
• The algorithm picks a point inside an object and starts to fill until it hits the boundary of the object.
• The color of the boundary and the fill color must be different for this algorithm to work. In this method, the edges of the polygon are drawn first in a single color.
• Area filling starts from a random pixel inside the region, called the seed, and continues painting towards the boundary. This process continues outwards, pixel by pixel, until the boundary color is reached.
• A boundary-fill procedure accepts as input the coordinates of an interior point (x, y), a fill color, and a boundary color.
• Starting from (x, y), the procedure tests neighboring positions to determine whether they are of the boundary color or have already been filled. If not, they are painted with the fill color, and their neighbors are tested.
• This process continues until all pixels up to the boundary have been tested.
• There are two methods for proceeding to the neighboring pixels from the seed
pixel:
➢ 4-connected: four adjacent pixels are tested, i.e. the left, right, up and down neighbors.
➢ 8-connected: eight adjacent pixels are tested, i.e. the left, right, up and down neighbors together with the four diagonal neighbors.
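A recursive C sketch of the 4-connected boundary fill described above, using the getpixel/putpixel routines of the Turbo C graphics library used elsewhere in these notes (for large regions the recursion depth can exhaust the stack, which is one of the problems discussed later):

void boundary_fill4(int x, int y, int fill_color, int boundary_color)
{
    int current = getpixel(x, y);
    if (current != boundary_color && current != fill_color) {
        putpixel(x, y, fill_color);                            /* paint this pixel     */
        boundary_fill4(x + 1, y, fill_color, boundary_color);  /* then its 4 neighbors */
        boundary_fill4(x - 1, y, fill_color, boundary_color);
        boundary_fill4(x, y + 1, fill_color, boundary_color);
        boundary_fill4(x, y - 1, fill_color, boundary_color);
    }
}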
Fig: Filling produced using the 4-connected algorithm
Flood Fill Algorithm
• Flood fill algorithm is used to fill the area bounded by different color boundaries
i.e., it is required to fill in an area that is not defined within a single color boundary.
• The basic concept of the flood-fill algorithm is to fill areas by replacing a specified interior color with the fill color, instead of searching for a boundary color. In other words, it replaces the interior color of the object with the fill color. When no more pixels of the original interior color exist, the algorithm is completed.
• Here, we start with some seed and examine the neighboring pixels.
• We start from a specified interior pixel (x, y) and reassign all pixel values that are
currently set to a given interior color with desired fill-color.
• If the area we want to paint has more than one interior color, we can first reassign
pixel values so that all interior points have the same color.
• Using either a 4-connected or 8-connected approach, we then step through pixel positions until all interior points have been repainted.
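A matching 4-connected flood-fill sketch (same assumptions as the boundary-fill code above): instead of testing for a boundary color, it replaces every pixel that still has the old interior color.

void flood_fill4(int x, int y, int fill_color, int old_color)
{
    if (getpixel(x, y) == old_color) {
        putpixel(x, y, fill_color);                    /* replace the interior color */
        flood_fill4(x + 1, y, fill_color, old_color);  /* visit the 4 neighbors      */
        flood_fill4(x - 1, y, fill_color, old_color);
        flood_fill4(x, y + 1, fill_color, old_color);
        flood_fill4(x, y - 1, fill_color, old_color);
    }
}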
Problems with Seed-fill algorithms
• Recursive seed-fill algorithms have two difficulties:
• The first difficulty is that if some inside pixels are already displayed in fill color then
recursive branch terminates, leaving further internal pixels unfilled.
• Another difficulty with recursive seed fill methods is that it cannot be used for large
polygons. This is because recursive seed fill procedures require stacking of
neighboring points and in case of large polygons stack space may be insufficient for
stacking of neighboring points.
Inside and Outside Test of Polygon
• Area filling algorithms and other graphics processes often need to identify interior
and exterior region of objects.
• To determine whether or not a point is inside a polygon, the even-odd (odd-parity) method is used: construct a line from the point in question to a point known to be outside the polygon and count how many times it intersects the polygon edges. If the count is odd, the point is inside the polygon; if it is even, the point is outside.
• If the intersection point is vertex of the polygon then we have to look at the other
endpoints of the two segments which meet at this vertex.
• If these points lie on the same side of the constructed line, then the point X counts
as an even number of intersections.
• If they lie on opposite sides of the constructed line, then the point is counted as a
single intersection.
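The even-odd rule can be coded by casting a horizontal ray from the test point and counting edge crossings; the following C sketch (function name and parameters are illustrative, not from the notes) uses a half-open comparison so that shared vertices are counted consistently:

/* returns 1 if (px, py) is inside the polygon with n vertices (x[i], y[i]), else 0 */
int point_in_polygon(double px, double py, const double x[], const double y[], int n)
{
    int i, j, inside = 0;
    for (i = 0, j = n - 1; i < n; j = i++) {
        /* does the edge (j -> i) straddle the horizontal line through py? */
        if ((y[i] > py) != (y[j] > py)) {
            /* x coordinate where that edge crosses the horizontal line    */
            double x_cross = x[j] + (py - y[j]) * (x[i] - x[j]) / (y[i] - y[j]);
            if (px < x_cross)
                inside = !inside;      /* crossing to the right: toggle parity */
        }
    }
    return inside;
}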
Scan-Line Algorithm
• A scan-line algorithm fills horizontal pixels across scan lines, instead of proceeding
to 4-connected or 8-connected neighboring points.
• This algorithm works by intersecting scanline with polygon edges and fills the
polygon between pairs of intersections.
Approach:
• For each scan-line crossing a polygon, this algorithm computes the intersection
points of the scan line with the polygon edges.
• These intersection points are then sorted from left to right (i.e., increasing value of
x coordinate), and the corresponding positions between each intersection pair are
filled by the specified color (shown in figure below).
• When the scan line intersects an edge of the polygon, start coloring each pixel (because now we are inside the polygon); when it intersects another edge, stop coloring.
• This procedure is repeated until the complete polygon is filled.
• The figure is showing intersection points p0, p1, p2, p3 and intersection pair (p0 −> p1,
p2−> p3). The scan line algorithm first finds the largest and smallest y values of the
polygon. It then starts with the largest y value and works its way down, scanning from
left to right.
• The important task in the scan line algorithm is to find the intersection points of the
scan line with the polygon boundary.
• To find intersection points;
➢ Find the slope m for each polygon boundary (i.e., each edge) using vertex
coordinates. If A(x1, y1) and E(x2, y2) then slope for polygon boundary AE can be
calculated as follows;
▪ m = (y2-y1)/(x2-x1)
➢ We can find the intersection point on the lower scan line if the intersection point
for current scan line is known. Let (xk, yk) be the intersection point for the current
scan line yk.
▪ Scanning proceeds from top to bottom, so the y coordinate between two successive scan lines changes by 1,
o i.e., yk+1 = yk - 1.
▪ And the x coordinate of the intersection on the next scan line can be calculated as
o xk+1 = xk - 1/m
Algorithm
Step1: Read the number of vertices of polygon, n.
Step2: Read the x and y coordinates of all vertices.
Step3: Find the minimum and maximum value of y and store as ymin and ymax
respectively.
Step4: For y = ymax to ymin
• For each scan line
➢ Find the intersection points p0, p1, p2, …, pn.
➢ Find the coordinates of intersection points p0, p1, p2, …,pn.
➢ Sort the intersection point in the increasing order of x-coordinate i.e.[p0, p1,
p2,…,pn].
➢ Fill all those pair of coordinates that are inside polygon, using even-odd
method, and ignore the alternate pairs.
Step5: END.
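A simplified C sketch of these steps (integer arithmetic, a fixed-size intersection array, and the half-open vertex rule are simplifying assumptions; putpixel is the Turbo C routine used elsewhere in these notes):

#include <stdlib.h>   /* qsort */

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* fill the polygon whose n vertices are (vx[i], vy[i]), listed in order */
void scanline_fill(const int vx[], const int vy[], int n, int color)
{
    int i, y, ymin = vy[0], ymax = vy[0];
    int xint[64];                           /* intersections on one scan line (assumed bound) */

    for (i = 1; i < n; i++) {               /* Step 3: vertical extent of the polygon */
        if (vy[i] < ymin) ymin = vy[i];
        if (vy[i] > ymax) ymax = vy[i];
    }
    for (y = ymax; y >= ymin; y--) {        /* Step 4: process scan lines top to bottom */
        int count = 0;
        for (i = 0; i < n; i++) {
            int j = (i + 1) % n;            /* edge from vertex i to vertex j */
            int y1 = vy[i], y2 = vy[j];
            if (y1 == y2) continue;         /* skip horizontal edges          */
            /* half-open test so a shared vertex is counted exactly once      */
            if ((y >= y1 && y < y2) || (y >= y2 && y < y1))
                xint[count++] = vx[i] + (y - y1) * (vx[j] - vx[i]) / (y2 - y1);
        }
        qsort(xint, count, sizeof(int), cmp_int);     /* sort left to right     */
        for (i = 0; i + 1 < count; i += 2) {          /* fill between each pair */
            int x;
            for (x = xint[i]; x <= xint[i + 1]; x++)
                putpixel(x, y, color);
        }
    }
}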
Filling Rectangles
• Scan line method can be used to fill a rectangle.
for (y = ymin; y <= ymax; y++)        // each scan line of the rectangle
    for (x = xmin; x <= xmax; x++)    // each pixel on the scan line
        putpixel(x, y, fill_color);
Thank You