CHAPTER 1: INTRODUCTION TO COMPUTER GRAPHICS
Computer graphics involves the use of technology to create, transform, and present information in visual form.
The end product of the computer graphics is a picture. In computer graphics, 2 or 3D pictures can be
created. Algorithms have been developed for improving the speed of picture generation.
2. Flight Simulator: It helps in giving training to the pilots of airplanes. These pilots spend much of their
training not in a real aircraft but on the ground at the controls of a Flight Simulator.
3. Use in Biology: Molecular biologist can display a picture of molecules and gain insight into their
structure with the help of computer graphics.
4. Presentation Graphics: Example of presentation Graphics are bar charts, line graphs, pie charts and
other displays showing relationships between multiple parameters.
7. Entertainment: Computer Graphics are now commonly used in making motion pictures, music videos
and television shows.
9. Printing Technology: Computer Graphics is used for printing technology and textile design.
In modern computers, graphics processing is done by a specialized component called a GPU, or Graphics
Processing Unit. A GPU includes processors for doing graphics computations with its own dedicated
memory for storing things like images and lists of coordinates.
GPU processors have very fast access to data that is stored in GPU memory, faster than their access to
data stored in the computer’s main memory. To draw a line or perform some other graphical
operations, the CPU simply has to send commands, along with any necessary data, to the GPU, which is
responsible for actually carrying out those commands.
The CPU offloads most of the graphical work to the GPU, which is optimized to carry out that work very
fast. The set of commands that the GPU understands makes up the API of the GPU. OpenGL is an API, and
most GPUs support OpenGL in the sense that they can understand OpenGL commands, or at least that
OpenGL commands can efficiently be translated into commands that the GPU can understand.
OpenGL has a series of APIs that have been subject to repeated extension and revision. It was designed
as a “client/server” system. The server, which is responsible for controlling the computer’s display and
performing graphics computations, carries out commands issued by the client. The server executes
OpenGL commands. The client is the CPU in the same computer, along with the application program
that it is running.
OpenGL commands come from the program that is running on the CPU. However, it is actually possible
to run OpenGL programs remotely over a network. That is, you can execute an application program on a
remote computer (the OpenGL client), while the graphics computations and display are done on the
computer that you are actually using (the OpenGL server).
The client & the server are separate, and there’s a communication channel between those components.
OpenGL commands and the data that they need are communicated from the client (the CPU) to the
server (the GPU) over that channel. One of the driving factors in the evolution of OpenGL has been the
desire to limit the amount of communication that is needed between the CPU and the GPU.
One approach is to store information in the GPU’s memory. If some data is going to be used several
times, it can be transmitted to the GPU once and stored in memory there, where it will be immediately
accessible to the GPU. Another approach is to try to decrease the number of OpenGL commands that
must be transmitted to the GPU to draw a given image.
With OpenGL 2.0, it became possible to write programs to be executed as part of the graphical
computation in the GPU. A programmer who wants to use a new graphics technique can write a
program to implement the feature and just hand it to the GPU. The OpenGL API doesn’t have to be
changed. The only thing that the API has to support is the ability to send programs to the GPU for
execution.
BASIC GRAPHIC DESIGN PRINCIPLES
Contrast
Contrast refers to how different elements are in a design, particularly adjacent elements. These
differences make various elements stand out. Contrast is also a very important aspect of
creating accessible designs. Insufficient contrast can make text content in particular very difficult to read.
Proportion
Simply put, it’s the size of elements in relation to one another. Proportion signals what’s important in a
design and what isn’t. Larger elements are more important, smaller elements less.
Hierarchy
Hierarchy refers to the importance of elements within a design. The most important elements or
content should appear to be the most important. It is most easily illustrated through the use of titles
and headings in a design. The title of a page should be given the most importance.
Repetition
Repetition is a great way to reinforce an idea. It’s also a great way to unify a design that brings together
a lot of different elements. Repetition can be done in a number of ways: via repeating the same colors,
shapes, or other elements of a design. An example is the use of repetition in the format of the headings.
Rhythm
The spaces between repeating elements can cause a sense of rhythm to form, similar to the way the
space between notes in a musical composition create a rhythm. Rhythms can be used to create
excitement (particularly flowing and progressive rhythms) or create reassurance and consistency.
Pattern
Patterns are a repetition of multiple design elements working together. In design, however, patterns can
also refer to set standards for how certain elements are designed. For example, top navigation is a
design pattern that the majority of internet users have interacted with.
Movement
Movement refers to the way the eye travels over a design. The most important element should lead to
the next most important and so on. This is done through positioning (the eye naturally falls on certain
areas of a design first), emphasis, and other design elements already mentioned.
Variety
Variety in design is used to create visual interest. Without variety, a design can very quickly become
monotonous, causing the user to lose interest. Variety can be created in many ways: through
color, typography, images, shapes, and virtually any other design element.
Unity
Unity refers to how well the elements of a design work together. Visual elements should have clear
relationships with each other in a design. Unity also helps ensure concepts are being communicated in a
clear, cohesive fashion.
CHAPTER 2: GRAPHICS DISPLAYS (DISPLAY DEVICES)
The display device is an output device used to represent the information in the form of images (visual
form). Display systems are mostly called a video monitor or Video display unit (VDU).
Display devices are designed to model, view, and display information. The purpose of display
technology is to simplify information sharing.
1. Cathode-Ray Tube(CRT)
2. Liquid crystal display(LCD)
3. Light Emitting Diode(LED)
4. Laser Devices
CRT stands for Cathode Ray Tube. It is a technology used in traditional computer monitors and
televisions. A cathode ray tube is a particular type of vacuum tube that displays images when an
electron beam strikes a phosphorescent surface.
Components of CRT
Electron Gun: The electron gun is made up of several elements, mainly a heating filament
(heater) and a cathode. It is a source of electrons, which are focused into a narrow beam
directed at the face of the CRT.
Focusing & Accelerating Anodes: These anodes are used to produce a narrow and sharply
focused beam of electrons.
Horizontal & Vertical Deflection Plates: These plates are used to guide the path of the electron
beam. The plates produce an electric field that bends the electron beam as it travels through the
region between them.
Phosphorus-coated Screen: The phosphorus coated screen is used to produce bright spots
when the high-velocity electron beam hits it.
There are two ways to represent an object on the screen:
1. Raster Scan: It is a scanning technique in which the electron beam moves along the screen, from top
to bottom, covering one line at a time. In a raster scan, the picture is displayed as a rectangular grid of
intensity-controlled points called a raster. The picture definition is stored in a memory area called the
refresh buffer, or frame buffer; the frame buffer is also known as a raster or bitmap. Raster scan
provides a refresh rate of 60 to 80 frames per second.
Advantages:
1. Real image
2. Many colors can be produced
3. Dark scenes can be pictured
Disadvantages:
1. Less resolution
2. Display picture line by line
3. More costly
2. Random Scan (Vector Scan): Also known as a stroke-writing or calligraphic display. The electron
beam points only to the area in which the picture is to be drawn. It uses the electron beam like a pencil
to make a line image on the screen. The image is constructed from a sequence of straight-line segments.
Each line segment is drawn by sweeping the beam from one point on the screen to another, where each
point is defined by its x and y coordinates. After completing the drawing of the picture, the system
cycles back to the first line and redraws all the lines of the picture 30 to 60 times per second.
Fig: A Random Scan display draws the lines of an object in a specific order
Advantages:
1. High Resolution
2. Draws smooth lines
Comparison with Raster Scan:
1. In a random scan display, the refresh rate depends on the resolution (the number of lines to be
drawn); in a raster scan display, the refresh rate does not depend on the picture.
2. Beam-penetration technology for color comes under random scan; shadow-mask technology comes
under raster scan.
LCD (Liquid Crystal Display) is defined as a flat panel display that uses the properties of liquid crystals
to display a picture. It is a flat display that uses crystals and polarizers to produce the picture or video.
A polarizer is an optical filter that allows only light vibrating in a particular orientation to pass through
it; light in other orientations is blocked. Liquid crystal displays produce a picture by passing polarized
light, from the surroundings or from an internal light source, through a liquid-crystal material that
transmits the light.
An LCD places the liquid-crystal material between two glass plates. One glass plate carries rows of
conductors arranged in the vertical direction; the other carries rows of conductors arranged in the
horizontal direction, so the two sets of conductors are at right angles to each other. A pixel position is
determined by the intersection of a vertical and a horizontal conductor.
Liquid crystals do not emit light; rather, they need a backlight to produce images. A basic LCD shows
images as a grid of pixels, while more advanced panels display clearer and relatively larger images.
LCDs have replaced the heavy CRTs (cathode ray tubes) used in earlier models of television, and are
now available in all sizes, ranging from smart watches to large screens.
Characteristics of LCDs.
1. An LCD consists of two primary parts, i.e. the electrodes and the polarizing filters. These are
placed perpendicular to each other so that the light passes accordingly, producing the
picture or video.
2. LCD consists of molecules that are placed between the electrodes. This is how an image is
generated in pixels.
3. LCD screens are energy efficient. They reduce the power consumption as they do not use CRTs
anymore.
Benefits of LCD.
It has a much better display and is relatively thinner as compared to earlier electronic gadgets.
The color and brightness given by the LCDs are phenomenal. The brightness and contrast help in
representing the perfect image.
LCD reduces power consumption and increases efficiency.
Disadvantages:
1. Fixed aspect ratio & Resolution
2. Lower Contrast
3. More Expensive
LIGHT EMITTING DIODE (LED):
LED is a semiconductor device that emits light when current passes through it.
Because an individual LED is small, a display unit of any size can be made by arranging a large number
of LEDs. LED displays consume more power compared to LCDs. LEDs are used in TVs, smartphones,
motor vehicles, traffic lights, etc. LEDs are mechanically robust, so they are capable of withstanding
mechanical pressure, and they also work at high temperatures.
LASER DEVICES:
LASER stands for Light Amplification by Stimulated Emission of Radiation. A laser is an electronic
device that produces light (electromagnetic radiation) through optical amplification.
Properties
Laser radiation has very special properties that make it useful in different types of applications. It is
used in a wide variety of electronic devices such as CD-ROM drives, barcode readers, etc. Laser light is
very narrow and coherent.
GRAPHICS HARDWARE
i). Display Processor:
A display processor is an interpreter, or piece of hardware, that converts display-processor code into
pictures. It is one of the main parts of the display system, and it consists of the following components:
Display Controller:
1. It handles interrupts
2. It maintains timings
3. It is used for the interpretation of instructions
Display Generator:
Display Console: It contains CRT, Light Pen, and Keyboard and deflection system.
The raster scan system is a combination of some processing units. It consists of the control processing
unit (CPU) and a particular processor called a display controller. Display Controller controls the
operation of the display device. It is also called a video controller.
Working: The video controller in the output circuitry generates the horizontal and vertical drive signals
so that the monitor can sweep its beam across the screen during raster scans.
As the figure shows, two registers (an X register and a Y register) are used to store the coordinates of
the screen pixels. Assume that the y values of adjacent scan lines increase by 1 in the upward direction,
from 0 at the bottom of the screen to ymax at the top, and that along each scan line the pixel positions,
or x values, are incremented by 1 from 0 at the leftmost position to xmax at the rightmost position. The
origin is at the lower left corner of the screen, as in a standard Cartesian coordinate system.
At the start of a Refresh Cycle:
The X register is set to 0 and the Y register is set to ymax. This (x, y) address is translated into a memory
address in the frame buffer, where the color value for this pixel position is stored. The controller
receives this color value (a binary number) from the frame buffer, breaks it up into three parts, and
sends each part to a separate Digital-to-Analog Converter (DAC).
These voltages, in turn, control the intensities of the three electron beams that are focused at the (x, y)
screen position by the horizontal and vertical drive signals. This process is repeated for each pixel along
the top scan line, each time incrementing the X register by 1.
As pixels on the first scan line are generated, the X register is incremented up to xmax. The X register is
then reset to 0, and the Y register is decremented by 1 to access the next scan line. The pixels along
each scan line are then processed, and the procedure is repeated for each successive scan line until the
pixels on the last scan line (y = 0) are generated.
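The address calculation and scan order described above can be sketched in a few lines of Python. This is a minimal model, not real hardware: the flat frame-buffer layout (top scan line stored first) and the function names are assumptions for illustration.

```python
# Minimal model of the raster refresh cycle described above.
# The frame buffer is a flat list indexed in raster order; the
# top-row-first layout is an assumption for illustration.

def pixel_address(x, y, xmax, ymax):
    """Translate an (x, y) screen position (origin at bottom-left)
    into an index into a frame buffer stored top row first."""
    row = ymax - y          # top scan line (y = ymax) is stored first
    return row * (xmax + 1) + x

def refresh_order(xmax, ymax):
    """Yield pixel positions in the order the controller visits them:
    start at (0, ymax), sweep each scan line left to right, then
    decrement y until the last scan line (y = 0) is done."""
    for y in range(ymax, -1, -1):
        for x in range(xmax + 1):
            yield (x, y)

# Example: a 4x3 screen (xmax = 3, ymax = 2)
order = list(refresh_order(3, 2))
assert order[0] == (0, 2)            # refresh starts at x = 0, y = ymax
assert order[-1] == (3, 0)           # and ends at the bottom-right pixel
assert pixel_address(0, 2, 3, 2) == 0  # (0, ymax) maps to address 0
```

Tracing a few positions through `pixel_address` by hand is a quick way to check that the register scheme visits every frame-buffer location exactly once per refresh cycle.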
For a display system employing a color lookup table, the frame-buffer value is not used directly to
control the CRT beam intensity. Instead, it is used as an index to find the three pixel-color values in the
lookup table. This lookup operation is done for each pixel on every display cycle.
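The indirection through the lookup table can be sketched as follows. The table contents and frame-buffer values here are hypothetical; only the indexing scheme follows the text.

```python
# Frame buffer stores small indices; the lookup table maps each index
# to an (R, G, B) triple that would drive the three DACs.
lookup_table = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]

frame_buffer = [3, 0, 1, 2]  # one index per pixel (hypothetical picture)

def pixel_color(i):
    # The frame-buffer value is not used directly as an intensity;
    # it is an index into the lookup table.
    return lookup_table[frame_buffer[i]]

assert pixel_color(0) == (255, 255, 255)
assert pixel_color(2) == (255, 0, 0)
```

The benefit is that a small frame buffer (e.g. 8 bits per pixel) can select from a much larger palette, at the cost of one table lookup per pixel per display cycle.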
Because the time available to display or refresh a single pixel on the screen is very short, accessing the
frame buffer separately for each pixel intensity value would consume more time than is allowed.
Instead, multiple adjacent pixel values are fetched from the frame buffer in a single access and stored
in a register. After each allowable time gap, one pixel value is shifted out of the register to control the
beam intensity for that pixel. The procedure is repeated with the next block of pixels, and so on, until
the whole group of pixels has been processed.
ii). Character Generators (Cg)
In the world of video production, a character generator (CG) is a software application that produces
static or animated text for use in 2D and 3D videos. A CG can be used to create anything from simple
Lower Thirds text to full-blown 3D animations. A character generator, or CG, is a tool used to create
digital characters. These characters can be used in video games, movies, and other digital media. CGs
are created by artists who design the characters and then use software to bring them to life. The most
common CG is the 3D character generator. This type of CG allows artists to create realistic-looking
characters that can be used in movies and video games.
2D character generators are also common, but they are not as realistic as 3D CGs. 2D CGs are often used
for cartoons and other types of artwork. They are usually less expensive than 3D character generators
and easier to use. No matter what type of character generator you use, the process of creating a digital
character generally follows the same steps: first, the artist designs the character; then, they build the
model using software; finally, they animate the character using motion capture or keyframing
techniques.
Introduction
Computer graphics can depict anything: beautiful scenery, images, terrain, trees, or anything else we
can imagine. However, all computer graphics are built from the most basic components of the field,
called graphics output primitives, or simply primitives. Primitives are simple geometric functions that
are used to generate the various computer graphics required by the user.
The most basic output primitives are the point position (pixel) and the straight line, but different
graphics packages offer further output primitives such as rectangles, conic sections, circles, spline
curves, or even surfaces. Once it is specified what picture is to be displayed, its locations are converted
into integer pixel positions within the frame buffer and various functions are used to generate the
picture on the two dimensional coordinate system of output display.
Point Function: The point function is the most basic output primitive in a graphics package. A point
function specifies a location using x and y coordinates, and the user may also pass other attributes such
as intensity and color. The location is stored as a tuple of two integers, and the color may be defined
using a hex code. The size of a point is equal to the size of a pixel on the display monitor.
A Line Function
A line function is used to generate a straight line between any two end points. Usually a line function is
provided with the locations of two pixel points, called the starting point and the end point, and it is up
to the computer to decide which pixels fall between these two points so that a straight line is generated.
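One common way the computer can decide which pixels fall between the two endpoints is the DDA (digital differential analyzer) algorithm. The sketch below is one possible implementation, not the only line-drawing method a graphics package might use.

```python
def dda_line(x0, y0, x1, y1):
    """Return the pixel positions approximating the straight line
    from (x0, y0) to (x1, y1) using the DDA algorithm: step one
    unit along the longer axis and round the other coordinate."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x0, y0)]            # degenerate line: a single point
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return pixels

assert dda_line(0, 0, 4, 2) == [(0, 0), (1, 0), (2, 1), (3, 2), (4, 2)]
```

Bresenham's algorithm produces the same kind of pixel approximation using only integer arithmetic, which is why it is usually preferred in hardware.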
- A raster graphics image or bitmap is a data structure representing a generally rectangular grid of pixels,
or points of color, viewable via a display medium.
-A bitmap corresponds bit-for-bit with an image displayed on a screen, generally in the same
format as used for storage in the display's video memory, or perhaps as a device-independent bitmap.
Bitmap is technically characterized by the width and height of the image in pixels and by the number of
bits per pixel (a color depth, which determines the number of colors it can represent).
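From the width, height, and color depth, the memory a bitmap needs follows directly. A small worked example (the particular sizes chosen are for illustration only):

```python
def bitmap_bytes(width, height, bits_per_pixel):
    """Memory needed for a bitmap: one color/intensity value per pixel,
    each occupying bits_per_pixel bits."""
    return width * height * bits_per_pixel // 8

# A 1280 x 1024 image at 24 bits per pixel (about 16.7 million colors):
assert bitmap_bytes(1280, 1024, 24) == 3_932_160   # roughly 3.75 MB
# A 1-bit-per-pixel monochrome bitmap of the same size is 24x smaller:
assert bitmap_bytes(1280, 1024, 1) == 163_840
```

The color depth also bounds the palette: n bits per pixel can distinguish at most 2^n colors, which is why 24-bit color is often called "true color."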
- This memory area holds the intensity values for all screen points. The intensity values are read from
the memory area and used to 'paint' each point on the screen one row at a time. This row is called a
scan line, and each point is called a pixel (picture element). The ordering of pixels by rows is known as
raster order, or raster scan order.
-In a raster scan display system the electron beam is swept across the screen one row at a time from
top to bottom and from left to right. As the electron beam moves across each row, the beam intensity
is turned on or off to create a pattern of illuminated spots.
-The spots to be turned on are dependent on the picture to be drawn. The definition of this picture is
stored in a memory area called the refresh buffer or frame buffer.
-Rasterization-The term rasterization can in general be applied to any process by which vector
information can be converted into a raster format. In normal usage, the term refers to the popular
rendering algorithm for displaying three-dimensional shapes on a computer. Rasterization is currently
the most popular technique for producing real-time 3D computer graphics. Compared to other
rendering techniques such as ray tracing, rasterization is extremely fast. However, rasterization is
simply the process of computing the mapping from scene geometry to pixels and does not prescribe a
particular way to compute the color of those pixels.
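A minimal illustration of that idea — computing which pixels a piece of scene geometry maps to, without prescribing their colors — is a bounding-box triangle rasterizer using edge functions. This is only a sketch of the technique, far simpler than a real renderer:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area test: which side of edge (a -> b) the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2):
    """Return the integer pixel positions covered by triangle v0-v1-v2
    (counterclockwise winding). This computes only the geometry-to-pixel
    mapping; shading each pixel is a separate step."""
    xs = [v[0] for v in (v0, v1, v2)]
    ys = [v[1] for v in (v0, v1, v2)]
    covered = []
    for y in range(min(ys), max(ys) + 1):        # bounding box sweep
        for x in range(min(xs), max(xs) + 1):
            w0 = edge(*v1, *v2, x, y)
            w1 = edge(*v2, *v0, x, y)
            w2 = edge(*v0, *v1, x, y)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                covered.append((x, y))
    return covered

pixels = rasterize_triangle((0, 0), (4, 0), (0, 4))
assert (1, 1) in pixels          # an interior pixel is covered
assert (4, 4) not in pixels      # a pixel beyond the hypotenuse is not
```

Everything after this mapping — texturing, lighting, blending — is the "color of those pixels" part the text mentions, and is deliberately left out here.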
-Interlacing- It is a method of encoding a bitmap image such that a person who has partially received it
sees a degraded copy of the entire image. When communicating over a slow communications link, this
is often preferable to seeing a perfectly clear copy of one part of the image, as it helps the viewer
decide more quickly whether to abort or continue the transmission.
-Interlacing is supported by the following formats:
i. GIF graphics interchange format
ii. PNG Portable Network Graphics
iii. JPEG Joint Photographic Experts Group; the group recognized a need to make large photographic
files smaller.
iv. PGF (Progressive Graphics File) is a wavelet-based bitmapped image format that employs lossless
and lossy data compression.
-Interlacing is also known as "progressive" encoding, because the image becomes progressively clearer
as it is received.
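The GIF format, for example, transmits rows in four interlace passes, so that early passes give a coarse full-image preview that later passes progressively refine. The pass offsets and steps below follow GIF's published scheme:

```python
def gif_interlace_order(height):
    """Row transmission order for GIF's four interlace passes:
    every 8th row starting at 0, every 8th starting at 4,
    every 4th starting at 2, then every 2nd starting at 1."""
    order = []
    for start, step in ((0, 8), (4, 8), (2, 4), (1, 2)):
        order.extend(range(start, height, step))
    return order

order = gif_interlace_order(8)
assert order == [0, 4, 2, 6, 1, 3, 5, 7]
assert sorted(order) == list(range(8))   # every row is sent exactly once
```

A viewer can duplicate each received row downward to fill the gaps, which is why a partially transmitted interlaced GIF looks like a blocky but complete image rather than a sharp top strip.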
Components of a CRT:
i. Control grid – determines the rate at which electrons pass through.
ii. Electron beam – electrons travel without any hindrance from air or dust, as the tube is a vacuum.
iii. Phosphor-coated screen – it glows when struck by electrons.
iv. Conductive coating – soaks up the electrons that pile up at the screen end of the tube.
v. Focusing anode – attracts scattered electrons to a focal point.
vi. Accelerating anode – gives the electrons a high velocity, so that their momentum can be used to
produce the light we want.
CRT Monitors
-A CRT monitor contains millions of tiny red, green, and blue phosphor dots that glow when struck by
an electron beam that travels across the screen to create a visible image.
-In a cathode ray tube, the "cathode" is a heated filament, placed in a vacuum created inside a glass
"tube." The "ray" is a stream of electrons, generated by an electron gun, that naturally pour off the
heated cathode into the vacuum. Electrons are negative; the anode is positive, so it attracts the
electrons pouring off the cathode. The screen is coated with phosphor, a material that glows when
struck by the electron beam.
- There is a conductive coating inside the tube to soak up the electrons that pile up at the screen-end of
the tube.
-There are three ways to filter the electron beam in order to obtain the correct image on the monitor
screen: shadow mask, aperture grill and slot mask. These technologies also impact the sharpness of the
monitor's display. They are:
1. Shadow-mask -A shadow mask is a thin metal screen filled with very small holes. Three electron
beams pass through the holes to focus on a single point on a CRT displays' phosphor surface. The
shadow mask helps to control the electron beams so that the beams strike the correct phosphor at just
the right intensity to create the desired colors and image on the display. The unwanted beams are
blocked or "shadowed."
2. Aperture-grill - Monitors based on the Trinitron technology, which was pioneered by Sony, use an
aperture grill instead of a shadow-mask type of tube. The aperture grill consists of tiny vertical wires.
Electron beams pass through the aperture grill to illuminate the phosphor on the faceplate. Most
aperture-grill monitors have a flat faceplate and tend to represent a less distorted image over the
entire surface of the display than the curved faceplate of a shadow-mask CRT. However, aperture-grill
displays are normally more expensive.
3. Slot-mask - A less-common type of CRT display, a slot-mask tube uses a combination of the shadow-
mask and aperture-grill technologies. Rather than the round perforations found in shadow-mask CRT
displays, a slot-mask display uses vertically aligned slots. The design creates more brightness through
increased electron transmission, combined with the arrangement of the phosphor dots.
Advantages of phosphor
i. Electrons are easily knocked off phosphor atoms, giving off light.
ii. Once the electrons start losing energy, the phosphor stays glowing for some time – persistence.
-Persistence is defined as the time it takes for the emitted light from the screen to decay to 1/10th of
its original intensity. Lower-persistence phosphors require higher refresh rates to maintain a picture on
the screen without flicker. A phosphor with low persistence is useful for animation; a high-persistence
phosphor is useful for displaying highly complex static pictures.
-Resolution- The maximum number of points that can be displayed on a CRT screen without overlap is
called the resolution. A typical resolution for a high-definition system is 1280 by 1024.
-Screen size- The physical size of a graphics monitor is given by the length of the screen measured
diagonally, and is normally quoted in inches.
-Aspect Ratio- It gives the ratio of vertical points to horizontal points necessary to produce equal-length
lines in both directions on the screen.
ii. As light strikes the first filter, it is polarized. The molecules in each layer of the liquid crystal then
guide the light they receive to the next layer. As the light passes through the liquid crystal layers, the
molecules also change the light's plane of vibration to match their own angle, so when the light
reaches the far side of the liquid-crystal substance, it vibrates at the same angle as the final layer of
molecules. If the final layer is matched up with the second polarized glass filter, the light passes
through. Applying a voltage to the liquid-crystal material at an intersection of conductors changes the
orientation of the liquid crystal at that intersection: the horizontal component of the light is converted
to a vertical component, and hence the light is transmitted.
If we apply an electric charge to liquid crystal molecules, they untwist. When they straighten out, they
change the angle of the light passing through them so that it no longer matches the angle of the top
polarizing filter. Consequently, no light can pass through that area of the LCD, which makes that area
darker than the surrounding areas.
-Plasma Panel
The basic idea of a plasma display is to illuminate tiny, colored fluorescent lights to form an image. Each
pixel is made up of three fluorescent lights -- a red light, a green light and a blue light.
What is Plasma?
The central element in a fluorescent light is plasma, a gas made up of free-flowing ions (electrically
charged atoms) and electrons (negatively charged particles).
-Under normal conditions, a gas is mainly made up of uncharged particles. That is, the individual gas
atoms include equal numbers of protons (positively charged particles in the atom's nucleus) and
electrons. The negatively charged electrons perfectly balance the positively charged protons, so the
atom has a net charge of zero.
If you introduce many free electrons into the gas by establishing an electrical voltage across it,
negatively charged particles rush toward the positively charged area of the plasma, and positively
charged particles rush toward the negatively charged area.
In this mad rush, particles are constantly bumping into each other. These collisions excite the gas atoms
in the plasma, causing them to release photons of energy.
How the system works
-It is composed of 2 glass plates:
-The first plate is brought toward the second plate until the space between them is small. The edges
are sealed off, leaving the space filled with air. The air inside is then removed and replaced with the
plasma gas (e.g. neon).
Properties of the gas
-Must produce light when ionized.
-Must be easily ionized.
-Must produce the correct color of light when ionized.
Data Structures
-The need to create, modify or manipulate individual portions of a picture without affecting other
parts of the same picture makes it necessary to be:
1. Able to independently reference those individual portions of the picture.
2. Able to specify certain attributes for a portion which differ from those of other parts.
3. Able to add details to a part of the picture while making additional provision for storage
space, etc.
-All this happens in any graphical implementation, such as modeling, animation and computer-aided
design applications.
-A data structure is a set of elements that are related to each other by a set of relations.
Edge – is defined by a pair of vertices.
Surface – is defined by a set of edges, which are in turn defined by vertices.
Three different kinds of data structures can be used to construct an object. They are based on edges,
vertices and surfaces.
-Vertex data are coordinate values for each vertex in the object.
-Edge data consists of pointers back into the vertex table to identify the vertices for each polygon edge.
-The polygon table contains pointers back into the edge table to identify the edges for each polygon.
Fig: An object composed of two surfaces, S1 and S2.
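The three linked tables can be sketched for a simple object. The data below is hypothetical (a unit square split into two triangular surfaces, S1 and S2); only the pointer structure — polygons pointing into the edge table, edges pointing into the vertex table — follows the text.

```python
# Vertex table: coordinate values for each vertex in the object.
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Edge table: each edge is a pair of indices into the vertex table.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Polygon table: each surface is a list of indices into the edge table.
polygons = {
    "S1": [0, 1, 4],   # triangle using edges (0,1), (1,2), (0,2)
    "S2": [2, 3, 4],   # triangle using edges (2,3), (3,0), (0,2)
}

def surface_vertices(name):
    """Follow the pointers: polygon -> edges -> vertices."""
    vs = set()
    for e in polygons[name]:
        vs.update(edges[e])
    return sorted(vs)

assert surface_vertices("S1") == [0, 1, 2]
assert surface_vertices("S2") == [0, 2, 3]
```

Because the shared edge (index 4) is stored once and referenced by both surfaces, moving a vertex updates every surface that uses it — exactly the independent-reference property the data-structure requirements above call for.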
Databases
-It is an organized collection of graphics and non-graphics data related to each other in support of a
common purpose and is stored on secondary storage in a computer.
-A database therefore may be viewed as the implementation of data structures into the computer.
Thus a decision on the data structure has to be made first followed by the choice of a database to
implement such a structure.
-A graphics database must be able to store pictorial data in addition to textual and alphanumeric data.
Homogeneous Coordinates
- OpenGL commands usually deal with two- and three-dimensional vertices, but in fact all are treated
internally as three-dimensional homogeneous vertices comprising four coordinates.
-Homogeneous coordinates introduce a fourth ordinate, i.e. (x, y, z, w).
-Many of our transformations will require translation of the points. For example if we want to move all
the points two units along the x axis we would require:
x' = x + 2
y' = y
z' = z
But how can we do this with a matrix? The answer is to use homogeneous coordinates.
-In most cases the last ordinate will be 1, but in general we can consider it a scale factor.
-Homogenous co-ordinates fall into two types:
1. Those with the final ordinate non-zero, which can be normalised into position vectors.
2. Those with zero in the final ordinate, which are direction vectors and have direction and magnitude
but no fixed position.
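The translation above (x' = x + 2) becomes a single matrix multiply once the fourth coordinate w is added. The sketch below uses plain Python lists; it also shows why the two types of homogeneous coordinates behave differently under translation.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-component homogeneous vertex."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Translation by (2, 0, 0) as a 4x4 homogeneous matrix: the translation
# amounts sit in the last column.
T = [
    [1, 0, 0, 2],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

# A position vector has w = 1, so the translation column is picked up:
assert mat_vec(T, [5, 3, 1, 1]) == [7, 3, 1, 1]

# A direction vector has w = 0, so translation leaves it unchanged:
assert mat_vec(T, [5, 3, 1, 0]) == [5, 3, 1, 0]
```

This is the point of the fourth ordinate: translation, which is not linear in (x, y, z) alone, becomes a linear (matrix) operation in four coordinates and can therefore be composed with rotations and scalings by matrix multiplication.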
Transformation Matrices
-Transformations in computer graphics accomplish the following tasks:
i. Moving Objects- from frame to frame in an animation.
ii. Change of Coordinates- which is used when objects that are stored relative to one reference frame
are to be accessed in a different reference frame. One important case of this is that of mapping objects
stored in a standard coordinate system to a coordinate system that is associated with the camera (or
viewer).
iii. Projection- is used to project objects from the idealized drawing window to the viewport, and
mapping the viewport to the graphics display window.
iv. Mapping- between surfaces, for example, transformations that indicate how textures are to be
wrapped around objects, as part of texture mapping.
-OpenGL has a very particular model for how transformations are performed. Recall that when
drawing, it was convenient for us to first define the drawing attributes (such as color) and then draw a
number of objects using that attribute. OpenGL uses much the same model with transformations. You
specify a transformation, and then this transformation is automatically applied to every object that is
drawn, until the transformation is set again. It is important to keep this in mind, because it implies that
you must always set the transformation prior to issuing drawing commands.
-Because transformations are used for different purposes, OpenGL maintains three sets of matrices for
performing various transformation operations. These are:
Modelview matrix- Used for transforming objects in the scene and for changing the coordinates into a
form that is easier for OpenGL to deal with.
Projection matrix- Handles parallel and perspective projections. (Used for the third task above.)
Texture matrix-This is used in specifying how textures are mapped onto objects. (Used for the last task
above.)
- For each matrix type, OpenGL maintains a stack of matrices. The current matrix is the one on the top
of the stack. It is the matrix that is being applied at any given time. The stack mechanism allows you to
save the current matrix (by pushing the stack down) and restore it later (by popping the stack).
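The stack mechanism can be sketched as a plain array of 4x4 matrices. This is not OpenGL's internal implementation, just an illustrative model of what glPushMatrix()/glPopMatrix() do (type and function names here are hypothetical):

```c
#include <string.h>

#define STACK_DEPTH 32

/* A minimal model of a 4x4 matrix stack: the top entry is the
 * "current" matrix that would be applied to everything drawn. */
typedef struct {
    float m[STACK_DEPTH][16]; /* each entry is a 4x4 matrix */
    int top;                  /* index of the current matrix */
} MatrixStack;

void stack_init(MatrixStack *s) {
    static const float identity[16] = {
        1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    s->top = 0;
    memcpy(s->m[0], identity, sizeof identity);
}

/* push: duplicate the current matrix so it can be restored later */
int stack_push(MatrixStack *s) {
    if (s->top + 1 >= STACK_DEPTH) return 0;  /* stack overflow */
    memcpy(s->m[s->top + 1], s->m[s->top], sizeof s->m[0]);
    s->top++;
    return 1;
}

/* pop: discard the current matrix, restoring the saved one */
int stack_pop(MatrixStack *s) {
    if (s->top == 0) return 0;                /* stack underflow */
    s->top--;
    return 1;
}
```

After a push, further transformations only modify the copy on top; a pop throws that copy away and the previously saved matrix becomes current again.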
Types of transformations
i. Translation
- A translation is applied to an object by repositioning it along a straight-line path from one coordinate
location to another. A 2D point is translated by adding translation distances Tx, Ty to the original
coordinate position (x, y). It is performed by the glTranslate*() command:
void glTranslate{fd}(TYPE x, TYPE y, TYPE z);
-Multiplies the current matrix by a matrix that moves (translates) an object by the given x, y, and z values
(or moves the local coordinate system by the same amounts).
-Note that using (0.0, 0.0, 0.0) as the argument for glTranslate*() is the identity operation - that is, it
has no effect on an object or its local coordinate system.
ii. Rotation- performed by the glRotate*() command.
void glRotate{fd}(TYPE angle, TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that rotates an object (or the local coordinate system) in a
counterclockwise direction about the ray from the origin through the point (x, y, z). The angle parameter
specifies the angle of rotation in degrees.
- For example, glRotatef(45.0, 0.0, 0.0, 1.0) is a rotation of 45 degrees about the z-axis.
iii. Scaling- performed by the glScale*() command. The glScale*() changes the apparent size of an
object. Scaling with values greater than 1.0 stretches an object, and using values less than 1.0 shrinks it.
Scaling with a -1.0 value reflects an object across an axis. The identity values for scaling are (1.0, 1.0,
1.0). In general, you should limit your use of glScale*() to those cases where it is necessary. Using
glScale*() decreases the performance of lighting calculations, because the normal vectors have to be
renormalized after transformation.
- Note: A scale value of zero collapses all object coordinates along that axis to zero. It's usually not a
good idea to do this, because such an operation cannot be undone. Mathematically speaking, the
matrix cannot be inverted, and inverse matrices are required for certain lighting operations.
Example 1
-Suppose that rather than drawing a rectangle that is aligned with the coordinate axes, you want to
draw a rectangle that is rotated by 20 degrees (counterclockwise) and centered at some point (x, y).
The desired result is shown in the Figure below.
- You could compute the rotated coordinates of the vertices yourself (using the appropriate
trigonometric functions), but OpenGL provides a way of doing this transformation more easily.
-Suppose that we are drawing within the unit square. Suppose we have a 4 x 4 sized
rectangle to be drawn centered at location (x, y). We could draw an unrotated rectangle with the
following command:
glRectf(x - 2, y - 2, x + 2, y + 2);
-Note that the arguments should be of type GLfloat (2.0f rather than 2).
-Assume that the matrix mode is GL_MODELVIEW (the default). Generally, there will be some
existing transformation (call it M) currently present in the Modelview matrix. This usually represents
some more global transformation, which is to be applied on top of our rotation. For this reason, we will
compose our rotation transformation with this existing transformation.
-Also, we should save the contents of the Modelview matrix, so we can restore its contents
after we are done. Because the OpenGL rotation function destroys the contents of the
Modelview matrix, we will begin by saving it, by using the command glPushMatrix(). Saving the
Modelview matrix in this manner is not always required, but it is considered good form.
Then we will compose the current matrix M with an appropriate rotation matrix R. Then we draw the
rectangle (in upright form). Since all points are transformed by the Modelview matrix prior to
projection, this will have the effect of rotating our rectangle. Finally, we will pop off this matrix (so
future drawing is not rotated).
-To perform the rotation, we will use the command glRotatef(ang, x, y, z). All arguments are GLfloats.
(Or, recalling OpenGL's naming convention, we could use glRotated(), which takes GLdouble
arguments.) This command constructs a matrix that performs a rotation in 3-dimensional space
counterclockwise by angle ang degrees, about the vector (x, y, z). It then composes (or multiplies) this
matrix with the current Modelview matrix. In our case the angle is 20 degrees. To achieve a rotation in
the (x, y) plane the vector of rotation would be the z unit vector, (0, 0, 1). Here is how the code might
look (but beware, this conceals a subtle error).
glPushMatrix (); // save the current matrix
glRotatef(20, 0, 0, 1); // rotate by 20 degrees CCW
glRectf(x-2, y-2, x+2, y+2); // draw the rectangle
glPopMatrix(); // restore the old matrix
-Although this may seem backwards, it is the way in which almost all object transformations are
performed in OpenGL:
i. Push the matrix stack,
ii. Apply (i.e., multiply) all the desired transformation matrices with the current matrix,
iii. Draw your object (the transformations will be applied automatically), and
iv. Pop the matrix stack.
- The order of the rotation relative to the drawing command may seem confusing at first. You might
think,
“Shouldn’t we draw the rectangle first and then rotate it?”. The key is to remember that whenever you
draw (using glRectf() or glBegin()...glEnd()), the points are automatically transformed using the current
Modelview matrix. So, in order to do the rotation, we must first modify the Modelview matrix, then draw
the rectangle. The rectangle will be automatically transformed into its rotated state. Popping the matrix
at the end is important, otherwise future drawing requests would also be subject to the same rotation.
Example 2: Rotating a Rectangle (correct): Something is wrong with the example given above. The
rotation is performed about the origin of the coordinate system, not about the center of the rectangle
as we want.
-Since M is on the top of the stack, we need to first apply the translation (T) to M, then apply the rotation
(R) to the result, and then do the drawing (v). The final code is given by:
glPushMatrix(); // save the current matrix (M)
glTranslatef(x, y, 0); // apply translation (T)
glRotatef(20, 0, 0, 1); // apply rotation (R)
glRectf(-2,-2, 2, 2); // draw rectangle at the origin
glPopMatrix(); // restore the old matrix (M)
Write a Program to draw animation using increasing circles filled with different colors and patterns.
#include<graphics.h>
#include<conio.h>
void main()
{
    int gd=DETECT, gm, i, x, y;
    initgraph(&gd, &gm, "C:\\TC\\BGI");
    x=getmaxx()/3;
    y=getmaxy()/3;
    setbkcolor(WHITE);
    setcolor(BLUE);
    for(i=1;i<=8;i++)
    {
        setfillstyle(i, i);            /* a different pattern and colour each pass */
        delay(20);
        circle(x, y, i*20);
        floodfill(x-2+i*20, y, BLUE);  /* seed point just inside the new circle */
    }
    getch();
    closegraph();
}
It is one of the most popular line-clipping algorithms. It assigns 4-bit region codes to the
endpoints of a line, then checks and operates on the endpoint codes to identify totally
visible lines and totally invisible lines (lying completely on one side of the clip window, externally). The Cohen-
Sutherland line-clipping algorithm was originally introduced by Danny Cohen and Ivan Sutherland.
To clip the remaining lines, i.e. the partially visible ones, the algorithm breaks the line segments into
smaller subsegments by finding intersections with the appropriate window edges. For a pair consisting of a
non-zero-code endpoint and an intersection point, the corresponding subsegment is checked for the two primary
visibility states as in the earlier steps. The process is repeated until two visible intersections are found or no
intersection with any of the four visible window edges is found. Thus the algorithm cleverly reduces the
number of intersection calculations.
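The 4-bit region code described above can be computed with a few comparisons. A sketch in C (the bit assignment below is one common convention; textbooks vary):

```c
/* One bit per side of the clip window [xmin, xmax] x [ymin, ymax]:
 * bit set means the point lies beyond that edge. A code of 0 means
 * the point is inside the window. */
enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

int outcode(double x, double y,
            double xmin, double ymin, double xmax, double ymax) {
    int code = 0;
    if (x < xmin)      code |= LEFT;
    else if (x > xmax) code |= RIGHT;
    if (y < ymin)      code |= BOTTOM;
    else if (y > ymax) code |= TOP;
    return code;
}
```

Both endpoint codes zero means the line is totally visible; a non-zero bitwise AND of the two codes means both endpoints lie beyond the same edge, so the line is totally invisible.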
1. Read the endpoints P1, P2 and assign 4-bit region codes code-P1 and code-P2 to them.
2. Initialize j=1
while j<=2
3. if codes of P1 and P2 are both equal to zero then draw P1P2 (totally visible)
4. if logical AND operation of code-P1 and code-P2 is not equal to zero then ignore
P1P2 (totally invisible)
5. if code-P1=0 then swap P1 and P2 along with their flags and set i=1
6. if code-P1<>0 then
for i=1
{
if C1 left=1 then
find intersection (xL, y'L) with left edge
assign code to (xL, y'L)
P1=(xL, y'L)
end if
i=i+1
go to 3
}
for i=2
{
if C1 right=1 then
find intersection (xR, y'R) with right edge
assign code to (xR, y'R)
P1=(xR, y'R)
end if
i=i+1
go to 3
}
for i=3
{
if C1 bottom=1 then
find intersection (x'B, yB) with bottom edge
assign code to (x'B, yB)
P1=(x'B, yB)
end if
i=i+1
go to 3
}
for i=4
{
if C1 top=1 then
find intersection (x'T, yT) with top edge
assign code to (x'T, yT)
P1=(x'T, yT)
end if
i=i+1
go to 3
}
end
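The steps above can be sketched as one compact C function. This is a standard formulation of Cohen-Sutherland, not a literal transcription of the numbered steps (it loops until the segment is accepted or rejected, moving one outside endpoint to an edge intersection on each pass):

```c
#include <stdbool.h>

enum { LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

static int code(double x, double y,
                double xmin, double ymin, double xmax, double ymax) {
    int c = 0;
    if (x < xmin) c |= LEFT;   else if (x > xmax) c |= RIGHT;
    if (y < ymin) c |= BOTTOM; else if (y > ymax) c |= TOP;
    return c;
}

/* Clip the segment (x1,y1)-(x2,y2) against [xmin,xmax] x [ymin,ymax].
 * Returns true and updates the endpoints if any part is visible. */
bool clip_line(double *x1, double *y1, double *x2, double *y2,
               double xmin, double ymin, double xmax, double ymax) {
    int c1 = code(*x1, *y1, xmin, ymin, xmax, ymax);
    int c2 = code(*x2, *y2, xmin, ymin, xmax, ymax);
    for (;;) {
        if ((c1 | c2) == 0) return true;   /* totally visible */
        if ((c1 & c2) != 0) return false;  /* totally invisible */
        /* Take an endpoint that is outside and replace it by the
         * intersection with the edge named by one of its code bits. */
        int cout = c1 ? c1 : c2;
        double x, y;
        if (cout & TOP) {
            x = *x1 + (*x2 - *x1) * (ymax - *y1) / (*y2 - *y1); y = ymax;
        } else if (cout & BOTTOM) {
            x = *x1 + (*x2 - *x1) * (ymin - *y1) / (*y2 - *y1); y = ymin;
        } else if (cout & RIGHT) {
            y = *y1 + (*y2 - *y1) * (xmax - *x1) / (*x2 - *x1); x = xmax;
        } else { /* LEFT */
            y = *y1 + (*y2 - *y1) * (xmin - *x1) / (*x2 - *x1); x = xmin;
        }
        if (cout == c1) { *x1 = x; *y1 = y; c1 = code(x, y, xmin, ymin, xmax, ymax); }
        else            { *x2 = x; *y2 = y; c2 = code(x, y, xmin, ymin, xmax, ymax); }
    }
}
```

For example, a horizontal line from (-5, 5) to (15, 5) against the window (0, 0)-(10, 10) is clipped to (0, 5)-(10, 5) after two intersection calculations, one with the left edge and one with the right.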
RTK uses the carrier-phase dynamic real-time differencing method to obtain centimetre-level accuracy
measurement results in the field in real time. RTK works as follows: the reference station sends its
observation values together with its station coordinates to the mobile station over a data link
(radio modem). The mobile station receives this data while simultaneously collecting GPS satellite
signals to obtain its own observation data. The system forms differential observation values for
real-time processing and promptly delivers the mobile station's position coordinates
with centimetre accuracy. RTK is widely used in line alignment and land survey.
This technology uses infrared or visible light and calculates the distance between two points along a
line by measuring the travel time of the light. It is mainly used in
electronic levels, total stations and other equipment.
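The time-of-flight principle behind these instruments can be illustrated with a one-line computation: the light travels to the target and back, so the one-way distance is c * t / 2 (the function name is illustrative, not from any instrument's API):

```c
/* Convert a measured round-trip travel time of a light pulse into a
 * one-way distance in metres: d = c * t / 2. */
double edm_distance_m(double round_trip_seconds) {
    const double C = 299792458.0; /* speed of light in vacuum, m/s */
    return C * round_trip_seconds / 2.0;
}
```

A round trip of one microsecond corresponds to roughly 150 m, which shows why these instruments need picosecond-scale timing (or phase-comparison techniques) to reach millimetre accuracy.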
The electronic level, also called the digital level, was invented by Zeiss in the 1990s and developed on
the basis of the automatic level: mirrors and detectors are added in the optical path of the telescope,
and an image-processing electronic system together with a bar-code staff turns the instrument into a
high-tech product integrating optical, mechanical and electronic measurement.
This technology effectively complements the deficiencies of GPS positioning technology, and has the
advantages of fast measurement speed and high accuracy. The electronic level plays an important role in
digital measurement.
Compared with traditional survey technology, digital survey technology has the following
characteristics in the application of engineering survey: