
1.0 Introduction to Computer Graphics (C.G)

Computer graphics (C.G) involves technology to access, transform, and present information in a visual form. The end product of computer graphics is a picture. In computer graphics, 2D or 3D pictures can be created. Algorithms have been developed to improve the speed of picture generation.

Definition of Computer Graphics:


It is the use of computers to create and manipulate pictures on a display device. It comprises software
techniques to create, store, modify, and represent pictures.

Why is computer graphics used?


Without computer graphics, presenting information would require a lot of time and memory, and the result would be hard for a common person to understand. In this situation, graphics is a better option. Graphics tools include charts and graphs; using them, data can be represented in pictorial form. A picture can be understood easily with just a single look.

Application of Computer Graphics


1. Education and Training: Computer-generated models of physical, financial and economic systems are
often used as educational aids. Models of physical and physiological systems can help trainees
understand how the systems operate.

2. Flight Simulators: They help in training the pilots of airplanes. Pilots spend much of their
training not in a real aircraft but on the ground at the controls of a flight simulator.

3. Use in Biology: Molecular biologists can display pictures of molecules and gain insight into their
structure with the help of computer graphics.

4. Presentation Graphics: Examples of presentation graphics are bar charts, line graphs, pie charts and
other displays showing relationships between multiple parameters.

5. Entertainment: Computer graphics is now commonly used in making motion pictures, music videos
and television shows.

6. Printing Technology: Computer graphics is used in printing technology and textile design.

Interactive and Passive Graphics


(a) Non-Interactive or Passive Computer Graphics:
In non-interactive computer graphics, the user cannot make any change in the rendered image. Non-
interactive Graphics involves only one-way communication between the computer and the user.

(b) Interactive Computer Graphics:


In interactive computer graphics, the user can make changes to the produced image. Interactive
computer graphics requires two-way communication between the computer and the user.
Advantages:
1. Higher Quality
2. More precise results or products
3. Greater Productivity
4. Lower analysis and design cost
5. Significantly enhances our ability to understand data and to perceive trends.
GRAPHICS SOFTWARE (OPENGL)
OpenGL is the primary basis for 3D graphics programming, supported by the graphics hardware in most
modern computing devices, including desktop computers, laptops, and mobile devices. In the first
desktop computers, the contents of the screen were managed directly by the CPU; for example, to draw a line
segment on the screen, the CPU would run a loop to set the color of each pixel that lies along the line.
Graphics could take up a lot of the CPU's time, and performance was very slow.

In modern computers, graphics processing is done by a specialized component called a GPU, or Graphics
Processing Unit. A GPU includes processors for doing graphics computations with its own dedicated
memory for storing things like images and lists of coordinates.

GPU processors have very fast access to data that is stored in GPU memory, faster than their access to
data stored in the computer’s main memory. To draw a line or perform some other graphical
operations, the CPU simply has to send commands, along with any necessary data, to the GPU, which is
responsible for actually carrying out those commands.

The CPU offloads most of the graphical work to the GPU, which is optimized to carry out that work very
fast. The set of commands that the GPU understands makes up the API of the GPU. OpenGL is an API, and
most GPUs support OpenGL in the sense that they can understand OpenGL commands, or at least that
OpenGL commands can be efficiently translated into commands that the GPU can understand.

OpenGL is an API that has been subject to repeated extension and revision. It was designed
as a “client/server” system. The server, which is responsible for controlling the computer’s display and
performing graphics computations, carries out commands issued by the client. The server executes
OpenGL commands. The client is the CPU in the same computer, along with the application program
that it is running.

OpenGL commands come from the program that is running on the CPU. However, it is actually possible
to run OpenGL programs remotely over a network. That is, you can execute an application program on a
remote computer (the OpenGL client), while the graphics computations and display are done on the
computer that you are actually using (the OpenGL server).

The client & the server are separate, and there’s a communication channel between those components.
OpenGL commands and the data that they need are communicated from the client (the CPU) to the
server (the GPU) over that channel. One of the driving factors in the evolution of OpenGL has been the
desire to limit the amount of communication that is needed between the CPU and the GPU.

One approach is to store information in the GPU’s memory. If some data is going to be used several
times, it can be transmitted to the GPU once and stored in memory there, where it will be immediately
accessible to the GPU. Another approach is to try to decrease the number of OpenGL commands that
must be transmitted to the GPU to draw a given image.

With OpenGL 2.0, it became possible to write programs to be executed as part of the graphical
computation in the GPU. A programmer who wants to use a new graphics technique can write a
program to implement the feature and just hand it to the GPU. The OpenGL API doesn’t have to be
changed. The only thing that the API has to support is the ability to send programs to the GPU for
execution.
BASIC GRAPHIC DESIGN PRINCIPLES
Contrast
Contrast refers to how different elements are in a design, particularly adjacent elements. These
differences make various elements stand out. Contrast is also a very important aspect of
creating accessible designs. Insufficient contrast can make text content in particular very difficult to read

Proportion
Simply put, it’s the size of elements in relation to one another. Proportion signals what’s important in a
design and what isn’t. Larger elements are more important, smaller elements less.

Hierarchy
Hierarchy refers to the importance of elements within a design. The most important elements or
content should appear to be the most important. It is most easily illustrated through the use of titles
and headings in a design. The title of a page should be given the most importance

Repetition
Repetition is a great way to reinforce an idea. It’s also a great way to unify a design that brings together
a lot of different elements. Repetition can be done in a number of ways: via repeating the same colors,
shapes, or other elements of a design. An example is the use of repetition in the format of the headings.

Rhythm
The spaces between repeating elements can cause a sense of rhythm to form, similar to the way the
space between notes in a musical composition create a rhythm. Rhythms can be used to create
excitement (particularly flowing and progressive rhythms) or create reassurance and consistency.

Pattern
Patterns are a repetition of multiple design elements working together. In design, however, patterns can
also refer to set standards for how certain elements are designed. For example, top navigation is a
design pattern that the majority of internet users have interacted with.

Movement
Movement refers to the way the eye travels over a design. The most important element should lead to
the next most important and so on. This is done through positioning (the eye naturally falls on certain
areas of a design first), emphasis, and other design elements already mentioned.

Variety
Variety in design is used to create visual interest. Without variety, a design can very quickly become
monotonous, causing the user to lose interest. Variety can be created in a variety of ways, through
color, typography, images, shapes, and virtually any other design element.
Unity
Unity refers to how well the elements of a design work together. Visual elements should have clear
relationships with each other in a design. Unity also helps ensure concepts are being communicated in a
clear, cohesive fashion.
CHAPTER 2: GRAPHICS DISPLAYS (DISPLAY DEVICES)
The display device is an output device used to represent the information in the form of images (visual
form). Display systems are mostly called a video monitor or Video display unit (VDU).
Display devices are designed to model, display, and view information. The purpose of display
technology is to simplify information sharing.

There are some display devices given below:

1. Cathode-Ray Tube(CRT)
2. Liquid crystal display(LCD)
3. Light Emitting Diode(LED)
4. Laser Devices

CATHODE-RAY TUBE (CRT):

Here, CRT stands for cathode ray tube. It is the technology used in traditional
computer monitors and televisions. A cathode ray tube is a particular type of vacuum tube that displays
images when an electron beam strikes its phosphorescent surface.

Components of CRT

 Electron Gun: The electron gun is made up of several elements, mainly a heating filament
(heater) and a cathode. It is a source of electrons, focused into a narrow beam
directed at the face of the CRT.
 Focusing & Accelerating Anodes: These anodes are used to produce a narrow and sharply
focused beam of electrons.
 Horizontal & Vertical Deflection Plates: These plates are used to guide the path of the electron
beam. The plates produce an electric field that bends the electron beam as it travels
toward the screen.
 Phosphor-coated Screen: The phosphor-coated screen produces bright spots
when the high-velocity electron beam hits it.
There are two ways to represent an object on the screen:

1. Raster Scan: It is a scanning technique in which the electron beam moves across the screen,
from top to bottom, covering one line at a time. A raster scan controls the intensity of pixels within a
rectangular grid on the screen called a raster. The picture definition is stored in a memory area
called the refresh buffer, or frame buffer (also known as the raster or bitmap). Raster scan
provides a refresh rate of 60 to 80 frames per second.

For Example: Television
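The memory and bandwidth implications of the frame buffer can be sketched with a quick calculation (the resolution and color depth below are illustrative, not taken from the text):

```python
# Frame buffer size = width x height x bits per pixel.
# Illustrative figures: a 1024 x 768 display with 24 bits per pixel.
width, height, bits_per_pixel = 1024, 768, 24

bits = width * height * bits_per_pixel
kilobytes = bits / 8 / 1024

print(f"Frame buffer: {kilobytes:.0f} KB")  # Frame buffer: 2304 KB

# At a 60 Hz refresh rate, every pixel is redrawn 60 times per second:
pixels_per_second = width * height * 60
print(f"Pixels refreshed per second: {pixels_per_second}")
```

This back-of-the-envelope figure is why early raster hardware was memory-hungry compared with vector displays, which only stored line commands.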


The beam refreshing has two types:
1. Horizontal Retracing
2. Vertical Retracing
When the beam finishes the bottom scan line and returns from the bottom right to the top left corner of
the screen, it is called a vertical retrace. The return of the beam from the right end of one scan line to
the left end of the next is called a horizontal retrace.

Advantages:
1. Real image
2. Many colors to be produced
3. Dark scenes can be pictured
Disadvantages:
1. Less resolution
2. Display picture line by line
3. More costly
2. Random Scan (Vector scan): Also known as stroke-writing display or calligraphic display. The electron
beam points only to the area in which the picture is to be drawn. It uses an electron beam like a pencil
to make a line image on the screen. The image is constructed from a sequence of straight-line segments.
On the screen, each line segment is drawn by moving the beam from one end point to the
other, where each point is defined by its x and y coordinates. After all the line-drawing commands have
been processed, the system cycles back to the first command and redraws all the lines of the picture 30
to 60 times per second.
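The refresh behaviour just described can be modeled as replaying a display file of line commands (a simplified software sketch; a real random scan system does this in hardware, 30 to 60 times per second):

```python
# A vector (random scan) display keeps a display file: a list of
# line-drawing commands. Each refresh cycle replays the whole list.
display_file = [
    ((0, 0), (100, 0)),     # line from (0, 0) to (100, 0)
    ((100, 0), (100, 80)),
    ((100, 80), (0, 0)),
]

def refresh(display_file):
    """Replay every line command once; stands in for one refresh cycle."""
    drawn = []
    for start, end in display_file:
        drawn.append((start, end))   # stand-in for "move beam, draw line"
    return drawn

frame = refresh(display_file)
print(len(frame))   # 3 lines drawn per cycle
```

Note how the memory cost depends on the number of lines, not the screen resolution, which matches the advantages listed later for random scan displays.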

Fig: A Random Scan display draws the lines of an object in a specific order
Advantages:

1. High Resolution
2. Draw smooth line Drawing
Disadvantages:

1. It can only draw wireframe images.


2. Complex scenes cause flicker, since redrawing many lines takes longer than the refresh period.

Difference between Random and Raster Scan Display:

Random Scan                                          Raster Scan

1. It has high resolution.                           1. Its resolution is low.

2. It is more expensive.                             2. It is less expensive.

3. Modification, if needed, is easy.                 3. Modification is tough.

4. Solid patterns are tough to fill.                 4. Solid patterns are easy to fill.

5. Refresh rate depends on resolution.               5. Refresh rate does not depend on the picture.

6. Only the screen area containing the picture       6. The whole screen is scanned.
   is displayed.

7. Beam penetration technology is used for color.    7. Shadow mask technology is used for color.

8. It does not use the interlacing method.           8. It uses interlacing.

9. It is restricted to line-drawing applications.    9. It is suitable for realistic display.

LIQUID CRYSTAL DISPLAY (LCD):

An LCD is a flat-panel display that uses the properties of liquid crystals to display a picture. It
relies on crystals and polarizers to produce the picture or video. A polarizer is an optical filter that
lets light of one polarization pass through while blocking light of other polarizations. Liquid crystal
displays produce a picture by passing polarized light, from the surroundings or from an internal light
source, through a liquid-crystal material that transmits the light.

An LCD places the liquid-crystal material between two glass plates, with the liquid filling the gap
between them. One glass plate carries rows of conductors arranged in the vertical direction; the other
carries rows of conductors arranged in the horizontal direction, at right angles to the first. A pixel
position is determined by the intersection of a vertical and a horizontal conductor.
The liquid crystals do not emit light themselves; they need a backlight to produce an image, which the
LCD forms as a grid of pixels. LCDs have replaced the heavy CRTs (cathode ray tubes) used in earlier
models of television, and are now available in all sizes, ranging from smart watches to large
screens.

Characteristics of LCDs.

1. An LCD consists of two primary parts i.e. the electrodes and polarizing filters. Both are placed
perpendicular to each other so that the light can pass accordingly, thereby, leading to display
the perfect picture/ video.
2. LCD consists of molecules that are placed between the electrodes. This is how an image is
generated in pixels.
3. LCD screens are energy efficient. They reduce the power consumption as they do not use CRTs
anymore.
Benefits of LCD.

 It has a much better display and is relatively thinner as compared to earlier electronic gadgets.
 The color and brightness given by the LCDs are phenomenal. The brightness and contrast help in
representing the perfect image.
 LCD reduces power consumption and increases efficiency.
Advantages:

1. Produce a bright image


2. Energy efficient
3. Completely flat screen

Disadvantages:
1. Fixed aspect ratio & Resolution
2. Lower Contrast
3. More Expensive
LIGHT EMITTING DIODE (LED):

An LED is a semiconductor device which emits light when current passes through it.

The size of an LED is small, so a display unit of any size can easily be made by arranging a large number of LEDs.
LEDs consume more power than LCDs. LEDs are used in TVs, smartphones, motor vehicles, traffic
lights, etc. LEDs are structurally robust, so they are capable of withstanding mechanical pressure,
and they also work at high temperatures.

Advantages:

1. The Intensity of light can be controlled.


2. Low operational Voltage.
3. Capable of handling the high temperature.
Disadvantages:

1. More Power Consuming than LCD.

LASER DEVICES:

LASER stands for Light Amplification by Stimulated Emission of Radiation. A laser is an electronic
device that produces light (a form of electromagnetic radiation) through optical amplification.

Properties

Laser radiation has very special properties that make it useful in different types of applications. It
is used in a wide variety of electronic devices such as CD-ROM drives and barcode readers. Laser
light is very narrow and coherent.

Applications

o Used in CD and DVD ROMs.


o Used in Barcode Scanners.
o Used in integral part of Nuclear Fusion Reactors.
o Used in different type of devices i.e. cutting, drilling, surface treatment, soldering, welding devices.
o Used in medical equipment i.e. dentistry, cosmetic treatment devices.
o Used in Laser printing machines.
POINT PLOTTING TECHNIQUES

GRAPHICS HARDWARE
i). Display Processor:
It is an interpreter or a piece of hardware that converts display-processor code into pictures. The
display processor system has four main parts.

Parts of Display Processor


1. Display File Memory
2. Display Processor
3. Display Generator
4. Display Console
Display File Memory: It is used for generation of the picture. It is used for identification of graphic
entities.

Display Controller:
1. It handles interrupt
2. It maintains timings
3. It is used for interpretation of instruction.

Display Generator:

1. It is used for the generation of character.


2. It is used for the generation of curves.

Display Console: It contains CRT, Light Pen, and Keyboard and deflection system.

The raster scan system is a combination of processing units. It consists of the central processing
unit (CPU) and a special-purpose processor called the display controller, which controls the
operation of the display device. It is also called a video controller.

Working: The video controller in the output circuitry generates the horizontal and vertical drive signals
so that the monitor can sweep its beam across the screen during raster scans.

As the figure shows, two registers (an X register and a Y register) are used to store the coordinates of the
screen pixels. Assume that the y values of adjacent scan lines increase by 1 in the upward direction,
starting from 0 at the bottom of the screen to ymax at the top, and that along each scan line the pixel
positions, or x values, are incremented by 1 from 0 at the leftmost position to xmax at the rightmost
position. The origin is at the lower left corner of the screen, as in a standard Cartesian coordinate system.
At the start of a Refresh Cycle:

The X register is set to 0 and the Y register is set to ymax. This (x, y) address is translated into a memory
address in the frame buffer where the color value for this pixel position is stored. The controller receives
this color value (a binary number) from the frame buffer, breaks it up into three parts, and sends each
part to a separate digital-to-analog converter (DAC).

These voltages, in turn, control the intensities of the three electron beams that are focused at the (x, y)
screen position by the horizontal and vertical drive signals. This process is repeated for each pixel along
the top scan line, each time incrementing the X register by 1.

As pixels on the first scan line are generated, the X register is incremented through xmax. Then the X
register is reset to 0, and the Y register is decremented by 1 to access the next scan line. The pixels along
each scan line are then processed, and the procedure is repeated for each successive scan line until the
pixels on the last scan line (y = 0) are generated.
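The register sequence above can be sketched as nested loops (a simplified software model of the video controller; the names are illustrative):

```python
def refresh_cycle(frame_buffer, xmax, ymax):
    """Walk the frame buffer in raster order: y from ymax down to 0,
    x from 0 up to xmax, emitting (x, y, color) for each pixel."""
    emitted = []
    y = ymax
    while y >= 0:                      # one pass per scan line
        for x in range(xmax + 1):      # X register incremented through xmax
            color = frame_buffer[(x, y)]
            emitted.append((x, y, color))
        y -= 1                         # Y register decremented for next line
    return emitted

# Tiny 2x2 "screen": coordinates (x, y) with origin at the bottom left.
fb = {(0, 0): 1, (1, 0): 2, (0, 1): 3, (1, 1): 4}
pixels = refresh_cycle(fb, xmax=1, ymax=1)
print(pixels[0])   # top left pixel is emitted first: (0, 1, 3)
```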

For a display system employing a color look-up table frame buffer value is not directly used to control
the CRT beam intensity. It is used as an index to find the three pixel-color value from the look-up table.
This lookup operation is done for each pixel on every display cycle.
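The lookup step can be illustrated with a tiny table (the entries are made up for illustration):

```python
# Color look-up table (LUT): the frame buffer stores a small index,
# and the table maps the index to a full (R, G, B) color.
lookup_table = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]

frame_buffer_value = 2                 # stored per pixel: only 2 bits here
r, g, b = lookup_table[frame_buffer_value]
print(r, g, b)   # 0 255 0

# With an 8-bit frame buffer and a 256-entry table of 24-bit colors,
# each pixel costs 8 bits instead of 24, at the price of one lookup
# per pixel per display cycle.
```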

As the time available to display or refresh a single pixel on the screen is very short, accessing the frame
buffer separately for each pixel's intensity value would consume more time than is allowed. Instead:

Multiple adjacent pixel values are fetched from the frame buffer in a single access and stored in a register.
After each allowable time gap, one pixel value is shifted out of the register to control the beam
intensity for that pixel. The procedure is repeated with the next block of pixels, and so on, until the
whole group of pixels has been processed.
ii). Character Generators (CG)
In the world of video production, a character generator (CG) is a software application that produces
static or animated text for use in 2D and 3D videos. A CG can be used to create anything from simple
Lower Thirds text to full-blown 3D animations. A character generator, or CG, is a tool used to create
digital characters. These characters can be used in video games, movies, and other digital media. CGs
are created by artists who design the characters and then use software to bring them to life. The most
common CG is the 3D character generator. This type of CG allows artists to create realistic-looking
characters that can be used in movies and video games.

2D character generators are also common, but they are not as realistic as 3D CGs. 2D CGs are often used
for cartoons and other types of artwork. They are usually less expensive than 3D character generators
and easier to use. No matter what type of character generator you use, the process of creating a digital
character generally follows the same steps: first, the artist designs the character; then, they build the
model using software; finally, they animate the character using motion capture or keyframing
techniques.

Working of Character Generators:


A character generator, or CG, is a device that creates graphic images and animations for use in video
productions. The images are usually created from scratch by a team of artists, or they may be taken
from a pre-existing database of images. The animations are created by an animator, who designs the
movement of the characters and objects in the scene. The CG is used to generate the images and
animations that are then combined with live-action footage or other graphics to create a final video
production.

Advantages of Character Generator:


Perhaps the most obvious benefit is that it can save you a lot of time. If you’re not experienced
in drawing or creating digital art, it can be very time-consuming to create believable and
detailed characters. With a character generator, you can simply input your desired
characteristics and have a professional-looking character in minutes.
Another great benefit of using a character generator is that you can try out different looks for
your characters without having to commit to one right away. This can be helpful if you’re
unsure of what kind of look you want for your story, or if you want to experiment with different
designs before finalizing anything. You can also easily change up a character’s appearance

Types of Character Generators:


Here are a few of the most common:

1. 2D character generators create two-dimensional characters that can be used in a variety of


applications, such as video games or animated films.
2. 3D character generators create three-dimensional characters that can be used in a variety
of applications, such as video games or animated films.
3. Motion capture character generators use motion capture technology to record the
movement of real people and then generate realistic character animations from that data.
4. Facial recognition character generators use facial recognition algorithms to generate
characters that look like specific people or celebrities.
COMPUTER GRAPHICS/OUTPUT PRIMITIVE

Introduction
Computer graphics can depict anything: beautiful scenery, images, terrain, trees, or anything else that
we can imagine. However, all computer graphics are built from the most basic components of
computer graphics, called graphics output primitives, or simply primitives. Primitives are the
simple geometric functions used to generate the various computer graphics required by the user.
The most basic output primitives are the point position (pixel) and the straight line. Different
graphics packages offer additional output primitives such as rectangles, conic sections, circles, spline
curves, or even surfaces. Once the picture to be displayed is specified, the various locations are converted
into integer pixel positions within the frame buffer, and various functions are used to generate the
picture on the two-dimensional coordinate system of the output display.

Computer Screen Coordinates


In video monitors, locations are referenced using integer screen coordinates, which correspond to
pixel positions in the frame buffer. These coordinates give the column number (x) and scan line
number (y). During screen refreshing, the address of each pixel, along with other information stored in
the frame buffer, is used to generate the pixel on the screen, with coordinates measured relative to the
top left corner of the screen. However, it is possible to modify the origin of the coordinate system using
software commands or hardware controls. The frame buffer stores, for each pixel location, information
such as the intensity and color to be generated on the screen.

Absolute and Relative Coordinate


The addresses stored in the frame buffer can be absolute or relative. Various graphics packages
allow the location of an output primitive to be declared using relative coordinates. This method is
used in graphics applications such as drawing programs. In a relative coordinate system, every pixel
location is defined as an offset that is added to the last pixel position.

Point Function

A point function is the most basic output primitive in a graphics package. A point function takes a
location as x and y coordinates, and the user may also pass other attributes such as intensity and color.
The location is stored as a tuple of two integers, and the color is often defined using a hex code. The size
of a point is equal to the size of a pixel on the display monitor.
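A point function can be modeled as writing one color value into the frame buffer (a minimal sketch; the function name and default color are illustrative):

```python
def set_pixel(frame_buffer, x, y, color="#FF0000"):
    """Store a color (here a hex code) at integer location (x, y)."""
    frame_buffer[(int(round(x)), int(round(y)))] = color

fb = {}
set_pixel(fb, 10, 20)                 # default red
set_pixel(fb, 3.6, 7.2, "#00FF00")    # non-integer input is rounded
print(fb[(4, 7)])   # #00FF00
```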

A Line Function
A line function is used to generate a straight line between any two end points. Usually a line function is
given the locations of two pixels, called the starting point and the end point, and it is up to the
computer to decide which pixels fall between these two points so that a straight line is generated.

Line Drawing Algorithms


When a computer needs to determine the positions of the pixels that fall between two given points in
order to generate a straight line, it requires an algorithm. Many methods can be used to draw a line;
the most commonly used are the DDA and Bresenham line-drawing algorithms. The DDA (Digital
Differential Analyzer) takes the two end points of a line and then plots pixels one at a time along the
path between them.
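A minimal Python sketch of the DDA approach just described (note that Python's round() rounds halves to even, so the exact pixels chosen may differ slightly from textbook pseudocode):

```python
def dda_line(x0, y0, x1, y1):
    """Digital Differential Analyzer: return the pixel positions
    between (x0, y0) and (x1, y1), inclusive."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))     # step along the longer axis
    if steps == 0:
        return [(x0, y0)]             # degenerate line: a single point
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return points

print(dda_line(0, 0, 4, 2))   # [(0, 0), (1, 0), (2, 1), (3, 2), (4, 2)]
```

Because one coordinate advances by a fractional increment each step, DDA needs floating-point arithmetic and rounding, which is exactly the overhead Bresenham's integer-only algorithm avoids.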
GRAPHICS DISPLAY TECHNIQUES
COLOR DISPLAY TECHNIQUES
In computer graphics, several different mathematical systems exist for describing colors. The color
systems used in computer graphics are typically three-primary color systems. Primary colors are those
which cannot be created by mixing other colors. In such a system, a color is defined by specifying an
ordered set of three values, and composite colors are created by mixing varying amounts of the three
primary colors.

Two categories of color display techniques are:


1. Additive color system:
In additive systems, colors are created by adding color to black: the more color that is added, the
closer the result tends towards white. The presence of all the primary colors in sufficient amounts
creates pure white, while the absence of all the primary colors creates pure black. Additive color
environments are self-luminous; for example, an image displayed on a screen uses an additive
color system.

2. Subtractive color system:


The subtractive color system is the opposite of the additive color system. Conceptually, primary colors
are subtracted from white to create new colors: the more color that is subtracted, the closer the result
tends towards black. Thus the presence of all primary colors creates pure black.

In computer graphics the following color systems are widely used:


(i) RGB – Red, Green, Blue
The RGB color system is the most widely used color system in image formats today. It is an additive
color system in which different amounts of red, green, and blue are added to black to produce new
colors. Graphics formats that follow the RGB color system represent each color instance as a
combination of three values, known as a triplet, with each component ranging from 0 to 255:

  0   0   0 ------- Black
      |
255 255 255 ------- White
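A quick sketch of additive RGB triplets and their familiar hex codes:

```python
def to_hex(r, g, b):
    """Format an RGB triplet as the familiar #RRGGBB hex code."""
    return f"#{r:02X}{g:02X}{b:02X}"

black = (0, 0, 0)
white = (255, 255, 255)
yellow = (255, 255, 0)          # red + green light added together

print(to_hex(*black))    # #000000
print(to_hex(*white))    # #FFFFFF
print(to_hex(*yellow))   # #FFFF00
```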

(ii) CMY – Cyan, Magenta, Yellow


CMY is a subtractive color system used by printers and photographers, producing color with ink or
emulsion, normally on a white surface. It is used by most hard-copy devices that deposit colored
pigments on white paper, such as laser and inkjet printers.
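Since CMY is the subtractive complement of RGB, converting between the two can be sketched as a per-component subtraction from the maximum value (a simplified model; real printers use CMYK and more sophisticated conversions):

```python
def rgb_to_cmy(r, g, b, max_value=255):
    """Subtractive complement: cyan absorbs red, magenta absorbs
    green, and yellow absorbs blue."""
    return (max_value - r, max_value - g, max_value - b)

print(rgb_to_cmy(255, 0, 0))       # red   -> (0, 255, 255)
print(rgb_to_cmy(255, 255, 255))   # white -> (0, 0, 0): no ink on paper
print(rgb_to_cmy(0, 0, 0))         # black -> full cyan+magenta+yellow
```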

(iii) HSV – Hue, Saturation, Value


The HSV color system is one of several color systems that create new colors by varying the properties
of a color rather than by mixing colors themselves. The hue specifies the common name of the color,
such as red or orange; the saturation refers to the amount of white in a hue; and the value is the
brightness. HSV is therefore also called HSB, where H is hue, S is saturation and B is
brightness.
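Python's standard colorsys module converts between RGB and HSV, which makes the relationship concrete (it works on components scaled to the range 0-1):

```python
import colorsys

# Pure red: hue 0, full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)   # 0.0 1.0 1.0

# Halving V (brightness) of the same hue gives a darker red.
r, g, b = colorsys.hsv_to_rgb(0.0, 1.0, 0.5)
print(r, g, b)   # 0.5 0.0 0.0
```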
(iv) HLS – Hue, Lightness, Saturation
The HLS color system is closely related to HSV and behaves in the same way. There are several other
color systems that are similar to HSV in that they create color by altering hue with 2 other values. That
includes:
H S I: Hue, Saturation, and Intensity
H S L: Hue, Saturation, and Luminosity
H B L: Hue, Brightness, and Luminosity

GRAPHICS DISPLAY TECHNOLOGIES AND INTERACTIVE DEVICES


(a) Graphics display technologies. They include:
Vector Displays
Raster Displays
CRT
LCD
Plasma Panels
LED
Electroluminescent displays
Three-primary color raster scan
(i) Vector Displays
-Vector graphics terminals are computer graphics displays that draw lines instead of bitmapped images.
A bitmap image is a collection of pixels with different colors and intensities.
-In a vector display, also called a random scan, stroke-writing, or calligraphic display, the electron beam
draws the picture directly.

Advantages of random scan:


i. Very high resolution, limited only by monitor
ii. Easy animation, just draw at different positions
iii. Requires little memory (just enough to hold the display program)
Disadvantages of random scan:
i. Requires an "intelligent" electron beam, i.e., processor controlled
ii. Limited screen density before flicker appears; can't draw a complex image
iii. Limited color capability (color is very expensive)

(ii) Raster graphics


-Use pixels to draw graphics.

- A raster graphics image or bitmap is a data structure representing a generally rectangular grid of pixels,
or points of color, viewable via a display medium.
-A bitmap corresponds bit-for-bit with an image displayed on a screen, generally in the same
format used for storage in the display's video memory, or perhaps as a device-independent bitmap.
A bitmap is technically characterized by the width and height of the image in pixels and by the number of
bits per pixel (the color depth, which determines the number of colors it can represent).

-In a raster scan display system the electron beam is swept across the screen one row at a time, from
top to bottom and from left to right. As the electron beam moves across each row, the beam intensity
is turned on or off to create a pattern of illuminated spots.
-The spots to be turned on depend on the picture to be drawn. The definition of this picture is
stored in a memory area called the refresh buffer or frame buffer.
-This memory area holds the intensity values for all screen points. The intensity values are read from
the memory area and used to 'paint' each point on the screen one row at a time. This row is called a scan
line and each point is called a pixel (picture element). The ordering of pixels by rows is known as raster
order, or raster scan order.

-Rasterization-The term rasterization can in general be applied to any process by which vector
information can be converted into a raster format. In normal usage, the term refers to the popular
rendering algorithm for displaying three-dimensional shapes on a computer. Rasterization is currently
the most popular technique for producing real-time 3D computer graphics. Compared to other
rendering techniques such as ray tracing, rasterization is extremely fast. However, rasterization is
simply the process of computing the mapping from scene geometry to pixels and does not prescribe a
particular way to compute the color of those pixels.

-Interlacing- Interlacing is a method of encoding a bitmap image such that a person who has partially
received it sees a degraded copy of the entire image. When communicating over a slow link, this is often
preferable to seeing a perfectly clear copy of only part of the image, as it helps the viewer decide more
quickly whether to abort or continue the transmission.
-Interlacing is supported by the following formats:
i. GIF (Graphics Interchange Format)
ii. PNG (Portable Network Graphics)
iii. JPEG (Joint Photographic Experts Group), a format created by a group that saw the need to make
large photographic files smaller
iv. PGF (Progressive Graphics File), a wavelet-based bitmapped image format that employs lossless
and lossy data compression

-Interlacing is also known as "progressive" encoding, because the image becomes progressively clearer
as it is received.

TYPES OF RASTER DISPLAYS


(i) Cathode Ray Tube

i. Control grid – regulates the rate at which electrons pass through.
ii. Electron beam – electrons travel without any hindrance from air/dust because the tube is a vacuum.
iii. Phosphor-coated screen – glows when struck by electrons.
iv. Conductive coating – soaks up the electrons that pile up at the screen end of the tube.
v. Focusing anode – attracts scattered electrons to a focal point.
vi. Accelerating anode – gives the electrons a high velocity so that their momentum can produce
the light we want.

CRT Monitors
-A CRT monitor contains millions of tiny red, green, and blue phosphor dots that glow when struck by
an electron beam that travels across the screen to create a visible image.
-In a cathode ray tube, the "cathode" is a heated filament. The heated filament is in a vacuum created
inside a glass "tube." The "ray" is a stream of electrons generated by an electron gun that naturally pour
off a heated cathode into the vacuum. Electrons are negative. The anode is positive, so it attracts the
electrons pouring off the cathode. The screen is coated with phosphor, a material that glows
when struck by the electron beam.
- There is a conductive coating inside the tube to soak up the electrons that pile up at the screen-end of
the tube.
-There are three ways to filter the electron beam in order to obtain the correct image on the monitor
screen: shadow mask, aperture grill and slot mask. These technologies also impact the sharpness of the
monitor's display. They are:
1. Shadow-mask -A shadow mask is a thin metal screen filled with very small holes. Three electron
beams pass through the holes to focus on a single point on a CRT displays' phosphor surface. The
shadow mask helps to control the electron beams so that the beams strike the correct phosphor at just
the right intensity to create the desired colors and image on the display. The unwanted beams are
blocked or "shadowed."

2. Aperture-grill -Monitors based on the Trinitron technology, which was pioneered by Sony, use an
aperture grill instead of a shadow-mask type of tube. The aperture grill consists of tiny vertical wires.
Electron beams pass through the aperture grill to illuminate the phosphor on the faceplate. Most
aperture-grill monitors have a flat faceplate and tend to represent a less distorted image over the
entire surface of the display than the curved faceplate of a shadow-mask CRT. However, aperture-grill
displays are normally more expensive.

3. Slot-mask -A less-common type of CRT display, a slot-mask tube uses a combination of the shadow-
mask and aperture-grill technologies. Rather than the round perforations found in shadow-mask CRT
displays, a slot-mask display uses vertically aligned slots. The design creates more brightness through
increased electron transmission combined with the arrangement of the phosphor dots.

Advantages of phosphor
i. Electrons are easily knocked off, giving light.
ii. Once the electrons start losing energy, the phosphor stays glowing for some time – persistence.

-Persistence is defined as the time it takes for the emitted light from the screen to decay to 1/10th of
its original intensity. Lower-persistence phosphors require higher refresh rates to maintain a picture on
the screen without flicker. A phosphor with low persistence is useful for animation. A high-persistence
phosphor is useful for displaying highly complex static pictures.
-Resolution- The maximum number of points that can be displayed on the screen of a CRT without
overlap is called resolution. A typical resolution for a high-definition system is 1280 by 1024.
-Screen size- The physical size of a graphics monitor is given by the length of the screen measured
diagonally, normally quoted in inches.
-Aspect Ratio- The ratio of vertical points to horizontal points necessary to produce equal-length
lines in both directions on the screen.

(ii) Liquid Crystal Display (LCDs).


- An LCD display contains two polarizers, one vertical and one horizontal.
i. A polarizer is a component that filters light. A vertical polarizer passes only the vertical component
of light; a horizontal polarizer passes only the horizontal component.

ii. As light strikes the first filter, it is polarized. The molecules in each layer of the liquid crystal then
guide the light they receive to the next layer. As the light passes through the liquid crystal layers, the
molecules also twist the light's plane of vibration to match their own angle. When the light reaches
the far side of the liquid crystal substance, it vibrates at the same angle as the final layer of molecules.
If that angle matches the second polarized glass filter, the light passes through; in this way the
horizontal component is converted to a vertical component and the cell transmits light.
If we apply an electric charge at an intersection, the liquid crystal molecules there untwist. When they
straighten out, they no longer rotate the plane of the light passing through them, so it no longer
matches the angle of the second polarizing filter. Consequently, no light can pass through that area of
the LCD, which makes that area darker than the surrounding areas.

(iii) FLAT PANEL DISPLAY

-Plasma Panel
The basic idea of a plasma display is to illuminate tiny, colored fluorescent lights to form an image. Each
pixel is made up of three fluorescent lights -- a red light, a green light and a blue light.
What is Plasma?
The central element in a fluorescent light is plasma, a gas made up of free-flowing ions (electrically
charged atoms) and electrons (negatively charged particles).
-Under normal conditions, a gas is mainly made up of uncharged particles. That is, the individual gas
atoms include equal numbers of protons (positively charged particles in the atom's nucleus) and
electrons. The negatively charged electrons perfectly balance the positively charged protons, so the
atom has a net charge of zero.
If you introduce many free electrons into the gas by establishing an electrical voltage across it,
negatively charged particles rush toward the positively charged area of the plasma, and positively
charged particles rush toward the negatively charged area.

In this mad rush, particles are constantly bumping into each other. These collisions excite the gas atoms
in the plasma, causing them to release photons of energy.
How the system works
-It is composed of 2 glass plates:
-The first plate is brought toward the second plate until the space between them is small. The edges
are sealed off, the air inside is removed, and the space is filled with a plasma gas (e.g. neon).
Properties of the gas
-Must produce light when ionized.
-Must be easily ionized.
-Must produce the correct color of light when ionized.

(b) INTERACTIVE DEVICES


- They are devices that help in input of data in the system and also help in giving out the processed
information. They include:
Mouse
Space ball – uses a right-handed coordinate system. The space ball does not move; it has strain gauges
that measure the amount of pressure applied to it to provide input.
Trackball – essentially an upside-down mouse
Touch pad
Touch panel

Data Structures
-The need to create, modify or manipulate individual portions of a picture without affecting other
parts of the same picture makes it necessary to:
1. Be able to independently reference those individual portions of the picture.
2. Be able to specify attributes for one portion that differ from those of other parts.
3. Be able to add detail to a part of the picture while making additional provision for storage
space, etc.
-All of this happens in any graphical implementation such as modeling, animation and computer-aided
design applications.
-A data structure is a set of elements that are related to each other by a set of relations.
Edge – is defined by a pair of vertices.
Surface – is defined by a set of edges, which are in turn defined by vertices.
Three different kinds of data structures can be used to construct an object. They are based on edges,
vertices and surfaces.
-Vertex data are the coordinate values for each vertex in the object.
-Edge data consist of pointers back into the vertex table to identify the vertices for each polygon edge.
-The polygon table contains pointers back into the edge table to identify the edges for each polygon.

Databases
-A database is an organized collection of graphics and non-graphics data, related to each other in
support of a common purpose, and stored on secondary storage in a computer.
-A database may therefore be viewed as the implementation of data structures in the computer.
Thus a decision on the data structure has to be made first, followed by the choice of a database to
implement that structure.
-A graphics database must be able to store pictorial data in addition to textual and alphanumeric data.

Display Code Generation


Code Generator
A code generator is a tool or resource that generates a particular sort of code or computer programming
language. This has many specific meanings in the world of IT, many of them related to the sometimes
complex processes of converting human programming syntax to the machine language that can be read
by a computing system.
Display generator
Electronic device that interfaces computer-graphics display information with a graphics display device.
Display generator for raster-scan display contains four subsystems: display controller, display processor,
refresh memory and video driver.
2D and 3D Representations: Transformations
-A transformation maps points (x, y) in one coordinate system to points (x', y') in another
coordinate system:
x' = ax + by + c
y' = dx + ey + f
-Transformations are used to: change the shape of objects, position objects in a scene (modeling),
create multiple copies of objects, perform projection for virtual cameras, and produce animations.

Homogeneous Coordinates
- OpenGL commands usually deal with two- and three-dimensional vertices, but in fact all are treated
internally as three-dimensional homogeneous vertices comprising four coordinates. Homogeneous
coordinates introduce a fourth ordinate, i.e. (x, y, z, w).

Why use homogeneous coordinates


-To transform points we use matrix multiplications. For example, to make an object at the origin twice
as big we could multiply each point by a scaling matrix with 2 in each diagonal position (and 1 in the
last), i.e. diag(2, 2, 2, 1).
-Many of our transformations require translation of points. For example, to move all the points two
units along the x axis we require:
x' = x + 2
y' = y
z' = z
But how can we do this with a matrix? The answer is to use homogeneous coordinates.

-In most cases the last ordinate will be 1, but in general we can consider it a scale factor.
-Homogeneous coordinates fall into two types:
1. Those with a non-zero final ordinate, which can be normalised into position vectors.
2. Those with zero in the final ordinate, which are direction vectors and carry direction and magnitude.
Transformation Matrices
-Transformations in computer graphics accomplish the following tasks:
i. Moving Objects- from frame to frame in an animation.
ii. Change of Coordinates- which is used when objects that are stored relative to one reference frame
are to be accessed in a different reference frame. One important case of this is that of mapping objects
stored in a standard coordinate system to a coordinate system that is associated with the camera (or
viewer).
iii. Projection- is used to project objects from the idealized drawing window to the viewport, and
mapping the viewport to the graphics display window.
iv. Mapping- between surfaces, for example, transformations that indicate how textures are to be
wrapped around objects, as part of texture mapping.
-OpenGL has a very particular model for how transformations are performed. Recall that when
drawing, it was convenient for us to first define the drawing attributes (such as color) and then draw a
number of objects using that attribute. OpenGL uses much the same model with transformations. You
specify a transformation, and then this transformation is automatically applied to every object that is
drawn, until the transformation is set again. It is important to keep this in mind, because it implies that
you must always set the transformation prior to issuing drawing commands.
-Because transformations are used for different purposes, OpenGL maintains three sets of matrices for
performing various transformation operations. These are:

Modelview matrix- Used for transforming objects in the scene and for changing the coordinates into a
form that is easier for OpenGL to deal with.
Projection matrix- Handles parallel and perspective projections. (Used for the third task above.)
Texture matrix-This is used in specifying how textures are mapped onto objects. (Used for the last task
above.)
- For each matrix type, OpenGL maintains a stack of matrices. The current matrix is the one on the top
of the stack. It is the matrix that is being applied at any given time. The stack mechanism allows you to
save the current matrix (by pushing the stack down) and restoring it later (by popping the stack).

Types of transformations
i. Translation
- A translation is applied to an object by repositioning it along a straight line path from one coordinate
location to another. A 2d point is translated by adding translation distance Tx, Ty to the original
coordinate position x,y. - performed by the glTranslate*() command.

void glTranslate{fd}(TYPE x, TYPE y, TYPE z);

-Multiplies the current matrix by a matrix that moves (translates) an object by the given x, y, and z values
(or moves the local coordinate system by the same amounts).

Figure below shows the effect of glTranslate*().

-Note that using (0.0, 0.0, 0.0) as the argument for glTranslate*() is the identity operation - that is, it
has no effect on an object or its local coordinate system.
ii. Rotation- performed by the glRotate*() command.
void glRotate{fd}(TYPE angle, TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a matrix that rotates an object (or the local coordinate system) in a
counterclockwise direction about the ray from the origin through the point (x, y, z). The angle parameter
specifies the angle of rotation in degrees.
- For example, glRotatef(45.0, 0.0, 0.0, 1.0) performs a rotation of 45 degrees about the z-axis.
iii. Scaling- performed by the glScale*() command. The glScale*() changes the apparent size of an
object. Scaling with values greater than 1.0 stretches an object, and using values less than 1.0 shrinks it.
Scaling with a -1.0 value reflects an object across an axis. The identity values for scaling are (1.0, 1.0,
1.0). In general, you should limit your use of glScale*() to those cases where it is necessary. Using
glScale*() decreases the performance of lighting calculations, because the normal vectors have to be
renormalized after transformation.

- Note: A scale value of zero collapses all object coordinates along that axis to zero. It's usually not a
good idea to do this, because such an operation cannot be undone. Mathematically speaking, the
matrix cannot be inverted, and inverse matrices are required for certain lighting operations.

Example 1
-Suppose that rather than drawing a rectangle that is aligned with the coordinate axes, you want to
draw a rectangle that is rotated by 20 degrees (counterclockwise) and centered at some point (x; y).
The desired result is shown in Figure below.

Desired drawing. (Rotated rectangle is shaded).

- You could compute the rotated coordinates of the vertices yourself (using the appropriate
trigonometric functions), but OpenGL provides a way of doing this transformation more easily.

-Suppose that we are drawing within the unit square. Suppose we have a 4 x 4 rectangle to be drawn
centered at location (x, y). We could draw an unrotated rectangle with the following command:
glRectf(x - 2, y - 2, x + 2, y + 2);

-Note that the arguments should be of type GLfloat (2.0f rather than 2).
-Assuming that the matrix mode is GL _MODELVIEW (the default). Generally, there will be some
existing transformation (call it M) currently present in the Modelview matrix. This usually represents
some more global transformation, which is to be applied on top of our rotation. For this reason, we will
compose our rotation transformation with this existing transformation.
-Also, we should save the contents of the Modelview matrix, so we can restore its contents
after we are done. Because the OpenGL rotation function destroys the contents of the
Modelview matrix, we will begin by saving it, by using the command glPushMatrix(). Saving the
Modelview matrix in this manner is not always required, but it is considered good form.
Then we will compose the current matrix M with an appropriate rotation matrix R. Then we draw the
rectangle (in upright form). Since all points are transformed by the Modelview matrix prior to
projection, this will have the effect of rotating our rectangle. Finally, we will pop off this matrix (so
future drawing is not rotated).
-To perform the rotation, we use the command glRotatef(ang, x, y, z). All arguments are GLfloats.
(Or, recalling OpenGL's naming convention, we could use glRotated(), which takes GLdouble
arguments.) This command constructs a matrix that performs a rotation in 3-dimensional space
counterclockwise by angle ang degrees, about the vector (x, y, z). It then composes (or multiplies) this
matrix with the current Modelview matrix. In our case the angle is 20 degrees. To achieve a rotation in
the (x, y) plane, the vector of rotation would be the z unit vector, (0, 0, 1). Here is how the code might
look (but beware, this conceals a subtle error):
glPushMatrix (); // save the current matrix
glRotatef(20, 0, 0, 1); // rotate by 20 degrees CCW
glRectf(x-2, y-2, x+2, y+2); // draw the rectangle
glPopMatrix(); // restore the old matrix

-Although this may seem backwards, it is the way in which almost all object transformations are
performed in OpenGL:
i. Push the matrix stack,
ii. Apply (i.e., multiply) all the desired transformation matrices with the current matrix,
iii. Draw your object (the transformations will be applied automatically), and
iv. Pop the matrix stack.
- The order of the rotation relative to the drawing command may seem confusing at first. You might
think,
“Shouldn’t we draw the rectangle first and then rotate it?”. The key is to remember that whenever you
draw (using glRectf() or glBegin()...glEnd()), the points are automatically transformed using the current
Modelview matrix. So, in order to do the rotation, we must first modify the Modelview matrix, then draw
the rectangle. The rectangle will be automatically transformed into its rotated state. Popping the matrix
at the end is important, otherwise future drawing requests would also be subject to the same rotation.

Example 2: Rotating a Rectangle (correct): Something is wrong with the example given above. The
rotation is performed about the origin of the coordinate system, not about the center of the rectangle
as we want.

The actual rotation of the previous example. (Rotated rectangle is shaded).


-To fix this, we will draw the rectangle centered at the origin, then rotate it by 20 degrees, and finally
translate (or move) it by the vector (x, y). To do this, we will need the command glTranslatef(x, y, z).
All three arguments are GLfloats. (There is also a version with GLdouble arguments.) This command
creates a matrix which performs a translation by the vector (x, y, z), and then composes (or multiplies)
it with the current matrix. Recalling that all 2-dimensional graphics occurs in the z = 0 plane, the desired
translation vector is (x, y, 0).
- So the conceptual order is (1) draw, (2) rotate, (3) translate. But remember that you need to set up
the transformation matrix before you do any drawing. That is, if v represents a vertex of the rectangle,
R is the rotation matrix, T is the translation matrix, and M is the current Modelview matrix,
then we want to compute the product M · T · R · v.

-Since M is on the top of the stack, we first apply the translation (T) to M, then apply the rotation
(R) to the result, and then do the drawing (v). The final code is given by:
glPushMatrix(); // save the current matrix (M)
glTranslatef(x, y, 0); // apply translation (T)
glRotatef(20, 0, 0, 1); // apply rotation (R)
glRectf(-2,-2, 2, 2); // draw rectangle at the origin
glPopMatrix(); // restore the old matrix (M)

Write a program to draw an animation using increasing circles filled with different colors and patterns
(Turbo C / BGI graphics).

#include<graphics.h>
#include<conio.h>
void main()
{
    int gd = DETECT, gm, i, x, y;
    initgraph(&gd, &gm, "C:\\TC\\BGI");
    x = getmaxx() / 3;
    y = getmaxy() / 3;
    setbkcolor(WHITE);
    setcolor(BLUE);
    for (i = 1; i <= 8; i++)
    {
        setfillstyle(i, i);                  /* fill pattern i with color i */
        delay(20);
        circle(x, y, i * 20);
        floodfill(x - 2 + i * 20, y, BLUE);  /* seed just inside the new ring */
    }
    getch();
    closegraph();
}
Output

Cohen-Sutherland Line Clipping Algorithm

Cohen-Sutherland Line Clipping:

It is one of the most popular line-clipping algorithms. It assigns 4-bit region codes to the endpoints of a
line, then checks and operates on the endpoint codes to determine totally visible lines and totally
invisible lines (those lying completely on one side of the clip window). The Cohen-Sutherland line
clipping algorithm was originally introduced by Danny Cohen and Ivan Sutherland.

For clipping other invisible lines and partially visible lines, the algorithm breaks the line segments into
smaller subsegments by finding intersections with appropriate window edges. For a pair of a non-zero
endpoint and an intersection point, the corresponding subsegment is checked for two primary visibility
states as done in the earlier steps. The process is repeated till two visible intersections are found or no
intersection with any of the four visible window edges is found. Thus this algorithm cleverly reduces the
number of intersection calculations.

Steps of Cohen-Sutherland Line Clipping Algorithm:

1. Input: window boundaries xL, xR, yT, yB and line endpoints P1(x1, y1), P2(x2, y2)

2. For each endpoint Pi (i = 1, 2), compute and assign its 4-bit region code (left, right, bottom, top)
by comparing Pi against the window boundaries.

3. if codes of P1 and P2 are both equal to zero then draw P1P2 (Totally Visible)
4. if logical intersection or AND operation of code – P1 and code -P2 is not equal to zero then ignore
P1P2 (Totally invisible)
5. if code -P1=0 then swap P1 and P2 along with their flags and set i=1
6. if code – P1< >0 then
for i=1{
if C1 left=1 then
find intersection (xL, y'L) with left edge
assign code to (xL, y’L)
P1=(xL, y’L)
end if
i=i+1
go to 3
}
for i=2
{
if C1 right=1 then
find intersection (xR, y'R) with right edge
assign code to (xR, y’R)
P1=(xR, y’R)
end if
i=i+1
go to 3
}
for i=3
{
if C1 bottom=1 then
find intersection (x'B, yB) with bottom edge
assign code to (x’B, yB)
P1=(x’B, yB)
end if
i=i+1
go to 3
}
for i=4
{
if C1 top=1 then
find intersection (x'T, yT) with top edge
assign code to (x’T, yT)
P1=(x’T, yT)
end if
i=i+1
go to 3
}
end

Computer Graphics and Geomatics


The “Computer Graphics and Geomatics” research groups are engaged in generating knowledge in the
field of Computer Graphics and Geomatics, encompassing 3D Modeling Systems, Virtual Reality,
Augmented Reality, Simulation, 2D/3D/4D Geographic Information Systems, Drone-based Precision
Agriculture. The groups have various Virtual Reality and Augmented Reality devices in place as well as
3D Printers and Drones equipped with latest-generation sensors.

Application of Computer Graphics In Surveying And Mapping


In the field of Surveying and mapping, the continuous application of all kinds of new technologies and
equipment has changed the traditional surveying and mapping methods (such as flat mapping, simple
plotting). With the development of computer information technology, the digital way of work is
becoming more and more mature. It not only reduces the workload and intensity of Surveying and
mapping staff, but also improves the scope and accuracy of Surveying and mapping. It has gradually
become the mainstream way of Surveying and mapping.

1.1 Global Satellite Positioning Technology


Global positioning system (GPS) was developed by the US Department of Defense. The technology uses
synchronized satellites to provide accurate positioning anywhere in the world. At present, the number
of GPS satellites is large: at least four satellites are visible from any point on Earth, giving global
24-hour positioning. It offers high positioning accuracy and convenient operation, so it has been widely
used. Global positioning technology mainly includes static positioning technology and real-time
dynamic positioning technology (RTK).

1.1.1 Static positioning technology


Conventional GPS positioning needs up to 12 hours of synchronous observation. As requirements on
GPS technology kept rising, this traditional way could no longer meet people's needs, so static
positioning technology came into being. It is highly adaptable, is not affected by weather or terrain
structure, and can handle points that are not intervisible; it is usually used in large-scale control
measurement.
1.1.2 Real time dynamic positioning technology (RTK)
RTK, also known as real-time kinematic, is a real-time dynamic difference method. This method is a new
GPS measurement method commonly used at present. Traditional dynamic measurement, fast static
measurement and static measurement need to be calculated after measurement to obtain centimeter
level accuracy.

RTK uses the carrier-phase dynamic real-time difference method to obtain centimeter-level
measurement results in the field in real time. RTK works as follows: the reference station sends its
observation values and station coordinate information to the mobile station over a radio data link and
modem. The mobile station receives these data while simultaneously collecting GPS satellite signals to
obtain its own observations. The system forms differential observation values for real-time processing
and promptly delivers mobile-station position coordinates with centimeter accuracy. RTK is widely used
in line alignment and land survey.

1.2 Photoelectric Ranging Technologies

This technology uses infrared or visible light to calculate the distance between two points by measuring
the travel time of the light along the line between them. It is mainly used in electronic levels, total
stations and other equipment.

1.2.1 Electronic level

The electronic level, also called the digital level, was introduced by Zeiss in the 1990s and developed on
the basis of the automatic level. It adds mirrors and detectors in the optical path of the telescope and
combines an image-processing electronic system with a bar-code staff, making it an integrated optical,
mechanical and electronic measuring instrument. This technology effectively complements the
shortcomings of GPS positioning technology, and has the advantages of fast measurement and high
accuracy. The electronic level plays an important role in digital measurement.

1.2.2 Total station


The full name is electronic total station. It is a measuring instrument that integrates optical, mechanical
and electronic components. It has the advantages of fast measurement, convenient operation and high
accuracy: all the measurement work at a station can be completed with a single instrument set-up. It is
usually used in deformation monitoring and precise engineering survey of large-scale buildings and
underground tunnels.

3.0 ANALYSES OF THE APPLICATION CHARACTERISTICS OF COMPUTER TECHNOLOGY IN SURVEYING
AND MAPPING

Compared with the traditional survey technology, the digital survey technology has the following
characteristics in the application of engineering survey;

3.1 High Degree of Automation


The digital surveying and mapping system is a comprehensive system that takes the computer as its
core and uses the total station, GPS and digital photogrammetry digitizer as data acquisition tools, with
the support of external input and output software and hardware, to collect, input, map, draw, output
and manage digital terrain data. Digital surveying and mapping technology can reflect the main features
of objects intuitively, which greatly improves the degree of automation of engineering surveying.
Traditional engineering drawings and large-scale maps need complex field mapping with a long working
period; digital surveying and mapping technology greatly reduces the labor and intensity required of
field surveying and mapping personnel.[5]

3.2 Quick Data Sorting


The graphics formed by data mapping carry a large amount of information, and different data can be
stored in layers, which makes data mapping very convenient. Portable computers allow part of the
mapping to be completed through external data collection devices such as palmtop computers, making
data sorting and updating fast and convenient. Surveying and mapping personnel only need to save a
few manual modifications of the data to obtain new map graphics.

3.3 High Mapping Accuracy


Digital mapping can be completed easily by computer, avoiding complex recording, checking and
calculation. Because the computation is automated, it is not affected by human factors and has a small
probability of error. Digital mapping reduces errors in the process of data transmission and yields
accurate measurement results.

3.4 Rich Graphic Attribute Information


In digital mapping, it is necessary not only to determine the coordinates of the topographic points but
also to record the attributes of the measured points. The surveyor records a code for each measured
point; the mapping system's symbol library then uses that code to look up the corresponding symbol
and display the point, together with its connection information, on the map.
