Graphics
Introduction
Computer graphics can originate in digital or analogue form. We can use
computer software to draw a picture directly in the computer; such a picture is
already in digital format. Very often, however, images come from the real world.
In that case, the brilliant pictures that we see are composed of different colours,
which are in fact light waves of different frequencies. Light waves are analogue,
so they must be digitized before the computer can process them.
Digitization of Light
1. Sampling
Usually, we use a digital camera to capture a moment and a scanner to scan a
picture, a hand-written document or even a 3D object! Many digital cameras and
most scanners contain CCD (Charge-Coupled Device) arrays. A CCD
converts a light signal into an electrical signal for digitization.
The density of the CCDs determines the quality of the output image signal and
is closely related to the device resolution. The device resolution of a scanner is
the number of dots (i.e. pixels on screen) that can be captured in a unit
length, and is measured in dpi (dots per inch). Since the electrical signal from
each CCD represents one sample of the original image, the device resolution
can be used as a measure of the sampling rate.
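The relationship between device resolution and image size can be sketched as follows. This is a minimal illustration (the function name and the 6 × 4 inch photo scanned at 300 dpi are assumptions, not from the text): the number of pixels sampled along each side is simply the physical length multiplied by the dpi.

```python
# Sketch: relating device resolution (dpi) to the size of the sampled image.
# Assumed example values: a 6 x 4 inch photo scanned at 300 dpi.
def scanned_size(width_in, height_in, dpi):
    """Return (width_px, height_px) of the image sampled by the scanner."""
    return round(width_in * dpi), round(height_in * dpi)

w, h = scanned_size(6, 4, 300)
print(w, h)  # 1800 1200 pixels
```

Doubling the dpi doubles the sampling rate in each direction, so the total number of samples (and hence the file size before compression) grows four times.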
2. Quantization
The sample size determines the number of colours that can be recognized. In
computer graphics, the sample size is the colour depth (or bit depth). To be
more precise and general, colour depth is the number of bits used to store the
colour of one pixel.
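Since each pixel is stored in a fixed number of bits, the number of distinct colours a pixel can take is 2 raised to the colour depth. A short sketch (the function name is an assumption for illustration):

```python
# Number of distinct colours representable at a given colour depth.
def num_colours(bit_depth):
    """Each of the bit_depth bits doubles the number of possible values."""
    return 2 ** bit_depth

print(num_colours(1))   # 2         (black and white)
print(num_colours(8))   # 256       (e.g. greyscale or indexed colour)
print(num_colours(24))  # 16777216  (commonly called true colour)
```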
Representation of Colours
The number of colours that can be used to store an image depends on the colour depth
and the colour model used.
Common Colour Model:
Black and White:
It uses 0 and 1 to represent white and black dots respectively, so each pixel
needs only one bit. Brightness is represented by the density of the black and
white dots.
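The idea of showing brightness through dot density can be sketched with ordered dithering, one common technique for this (the technique, matrix and function names here are illustrative assumptions, not taken from the text): a grey level is mapped to a small pattern whose proportion of white dots matches the brightness.

```python
# Sketch: simulating brightness by dot density with a 2x2 ordered-dither
# (Bayer) matrix. 0 = black dot, 1 = white dot.
BAYER_2X2 = [[0, 2],
             [3, 1]]  # threshold levels 0..3

def dither_2x2(grey):
    """Map a grey level (0..255) to a 2x2 pattern of black/white dots."""
    level = grey * 5 // 256  # quantize the grey level to 0..4
    return [[1 if level > BAYER_2X2[y][x] else 0 for x in range(2)]
            for y in range(2)]

print(dither_2x2(0))    # all black dots
print(dither_2x2(128))  # half white, half black: medium grey
print(dither_2x2(255))  # all white dots
```

A brighter grey level exceeds more of the thresholds in the matrix, so more of the dots in the pattern turn white; viewed from a distance, the eye averages the dots into a shade of grey.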