Projection

PERSPECTIVE PROJECTION vs. PARALLEL PROJECTION

1. If the COP (Centre of Projection) is located at a finite point in 3D space, the result is a perspective projection. If the COP is located at infinity, all the projectors are parallel and the result is a parallel projection.
2. Perspective projection is used to represent or draw objects so that they resemble the real thing. Parallel projection is used to draw objects when perspective projection cannot be used.
3. Perspective projection represents objects in a three-dimensional way. Parallel projection is much like seeing objects through a telescope: parallel light rays reach the eye and produce a visual representation without depth.
4. In perspective projection, objects that are far away appear smaller and objects that are near appear bigger. Parallel projection does not create this effect.
5. Perspective projection is preferred where a realistic impression of the object is needed. Parallel projection is best for architectural drawings and other cases where measurements are necessary.
6. Perspective projection requires a finite distance between the viewer and the target point: its centre of projection is at a point, whereas in parallel projection the centre of projection is at infinity.
7. Types of perspective projection: one-point, two-point, and three-point perspective. Types of parallel projection: orthographic and oblique.

Types of parallel projection:
1. Orthographic projection

Orthographic projection is the type of parallel projection in which parallel projectors are projected in a perpendicular direction onto the major planes of a 3-D object, and the corresponding 2-D representations of the object are drawn on media such as paper or a computer screen.

2. Oblique projection
Oblique projection is the type of parallel projection in which the projectors are parallel to each other but not perpendicular to any plane of the 3-D object they are projected on, and one of the three planes of the object is projected at either 30°, 45°, or 60° to the x-axis. An angle of 45° is used in most oblique projections.

Since the parallel projectors are not perpendicular to any of the 3-D object's planes, the resulting technical or engineering drawings have true shapes and sizes on only one or two planes/faces. Oblique projection is used to create two types of technical or engineering drawing: cavalier drawing and cabinet drawing.

Types
The following types of perspective projection are commonly described:

One-point perspective
In this projection, all lines and objects in the scene converge towards a single vanishing point on the horizon line, providing a sense of depth.
(Figure: one-point perspective – the receding lines converge to a single vanishing point.)

Two-point perspective
In two-point perspective, the receding lines converge to two vanishing points on the horizon line.
(Figure: two-point perspective – the receding lines converge to two vanishing points.)

Three-point perspective
In three-point perspective, the receding lines converge to three vanishing points, two on the horizon line and one above or below it.
(Figure: three-point perspective – the receding lines converge to three vanishing points.)

Zero-point perspective
In this type of projection there is no convergence towards a vanishing point; neither a vanishing point nor a horizon line exists in a zero-perspective image.
(Figure: zero-point perspective – no lines converge to a vanishing point.)
PROJECTIONS:
Representing an n-dimensional object in (n-1) dimensions is known as projection. It is the process of converting a 3D object into a 2D representation; we represent a 3D object on a 2D plane, {(x, y, z) -> (x, y)}. It can also be defined as the mapping or transformation of the object onto the projection plane or view plane. When geometric objects are formed by the intersection of lines with a plane, the plane is called the projection plane and the lines are called projectors.
Types of Projections:
1. Parallel projections
2. Perspective projections
Center of Projection:

It is an arbitrary point from which the projection lines are drawn through each point of an object.
 If the COP is located at a finite point in 3D space, the result is a perspective projection.
 If the COP is located at infinity, all the projection lines are parallel and the result is a parallel projection.

Parallel Projection:

A parallel projection is formed by extending parallel lines (projectors) from each vertex of the object until they intersect the plane of the screen; the point of intersection is the projection of the vertex. Parallel projection transforms the object to the view plane along parallel lines. A projection is said to be parallel if the center of projection is at an infinite distance from the projection plane. A parallel projection preserves the relative proportions of objects, and accurate views of the various sides of an object are obtained, which is why it is used in drafting to produce scale drawings of 3D objects. However, it is not a realistic representation.
Parallel projection is divided into two parts, and these two parts are subdivided further.

Orthographic Projections:

In orthographic projection the direction of projection is normal to the projection plane. The projection lines are parallel to each other and make an angle of 90° with the view plane. Orthographic parallel projections are produced by projecting points along parallel lines that are perpendicular to the projection plane. Orthographic projections are most often used to produce the front, side, and top views of an object, which are called elevations. Engineering and architectural drawings commonly employ these orthographic projections. The transformation equations for an orthographic parallel projection are straightforward. Some special orthographic parallel projections involve the plan view and side elevations. We can also perform orthographic projections that display more than one face of an object; such views are called axonometric orthographic projections.
Oblique Projections:

Oblique projections are obtained by projecting along parallel lines that are not perpendicular to the projection plane. An oblique projection shows the front and top surfaces, covering the three dimensions of height, width and depth. The front or principal surface of the object is parallel to the plane of projection. It is effective for pictorial representation.

 Isometric Projections: Orthographic projections that show more than one side of an object are called axonometric orthographic projections. The most common axonometric projection is the isometric projection, in which the direction of projection makes equal angles with all three principal axes. In this projection parallelism of lines is preserved, but angles are not.
 Dimetric projections: The direction of projection makes equal angles with two of the principal axes.
 Trimetric projections: The direction of projection makes unequal angles with the three principal axes.

Cavalier Projections:
All lines perpendicular to the projection plane are projected with no change in length. The projectors make an angle of 45° with the projection plane, so receding edges of the object keep their true length.

Cabinet Projections:

All lines perpendicular to the projection plane are projected at one half of their length, which gives a more realistic appearance of the object. The projectors make an angle of about 63.4° with the projection plane, so edges perpendicular to the viewing surface appear at half their actual length.
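
As a concrete illustration of the cavalier/cabinet distinction, here is a minimal Python sketch, assuming the common oblique-projection formula x' = x + L·z·cos(α), y' = y + L·z·sin(α), where L is the depth factor and α is the angle at which the receding axis is drawn; the function name and sample point are illustrative only.

```python
import math

def oblique_project(x, y, z, depth_factor=0.5, axis_angle_deg=45.0):
    """Project a 3D point onto the z = 0 plane with an oblique parallel projection.

    depth_factor is the fraction of the true depth kept in the drawing:
      1.0 -> cavalier projection (projectors at 45 deg to the plane)
      0.5 -> cabinet projection (projectors at about 63.4 deg to the plane)
    axis_angle_deg is the angle at which the receding axis is drawn (30, 45 or 60).
    """
    a = math.radians(axis_angle_deg)
    return x + depth_factor * z * math.cos(a), y + depth_factor * z * math.sin(a)

# A unit edge going straight "into" the screen along +z:
print(oblique_project(0, 0, 1, depth_factor=1.0))  # cavalier: drawn at full length
print(oblique_project(0, 0, 1, depth_factor=0.5))  # cabinet: drawn at half length
```
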
Perspective Projections:
 A perspective projection is produced by straight lines radiating from a common point and passing through points on the object to the plane of projection.
 Perspective projection is a geometric technique used to produce a three-dimensional graphic image on a plane, corresponding to what a person actually sees.
 Any set of parallel lines of the object that is not parallel to the projection plane is projected into converging lines. A different set of parallel lines will have a separate vanishing point.
 Coordinate positions are transferred to the view plane along lines that converge to a point called the projection reference point.
 Distances and angles are not preserved, and parallel lines do not remain parallel. Instead, they all converge at a single point called the center of projection. There are three types of perspective projection.
The two characteristics of perspective projection are the vanishing point and perspective foreshortening. Due to foreshortening, objects and lengths appear smaller the farther they are from the center of projection. The projectors are not parallel, and we specify a center of projection (COP).
Different types of perspective projections:
 One-point perspective projection: Only one principal axis has a finite vanishing point. This projection is the simplest to draw.
 Two-point perspective projection: Exactly two principal axes have finite vanishing points. This projection gives a better impression of depth.
 Three-point perspective projection: All three principal axes have finite vanishing points. This projection is the most difficult to draw.
Perspective fore shortening:

The size of the perspective projection of an object varies inversely with the distance of the object from the center of projection.
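
To make this inverse relationship concrete, here is a minimal Python sketch, assuming the standard setup with the COP at the origin and the view plane at z = d, so that x' = x·d/z and y' = y·d/z; the function names and sample depths are illustrative. An orthographic projection is included for comparison, since a parallel projection leaves the size unchanged.

```python
def perspective_project(x, y, z, d=1.0):
    """Perspective projection: COP at the origin, view plane at z = d."""
    return x * d / z, y * d / z

def orthographic_project(x, y, z):
    """Parallel (orthographic) projection onto the xy-plane: simply drop z."""
    return x, y

# A 1-unit-tall edge placed at increasing depths:
for z in (2, 4, 8):
    _, top = perspective_project(0, 1, z)
    _, bottom = perspective_project(0, 0, z)
    print(f"z = {z}: perspective height = {top - bottom:.3f}, "
          f"orthographic height = {orthographic_project(0, 1, z)[1]:.3f}")
```

The projected height halves each time the distance doubles, while the orthographic height stays at 1.0.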

What is a Pixel in Digital Images?

Understanding the Fundamentals of Pixels in an Image

Pixels are the basic units of digital images, representing color and
brightness. This guide covers their properties and importance in image
processing and computer vision.

What is a Pixel?

A pixel, short for "picture element," is the smallest unit of a digital image.
Pixels are arranged in a grid to form an image, with each pixel
representing a specific color at a particular point in the image. The
resolution of an image is determined by the number of pixels it contains,
typically described in terms of width and height (e.g. 1920x1080 pixels).

Properties of Pixels

Pixels have several important properties that define the appearance and
quality of an image:

1. Color Depth:

   1. Color depth, also known as bit depth, indicates the number of bits used to represent the color of each pixel. Common color depths include 8-bit, 16-bit, and 24-bit.
   2. In an 8-bit image, each pixel can have 256 different color values (2^8). In a 24-bit image, each pixel can represent over 16 million colors (2^24), with 8 bits allocated to each of the red, green, and blue (RGB) channels.

2. Color Models:

   1. RGB: The most common color model for digital images, where each pixel is defined by three values representing the intensities of red, green, and blue.
   2. Grayscale: In grayscale images, each pixel represents a shade of gray, ranging from black to white. Grayscale images are typically 8-bit, with 256 shades of gray.
   3. CMYK: Used primarily in printing, where each pixel is defined by four values representing cyan, magenta, yellow, and black.
   4. HSV: The Hue, Saturation, and Value model, often used in color manipulation and analysis.

3. Resolution:

   1. The resolution of an image is the total number of pixels it contains, typically expressed as width x height (e.g., 1920x1080).
   2. Higher-resolution images have more pixels and can represent more detail, but they also require more storage space and processing power.
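
As a quick illustration of how resolution and color depth interact, here is a minimal NumPy sketch; the 1920x1080 size is simply the example resolution mentioned above, and the numbers in the comments follow directly from it.

```python
import numpy as np

# A 24-bit RGB image: height x width x 3 channels, 8 bits (one uint8) per channel.
height, width = 1080, 1920
image = np.zeros((height, width, 3), dtype=np.uint8)

print(image.shape)   # (1080, 1920, 3)
print(image.nbytes)  # 6,220,800 bytes of raw pixel data (about 5.9 MiB uncompressed)
print(2 ** 8)        # 256 possible values per channel
print(2 ** 24)       # 16,777,216 possible colors per pixel
```
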

How Pixels Form an Image

When pixels are arranged in a grid, they collectively form an image. Each
pixel’s color and brightness contribute to the overall appearance of the
image. The human eye perceives the combination of these pixels as a
continuous image, even though it is composed of discrete elements.

For example, consider a simple 3x3 pixel image:

[RGB]
[GBR]
[BRG]
In this grid, each letter represents a pixel with a specific color (Red,
Green, Blue). When viewed from a distance, these individual pixels blend
together to form the complete image.
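
Here is a minimal sketch of that 3x3 grid as an actual image, assuming NumPy and Pillow are available; the output file name and the upscaling step are arbitrary choices made only so the individual pixels are visible.

```python
import numpy as np
from PIL import Image  # assumes Pillow is installed

# The 3x3 grid above, one row per line: R G B / G B R / B R G
R, G, B = (255, 0, 0), (0, 255, 0), (0, 0, 255)
pixels = np.array([[R, G, B],
                   [G, B, R],
                   [B, R, G]], dtype=np.uint8)

img = Image.fromarray(pixels)
# Enlarge with nearest-neighbor resampling so each pixel stays a solid square.
img.resize((300, 300), Image.NEAREST).save("grid.png")
```
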

Pixel Manipulation in Image Processing

Pixels are central to various image processing techniques, allowing for


the manipulation and enhancement of digital images. Some common
pixel manipulation operations include:

1. Brightness and Contrast Adjustment:

   1. Changing the brightness involves adding or subtracting a constant value to the pixel values.
   2. Adjusting contrast involves scaling the difference between the pixel values and the average value.

2. Color Manipulation:

   1. Color manipulation techniques, such as changing the hue or saturation or applying color filters, involve modifying the pixel values in the color channels.

3. Image Filtering:

   1. Image filters, such as blur, sharpen, and edge detection, work by applying mathematical operations to the pixel values and their neighbors (see the sketch after this list).
   2. For example, a Gaussian blur filter smooths an image by averaging the pixel values with their neighbors.

4. Geometric Transformations:

   1. Operations like rotation, scaling, and translation change the positions of pixels to achieve the desired effect.
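
As a sketch of the neighborhood-based filtering described in point 3, here is a simple box (mean) blur written in plain NumPy rather than a true Gaussian blur; the function name, radius, and the random stand-in image are illustrative only.

```python
import numpy as np

def box_blur(gray, radius=1):
    """Blur a 2D grayscale image by averaging each pixel with its neighbors."""
    k = 2 * radius + 1                                   # kernel side length
    padded = np.pad(gray.astype(np.float32), radius, mode="edge")
    out = np.zeros_like(gray, dtype=np.float32)
    for dy in range(k):                                  # sum every shifted copy of the image
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return np.clip(out / (k * k), 0, 255).astype(np.uint8)

gray = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
blurred = box_blur(gray, radius=1)
```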

PIXEL TRANSFORM
We can transform an image by applying the same adjustment to every pixel of the image. Such transformations range from changing the pixels' brightness to changing their color. In this topic, we'll look at some popular pixel transformations.

Brightness

The brightness of a picture simply defines how dark or light it is. To change the brightness of an image, we change the value of each pixel color by a constant amount. Adding a positive constant to all of the image color values increases the brightness, while subtracting a positive constant from all of the image color values decreases it.

Of course, these operations may sometimes produce invalid color values. If a value falls below 0, set it to 0; if it rises above 255 (integer color representation), set it to 255. In the case of float representation, set values above 1.0 to 1.0.

Here is an example of changing the brightness of an image with a 24-bit color scheme: we read the color of each pixel, change it, and set the new color for the pixel. The changes are applied to the original image.

(Figure: the original image and the same image with its brightness changed by an offset of -50.)
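
Below is a minimal Python sketch of such a brightness change, assuming NumPy and Pillow are available; the file names are placeholders.

```python
import numpy as np
from PIL import Image  # assumes Pillow is installed

def adjust_brightness(path, offset, out_path):
    """Add a constant offset to every channel of every pixel, clipping to 0..255."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)  # int16 avoids overflow
    pixels = np.clip(pixels + offset, 0, 255).astype(np.uint8)
    Image.fromarray(pixels).save(out_path)

# Darken by an offset of -50, as in the example described above:
adjust_brightness("original.png", -50, "darker.png")
```
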
Contrast

The difference in brightness between different objects or regions of an image is referred to as contrast. To change the contrast of an image, we multiply the value of each pixel color by a constant factor. If the factor is greater than one, we increase the contrast; multiplying all pixel color values by a factor less than one decreases the contrast.

If a resulting color value is greater than 255, it is set to 255 (to 1.0 for float representation).

Here is an example of a function that changes the contrast of an image with a 24-bit color scheme: it reads the color of each pixel, changes it, and sets the new color for the pixel. The changes are applied to the original image.

(Figure: the original image and the same image with its contrast changed by a factor of 0.85.)
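
Below is a minimal Python sketch of such a contrast change, assuming NumPy and Pillow are available; the file names are placeholders. (A common variant scales the difference from a mid-tone or from the image mean rather than the raw values, as noted earlier.)

```python
import numpy as np
from PIL import Image  # assumes Pillow is installed

def adjust_contrast(path, factor, out_path):
    """Multiply every channel of every pixel by a constant factor, clipping to 0..255."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    pixels = np.clip(pixels * factor, 0, 255).astype(np.uint8)
    Image.fromarray(pixels).save(out_path)

# Reduce contrast by a factor of 0.85, as in the example described above:
adjust_contrast("original.png", 0.85, "lower_contrast.png")
```
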
Color transforms

Color transforms are used in digital image processing to convert an image from one color space to another. These transforms can be used for a variety of purposes, such as correcting color balance, enhancing color contrast, and improving image compression. Some popular color spaces used in digital image processing include RGB, CMYK, HSV, and Lab. Color transforms can be performed using mathematical operations such as matrix multiplication and color lookup tables.
Color models

A color model, also known as a color space, is a mathematical representation of colors used in digital image processing. It defines how colors are represented in terms of numerical values that can be stored, processed, and displayed by computers and other digital devices.

RGB color model

RGB stands for Red, Green, and Blue, and it is the most commonly used
color model in digital image processing. In this model, each pixel in an
image is represented by three values that denote the amount of red, green,
and blue light that makes up its color. The RGB model is an additive
color model, which means that combining red, green, and blue light in
varying proportions produces a wide range of colors. The RGB color model is widely used in display technologies such as computer monitors, televisions, and digital cameras.

CMYK color model

CMYK stands for Cyan, Magenta, Yellow, and Key (Black), and it is a
subtractive color model used in printing. In this model, colors are created
by subtracting different amounts of cyan, magenta, yellow, and black ink
from a white background. Unlike the RGB model, which is an additive
model, the CMYK model is a subtractive model, which means that it
starts with a white background and subtracts colors from it to create the
desired hue. The CMYK color model is used in printing technologies
such as offset printing, flexography, and digital printing.
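
A minimal sketch of the common naive RGB-to-CMYK conversion follows; it ignores real ink and paper profiles, and the function name is illustrative.

```python
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB values to CMYK fractions in [0, 1] (simple device-independent formula)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1.0 - max(r, g, b)          # black component
    if k == 1.0:                    # pure black: no colored ink needed
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))      # red   -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(255, 255, 255))  # white -> (0.0, 0.0, 0.0, 0.0)
```
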

HSL and HSV color models

HSL stands for Hue, Saturation, and Lightness, while HSV stands
for Hue, Saturation, and Value. These color models are often used for
color manipulation and adjustment in image processing. The HSL model
represents colors in terms of their hue (the actual color), saturation (the
intensity of the color), and lightness (the brightness of the color). The
HSV model represents colors in terms of their hue, saturation, and value
(a measure of the brightness of the color). The HSL and HSV models are
particularly useful for adjusting color balance and enhancing color
contrast in an image.
Lab color model

The Lab color model is a device-independent color model that represents colors in terms of their lightness (L), a (the green-red component), and b (the blue-yellow component). Unlike the RGB and CMYK models, the Lab model is designed to approximate human perception of color, making it useful for color correction and other color-based operations. The Lab model is particularly useful for tasks such as color matching, color grading, and image segmentation.

Color transforms

Color transforms are mathematical operations used to manipulate the color information in digital images. Here are some common types of color transforms in digital image processing.

Color correction transforms (e.g., white balance correction)

Color correction transforms are used to adjust the colors in an image to match a particular standard or desired appearance. One common example of a color correction transform is white balance correction, which is used to correct the color cast caused by the color temperature of the light source used to capture an image. White balance correction adjusts the color balance of an image to ensure that whites appear white and colors appear as they should.
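
One simple way to sketch white balance correction is the gray-world assumption, which scales each channel so that its average matches the overall average; this is only one of several approaches, and the NumPy code below is an illustrative sketch rather than a standard implementation.

```python
import numpy as np

def gray_world_white_balance(pixels):
    """Gray-world white balance for an H x W x 3 uint8 RGB array."""
    img = pixels.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)  # mean of R, G, B over the whole image
    gain = channel_means.mean() / channel_means      # per-channel gain pushing each mean to gray
    return np.clip(img * gain, 0, 255).astype(np.uint8)
```
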

Color space conversion transforms (e.g., RGB to HSL conversion)

Color space conversion transforms are used to convert an image from one
color model to another. For example, an RGB image can be converted to
the HSL or HSV color models for color manipulation and adjustment.
Color space conversion transforms are often used in image compression,
where converting an image to a different color space can reduce its file
size without significant loss of image quality.
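
A minimal sketch of an RGB-to-HSL/HSV conversion using Python's standard colorsys module follows; the sample color and the 30-degree hue shift are arbitrary.

```python
import colorsys

# colorsys works on floats in [0, 1], so scale the 8-bit values first.
r, g, b = 200 / 255, 80 / 255, 40 / 255

h, l, s = colorsys.rgb_to_hls(r, g, b)    # note: the module returns H, L, S in this order
h2, s2, v = colorsys.rgb_to_hsv(r, g, b)

print(f"HSL: hue={h * 360:.0f} deg, saturation={s:.2f}, lightness={l:.2f}")
print(f"HSV: hue={h2 * 360:.0f} deg, saturation={s2:.2f}, value={v:.2f}")

# Manipulate in HSL (shift the hue by 30 degrees), then convert back to RGB:
r2, g2, b2 = colorsys.hls_to_rgb((h + 30 / 360) % 1.0, l, s)
```
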

Color enhancement transforms (e.g., contrast enhancement)

Color enhancement transforms are used to improve the visual appearance of an image by adjusting color contrast, saturation, and other color-related properties. For example, contrast enhancement can be used to increase the visual separation between colors in an image, while saturation enhancement can be used to make colors appear more vivid and intense.

Color quantization transforms (e.g., reducing the number of colors in an image)

Color quantization transforms are used to reduce the number of colors in an image by mapping the original color values to a smaller set of color values. This can be useful for reducing the file size of an image or for simplifying its visual appearance. One common example of a color quantization transform is dithering, which is used to simulate the appearance of additional colors by mixing existing colors in a pattern.
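
A minimal sketch of uniform color quantization in NumPy follows (without the dithering step mentioned above); the function name and the choice of 4 levels per channel are illustrative.

```python
import numpy as np

def quantize(pixels, levels=4):
    """Reduce each channel of an H x W x 3 uint8 image to `levels` evenly spaced values.

    With levels=4 per channel, a 24-bit image is reduced to 4**3 = 64 possible colors.
    """
    img = pixels.astype(np.float32)
    step = 255.0 / (levels - 1)                      # spacing between the allowed values
    return (np.round(img / step) * step).astype(np.uint8)
```
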
Algorithms For Color Transforms

Standard algorithms (e.g., gamma correction, histogram equalization)

Standard algorithms for color transforms are well-established mathematical techniques that have been used in image processing for many years. Examples include gamma correction and histogram equalization. Gamma correction is used to adjust the brightness of an image by applying a power function to the pixel values. Histogram equalization is used to enhance the contrast of an image by adjusting the distribution of pixel values. These algorithms are widely used because they are computationally efficient and can be applied to a wide range of images.
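
Minimal NumPy sketches of both algorithms for an 8-bit grayscale image follow; the gamma value and the simplified CDF normalization are illustrative choices.

```python
import numpy as np

def gamma_correct(gray, gamma=2.2):
    """Apply a power-law (gamma) curve to an 8-bit grayscale image (here, the brightening 1/gamma form)."""
    normalized = gray.astype(np.float32) / 255.0
    return (255.0 * normalized ** (1.0 / gamma)).astype(np.uint8)

def equalize_histogram(gray):
    """Spread out the intensity distribution using the cumulative histogram.

    A simplified form: assumes the image is not a single flat color.
    """
    hist = np.bincount(gray.ravel(), minlength=256)       # how often each intensity occurs
    cdf = hist.cumsum().astype(np.float32)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalize the cumulative distribution to [0, 1]
    lut = (255.0 * cdf).astype(np.uint8)                  # lookup table: old value -> new value
    return lut[gray]
```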

Machine learning-based algorithms (e.g., deep neural networks)

Machine learning-based algorithms for color transforms in digital image processing use deep neural networks or other machine learning techniques to learn complex mappings between input and output color spaces. These algorithms can be trained on large datasets of labeled images to learn how to perform tasks such as color correction, colorization, and color transfer. Machine learning-based algorithms can be more accurate than standard algorithms and can be adapted to specific types of images or applications. However, they can also be more computationally intensive and may require more specialized hardware or software to implement.
