Image segmentation is the process of partitioning a digital image into
multiple segments (sets of pixels, also known as superpixels). The goal of
segmentation is to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyze. Image segmentation is
typically used to locate objects and boundaries (lines, curves, etc.) in images. More
precisely, image segmentation is the process of assigning a label to every pixel in
an image such that pixels with the same label share certain visual characteristics.
An edge is the boundary between two regions with relatively distinct gray-
level properties. Edge detection is a term used in image processing and computer
vision, particularly in the areas of feature detection and feature extraction, to refer
to algorithms that aim to identify points in a digital image at which the image
brightness changes sharply or, more formally, has discontinuities.
For gradient-based operators, the edge strength at each pixel is the gradient
magnitude

|G| = \sqrt{G_x^2 + G_y^2}                                                  (3.1)

which is typically approximated, for speed, by

|G| \approx |G_x| + |G_y|                                                   (3.2)

where G_x and G_y are the horizontal and vertical derivative estimates.
The angle of orientation of the edge (relative to the pixel grid) giving rise to the
spatial gradient is given by equation 3.3,
\theta = \arctan(G_y / G_x)                                                 (3.3)
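As a concrete illustration of equations 3.1 to 3.3, the sketch below estimates G_x and G_y with Sobel kernels and then computes the gradient magnitude and orientation. OpenCV's Python API and the file name mri_slice.png are assumptions made for the example, not part of the original method.

import cv2
import numpy as np

# Load an MR slice as a grayscale image (file name is a placeholder).
img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Horizontal and vertical derivative estimates Gx and Gy (3x3 Sobel kernels).
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Equation 3.1: gradient magnitude |G| = sqrt(Gx^2 + Gy^2).
magnitude = np.hypot(gx, gy)

# Equation 3.2: cheaper approximation |G| ~ |Gx| + |Gy|.
magnitude_approx = np.abs(gx) + np.abs(gy)

# Equation 3.3: edge orientation theta = arctan(Gy / Gx), in radians.
orientation = np.arctan2(gy, gx)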
Although Canny's work dates from the early days of computer vision, the Canny
edge detector (including its variations) is still a state-of-the-art edge detector.
Unless the preconditions are particularly suitable, it is hard to find an edge detector
that performs significantly better than the Canny edge detector.
Fig 3.3 shows the edge detection output obtained by applying the Canny operator.
The Canny operator detects not only the tumor region but also unwanted artifacts.
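A minimal Canny sketch along the same lines, again assuming OpenCV; the Gaussian kernel size and the hysteresis thresholds (50, 150) are illustrative values only:

import cv2

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Smooth first to suppress noise, then apply the Canny detector.
# The hysteresis thresholds (50, 150) are illustrative, not tuned, values.
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("canny_edges.png", edges)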
For each pixel, the maximum response over the set of kernels becomes the value of
the corresponding pixel in the output magnitude image. The values in the output
orientation image lie between 1 and 8, depending on which of the 8 kernels
produced the maximum response.
This edge detection method is also called edge template matching, because a
set of edge templates is matched to the image, each representing an edge in a
certain orientation. The edge magnitude and orientation of a pixel are then
determined by the template that best matches the local area around the pixel.
On the other hand, the set of kernels requires 8 convolutions per pixel,
whereas the gradient method needs only 2, with one kernel sensitive to edges in the
vertical direction and one to the horizontal direction. The resulting edge magnitude
image is very similar for both methods, provided the same convolution kernel is
used.
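The template-matching idea can be sketched as follows, assuming a Prewitt-style compass kernel that is rotated through the eight orientations; the specific kernel, rotation scheme, and file name are illustrative choices, not prescribed by this work.

import cv2
import numpy as np

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# One compass kernel (Prewitt-style, pointing "north"); the other seven
# orientations are obtained by rotating its border entries 45 degrees at a time.
base = np.array([[ 1,  1,  1],
                 [ 0,  0,  0],
                 [-1, -1, -1]], dtype=np.float32)

def rotate45(k):
    """Rotate the 8 border entries of a 3x3 kernel one step clockwise."""
    out = k.copy()
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[r, c] for r, c in ring]
    vals = vals[-1:] + vals[:-1]
    for (r, c), v in zip(ring, vals):
        out[r, c] = v
    return out

kernels = [base]
for _ in range(7):
    kernels.append(rotate45(kernels[-1]))

# Convolve with all 8 templates; magnitude = maximum response per pixel,
# orientation = index (1..8) of the kernel that produced that maximum.
responses = np.stack([cv2.filter2D(img, cv2.CV_32F, k) for k in kernels])
magnitude = responses.max(axis=0)
orientation = responses.argmax(axis=0) + 1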
Figure 3.4 Output of Edge Detection by Prewitt Operator
Fig 3.4 shows the edge detection output obtained by applying the Prewitt operator.
Like the Sobel operator, the Prewitt operator detects only the boundary of the object.
For the Roberts operator, the gradient magnitude is given by

|G| = \sqrt{G_x^2 + G_y^2}                                                  (3.4)

although typically an approximate magnitude is computed using

|G| \approx |G_x| + |G_y|                                                   (3.5)
The angle of orientation of the edge giving rise to the spatial gradient (relative to
the pixel grid orientation) is given by

\theta = \arctan(G_y / G_x) - 3\pi/4
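OpenCV has no dedicated Roberts function, so a sketch of the operator can apply the two 2x2 Roberts cross kernels with a generic 2D convolution; the kernels below follow the standard definition, and the file name is a placeholder.

import cv2
import numpy as np

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Roberts cross kernels respond to edges along the two diagonals.
kx = np.array([[1,  0],
               [0, -1]], dtype=np.float32)
ky = np.array([[ 0, 1],
               [-1, 0]], dtype=np.float32)

gx = cv2.filter2D(img, cv2.CV_32F, kx)
gy = cv2.filter2D(img, cv2.CV_32F, ky)

# Magnitude (eqs. 3.4/3.5) and orientation; the 3*pi/4 shift accounts for the
# kernels being aligned with the diagonals rather than the pixel grid.
magnitude = np.hypot(gx, gy)
orientation = np.arctan2(gy, gx) - 3 * np.pi / 4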
Figure 3.5 Output of Edge Detection by Roberts Operator
Fig 3.5 shows the edge detection output obtained by applying the Roberts operator.
As the above outputs show, all of these operators fail to isolate the tumor location.
The method is useful in images with backgrounds and foregrounds that are
both bright or both dark. In particular, the method can lead to better views of bone
structure in x-ray images, and to better detail in photographs that are over- or
under-exposed. A key advantage of the method is that it is a fairly straightforward
technique and an invertible operator. So in theory, if the histogram equalization
function is known, then the original histogram can be recovered. The calculation is
not computationally intensive. A disadvantage of the method is that it is
indiscriminate. It may increase the contrast of background noise, while decreasing
the usable signal.
It can dramatically change the character of the image, e.g., the average
luminance (mean) of the image. Changing the overall illumination of an MR
image shifts the peaks in the histogram, so there is very little scope to
improve contrast by a global transformation.
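A minimal histogram-equalization sketch, assuming an 8-bit grayscale MR slice and OpenCV; the file names are placeholders.

import cv2

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Global histogram equalization: spreads the intensity histogram over the
# full 0-255 range. Note that it is indiscriminate and may amplify noise.
equalized = cv2.equalizeHist(img)

cv2.imwrite("equalized.png", equalized)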
Fig 3.8 shows the output images obtained by applying various threshold values; a
minimal thresholding sketch is given below.
• Pixels assigned to a single class need not form coherent regions as the
spatial locations of pixels are completely ignored.
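The thresholding of Fig 3.8 can be reproduced with a sketch such as the following; the fixed threshold of 128 is an arbitrary illustrative value, and Otsu's method is shown only as one way of picking the threshold automatically.

import cv2

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Fixed threshold: every pixel above 128 becomes foreground (255).
_, binary_fixed = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Otsu's method chooses the threshold automatically from the histogram.
otsu_value, binary_otsu = cv2.threshold(img, 0, 255,
                                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)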
Fig 3.9 shows the segmented image obtained by applying the region-based
algorithm. In this output the tumor regions are segmented accurately, but the
drawback of the region-based algorithm is that suitable seed points are difficult to
identify; a minimal region-growing sketch is given below.
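The sketch assumes a manually chosen seed point and a fixed intensity tolerance; both are illustrative parameters rather than values from this work.

import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed intensity by at most `tol`."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - seed_val) <= tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

For example, region_grow(gray, (120, 96), tol=15) would return a Boolean mask of the region grown around that seed; the quality of the result depends entirely on how well the seed and tolerance are chosen, which is exactly the drawback noted above.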
CHAPTER 4
PROPOSED TECHNIQUES
MODULES
1. IMAGE ACQUISITION
2. PREPROCESSING
3. IMAGE RESIZING
4. SEGMENTATION
5. FILTERING
1. IMAGE ACQUISITION
2. IMAGE PRE-PROCESSING
Image pre-processing can significantly increase the reliability of an optical
inspection. Several filter operations that intensify or suppress certain image details
enable an easier or faster evaluation. Users are able to optimize a camera image
with just a few clicks.
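As one example of such a filter operation, the sketch below applies a median filter to suppress noise before further processing; the choice of a median filter and the kernel size of 5 are assumptions made for illustration.

import cv2

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Median filtering removes impulse noise while preserving edges better
# than a simple averaging filter; 5 is the (odd) kernel size.
denoised = cv2.medianBlur(img, 5)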
3. IMAGE RESIZING
To resize an image, use the image resize function. When you resize an image, you
specify the image to be resized and the magnification factor. To enlarge an image,
specify a magnification factor greater than 1. To reduce an image, specify a
magnification factor between 0 and 1.
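A short sketch of resizing by a magnification factor, using OpenCV's resize as a stand-in for the image resize function mentioned above; the factors 2.0 and 0.5 are examples.

import cv2

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# A magnification factor greater than 1 enlarges the image ...
enlarged = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_LINEAR)

# ... and a factor between 0 and 1 reduces it.
reduced = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)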
4. IMAGE SEGMENTATION
Segmentation partitions an image into distinct regions, each containing pixels with
similar attributes. To be meaningful and useful for image analysis and
interpretation, the regions should strongly relate to depicted objects or features of
interest. Meaningful segmentation is the first step from low-level image processing
transforming a greyscale or colour image into one or more other images to high-
level image description in terms of features, objects, and scenes. The success of
image analysis depends on the reliability of segmentation, but an accurate partitioning
of an image is generally a very challenging problem.
5. IMAGE FILTERING
Image filtering allows you to apply various effects on photos. The type of image
filtering described here uses a 2D filter similar to the one included in Paint Shop
Pro as User Defined Filter and in Photoshop as Custom Filter.
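A comparable user-defined 2D filter can be applied with a generic convolution; the 3x3 sharpening kernel below is only an example, not a filter prescribed by this work.

import cv2
import numpy as np

img = cv2.imread("photo.png")  # placeholder file name

# A user-defined 3x3 sharpening kernel, analogous to a custom filter in
# photo-editing software: centre weight 5, four direct neighbours -1.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)

filtered = cv2.filter2D(img, -1, kernel)
cv2.imwrite("filtered.png", filtered)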