Lecture 2 AI Summary

Image noise refers to unwanted variations in brightness or color that degrade image quality, often caused by imperfections in imaging systems, environmental conditions, or transmission errors. Noise filtering is essential for improving image clarity and accuracy, particularly in applications like medical imaging and machine learning. Edge detection techniques identify significant transitions in intensity, aiding in understanding image structure and enhancing features.

Image Noise

What is Image Noise?

 Noise in an image refers to unwanted random variations in brightness or color that can degrade its quality.

 It often occurs due to:

o Imperfections in the imaging system:

 Camera electronics (e.g., sensor issues)

 Quality of lenses (blurry or scratched)

o Environmental conditions:

 Lighting variations (too dark or too bright)

 Reflectance from shiny or uneven surfaces

o Transmission errors: Issues when images are sent digitally

Examples of Noise

1. Gaussian Noise:

o Pixels have slight random intensity changes following a Gaussian (bell-curve) distribution.

o More common in natural scenes and digital photography.

2. Salt and Pepper Noise:

o Pixels are randomly turned completely black (0) or white (255).

o Common in transmission errors.
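
Both noise types are easy to simulate. A minimal NumPy sketch, assuming an 8-bit grayscale image held in a float array (the image contents, the noise strength sigma, and the ~5% corruption rate are all illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)  # stand-in image

    # Gaussian noise: add a zero-mean normal perturbation to every pixel
    sigma = 10.0
    gaussian_noisy = np.clip(img + rng.normal(0, sigma, img.shape), 0, 255)

    # Salt-and-pepper noise: force a random ~5% of pixels to pure black or white
    sp_noisy = img.copy()
    mask = rng.random(img.shape)
    sp_noisy[mask < 0.025] = 0      # "pepper" (black)
    sp_noisy[mask > 0.975] = 255    # "salt" (white)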

Why Remove Noise? Noise obscures the true information in an image. To analyze or use the image (e.g., for machine learning or computer vision tasks), the noise needs to be filtered out.

How This Relates to Real Images

When you see an image affected by noise:

 Additive noise might show random light or dark dots scattered over the image.

 Multiplicative noise might make some areas overly bright or overly dark, depending
on the scaling effect.
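
The two models differ by a single operation; a hedged sketch (the array I and both noise strengths are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    I = rng.integers(0, 256, size=(64, 64)).astype(np.float64)  # clean image

    # Additive model: I_noisy = I + n; the perturbation is independent of brightness
    additive = np.clip(I + rng.normal(0, 10, I.shape), 0, 255)

    # Multiplicative model: I_noisy = I * (1 + n); the perturbation scales with
    # brightness, so bright regions are disturbed more strongly than dark ones
    multiplicative = np.clip(I * (1 + rng.normal(0, 0.2, I.shape)), 0, 255)
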
Image Filtering

What is Image Filtering?

 Image filtering is the process of modifying or analyzing an image by applying a mathematical operation over its pixels.

 It’s used for:

1. Enhancing images: Removing noise, increasing contrast, resizing, etc.

2. Extracting information: Finding textures, edges, or patterns.

3. Detecting objects or features: Identifying shapes or matching templates.

How Does Filtering Work?

At its core:

 For every pixel in the image, a function of its surrounding neighborhood is calculated and applied.

 This operation changes the pixel’s value to produce a new, filtered image.

Key Terms:

1. Kernel (or Filter):

o A small grid (e.g., 3x3, 5x5) of numbers used to define the filter operation.

o The kernel slides over the image, pixel by pixel, applying the filter function.

2. Neighborhood:

o The pixels surrounding the current pixel being processed.

o The kernel interacts with this neighborhood.

Linear Filtering

Linear filtering is the most basic type of filtering, where the new pixel value is calculated as a
weighted sum of its neighbors, using the kernel.

 How It Works:

o The kernel is placed over a region of the image.

o Each value in the kernel is multiplied by the corresponding pixel value.

o The results are summed up to compute the new pixel value.
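
A direct (unoptimized) sketch of this weighted sum on a grayscale NumPy array with an odd-sized kernel; library routines such as scipy.ndimage.correlate compute the same thing far faster:

    import numpy as np

    def linear_filter(img, kernel):
        # New pixel value = weighted sum of the neighborhood under the kernel.
        kh, kw = kernel.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(img, ((ph, ph), (pw, pw)), mode="constant")  # zero padding
        out = np.empty(img.shape, dtype=np.float64)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                region = padded[y:y + kh, x:x + kw]   # the neighborhood
                out[y, x] = np.sum(region * kernel)   # multiply, then sum
        return out

    box = np.ones((3, 3)) / 9.0  # 3x3 averaging kernel

With the averaging kernel box, every output pixel becomes the mean of its 3x3 neighborhood, which is the simplest smoothing filter; other kernels weight the same neighborhood differently.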


Padding in Filtering

 What is Padding?

o When the kernel slides over the edges of an image, some pixels don't have
enough neighbors to compute a new value. Padding solves this by adding
extra rows or columns around the image.

 Types of Padding:

1. Constant Padding:

 Add a fixed value (e.g., 0 for black) around the image.

2. Reflect Padding:

 Extend the image by mirroring its edges.
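
Both types map directly onto np.pad modes; a tiny sketch with a 1-pixel border (the width that matches a 3x3 kernel):

    import numpy as np

    img = np.array([[1, 2],
                    [3, 4]])

    constant = np.pad(img, 1, mode="constant", constant_values=0)  # black border
    reflect  = np.pad(img, 1, mode="reflect")                      # mirrored border

    # constant:           reflect:
    # [[0 0 0 0]          [[4 3 4 3]
    #  [0 1 2 0]           [2 1 2 1]
    #  [0 3 4 0]           [4 3 4 3]
    #  [0 0 0 0]]          [2 1 2 1]]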


Convolution

 How It Works:

1. The kernel is flipped horizontally and vertically.


2. It is placed over the image.

3. Each value in the flipped kernel is multiplied by the corresponding pixel value
in the image.

4. The products are summed up to compute the new pixel value.

 Why Flip the Kernel?

o Flipping makes convolution associative and commutative, which are useful mathematical properties.
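
The flip-then-multiply-and-sum recipe can be checked against SciPy's built-in convolution; a sketch (an asymmetric kernel is used so the flip actually matters):

    import numpy as np
    from scipy.signal import convolve2d, correlate2d

    img = np.arange(25, dtype=np.float64).reshape(5, 5)
    kernel = np.array([[1., 2., 3.],
                       [4., 5., 6.],
                       [7., 8., 9.]])

    # Convolution = correlation with the kernel flipped in both directions
    flipped = kernel[::-1, ::-1]
    a = correlate2d(img, flipped, mode="same", boundary="fill")
    b = convolve2d(img, kernel, mode="same", boundary="fill")
    assert np.allclose(a, b)  # the two formulations agree
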
Noise Filtering

What Is Noise Filtering? Noise filtering is the process of removing unwanted random
variations (noise) from an image while preserving important features like edges and
textures.

1. Why Remove Noise?

Noise can degrade the quality of an image, making it difficult to analyze or use for tasks like
object detection, edge detection, or pattern recognition. Removing noise improves:

 Clarity: Enhances the overall appearance of the image.

 Accuracy: Reduces errors in downstream image processing tasks.

2. Types of Noise and Filtering Methods

A. Gaussian Noise

 Nature: Random variations follow a Gaussian (bell curve) distribution.

 Filtering Method: Gaussian filter

o Smooths out noise by averaging each pixel with Gaussian-weighted neighbors, at the cost of slightly blurring fine detail.

o Reduces high-frequency variations caused by Gaussian noise.
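
A hedged denoising sketch using scipy.ndimage.gaussian_filter (the test image, noise level, and sigma=1.5 are illustrative; a larger sigma smooths more but also blurs more):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64))
    clean[16:48, 16:48] = 200.0                      # a bright square on black
    noisy = clean + rng.normal(0, 25, clean.shape)   # corrupt with Gaussian noise

    denoised = gaussian_filter(noisy, sigma=1.5)     # Gaussian smoothing
    # The pixel-to-pixel noise drops sharply; the square's edges soften slightly.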


5. Applications of Noise Filtering

1. Medical Imaging:

o Removing speckle noise from ultrasound or MRI images for clearer diagnosis.

2. Photography:

o Cleaning up noisy photos taken in low light.

3. Satellite Imaging:

o Reducing noise in images taken from space.

4. Preprocessing for Machine Learning:

o Removing noise to improve feature extraction and model performance.


Edge Detection

What Is Edge Detection?

 Edge detection is a technique for identifying points in an image where the intensity changes sharply, such as object boundaries or texture changes.

 Edges represent significant transitions in the scene, making them crucial for
understanding the image structure.

1. How Does Edge Detection Work?

Edges are found by detecting rapid changes in pixel intensity. This is done by calculating the
derivative of the image:

 First Derivative: Responds where intensity changes; edges appear as peaks in the response (e.g., Sobel, Prewitt filters).

 Second Derivative: Responds to changes in the gradient itself; edges appear as zero crossings (e.g., Laplacian filter).

2. Steps in Edge Detection

A. Smoothing

 Noise in the image can cause false edges.

 Preprocessing the image with a smoothing filter (e.g., Gaussian filter) reduces noise.

B. Gradient Calculation

 Compute the gradient of intensity values to identify where the changes are sharpest.

 Common methods include Sobel, Prewitt, and Roberts filters.

C. Thresholding

 After finding edges, apply a threshold to keep only significant ones and remove weak
edges.
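
The three steps chain together in a few lines; a sketch using SciPy's gaussian_filter and sobel (the synthetic step edge and the threshold of 50% of the maximum are illustrative; thresholds are normally tuned per image):

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64))
    img[:, 32:] = 255.0                      # a vertical step edge
    img += rng.normal(0, 10, img.shape)      # noise that would cause false edges

    # A. Smoothing: suppress noise before differentiating
    smooth = gaussian_filter(img, sigma=1.0)

    # B. Gradient calculation: Sobel derivatives along each axis
    gx = sobel(smooth, axis=1)               # horizontal intensity change
    gy = sobel(smooth, axis=0)               # vertical intensity change
    magnitude = np.hypot(gx, gy)

    # C. Thresholding: keep only the strong responses
    edges = magnitude > 0.5 * magnitude.max()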

Image Enhancement:

 Applying gamma correction for displays with non-linear brightness.
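
A minimal gamma-correction sketch (gamma = 2.2 is a common display value; the right exponent depends on the target display):

    import numpy as np

    def gamma_correct(img, gamma=2.2):
        # Map 8-bit intensities through out = 255 * (in / 255) ** (1 / gamma).
        normalized = img.astype(np.float64) / 255.0
        return (255.0 * normalized ** (1.0 / gamma)).astype(np.uint8)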


Averages:

 Denoising and smoothing images.

 Reducing fine-grained details or random noise.


4. Applications of Discrete Derivatives

1. First Derivatives:

o Edge detection (e.g., Sobel or Prewitt filters).

o Texture analysis.

2. Second Derivatives:

o Detecting sharper edges and corners (e.g., Laplacian filter).

o Highlighting regions of rapid intensity change.
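
The standard discrete kernels behind these applications, applied to a 1D intensity step (the signal is illustrative; [1, -1] is a backward difference and [1, -2, 1] is the 1D Laplacian):

    import numpy as np

    signal = np.array([10., 10., 10., 50., 50., 50.])      # a step edge

    first  = np.convolve(signal, [1, -1], mode="same")     # f(x) - f(x-1)
    second = np.convolve(signal, [1, -2, 1], mode="same")  # f(x+1) - 2f(x) + f(x-1)

    # Ignoring the boundary samples: `first` peaks once at the step, while
    # `second` shows a +/- pair (a zero crossing) exactly at the edge, which is
    # what Laplacian-style detectors look for.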


Result (from the lecture's worked example):

 The center pixel, where intensity changes sharply, has a high response (360).
 Smooth regions (e.g., the background) have a low or zero response.

3. Practical Observations

First Derivatives:

 Good for detecting edges in simple intensity transitions.

 Might produce thick or noisy edges if the gradient is not well-defined.

Second Derivatives:

 Better for emphasizing corners and regions with rapid intensity changes.

 More sensitive to noise, which can result in false edges.

4. Applications

1. Edge Detection:

o First derivatives detect edges in images using methods like Sobel or Prewitt
filters.

o Second derivatives refine edges using Laplacian filters.

2. Feature Extraction:

o Gradients are used in object recognition tasks to detect features like edges,
corners, or shapes.

3. Image Sharpening:

o Second derivatives can highlight edges and enhance image details (see the sketch after this list).

4. Noise Detection:

o Second derivatives amplify noise, which makes them useful for detecting and analyzing noise in an image.
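
A sketch of second-derivative (Laplacian) sharpening, a standard instance of application 3 above, using scipy.ndimage.laplace (the strength factor is illustrative):

    import numpy as np
    from scipy.ndimage import laplace

    def sharpen(img, strength=1.0):
        # Subtract the Laplacian to boost intensity transitions: out = f - c * lap(f).
        lap = laplace(img.astype(np.float64))
        return np.clip(img - strength * lap, 0, 255)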
