Lecture 2 AI Summary
o Environmental conditions: poor lighting, sensor heat, or transmission errors can introduce noise during image capture.
Examples of Noise
1. Gaussian Noise:
o Random intensity variations drawn from a normal (Gaussian) distribution.
Why Remove Noise? Noise obscures the true information in an image. To analyze or use the
image (e.g., for machine learning or computer vision tasks), noise needs to be filtered out.
How This Relates to Real Images
o Additive noise might show up as random light or dark dots scattered over the image.
o Multiplicative noise might make some areas overly bright or overly dark, depending on the scaling effect.
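The two noise models above can be sketched in plain Python. This is a minimal illustration, not lecture code: the function name `add_noise`, the sigma values, and the tiny 2x2 image are all illustrative.

```python
import random

def add_noise(image, mode="additive", sigma=10.0, seed=0):
    """Apply a simple noise model to a grayscale image (list of rows).

    additive:       noisy = pixel + n,            n ~ N(0, sigma)
    multiplicative: noisy = pixel * (1 + n/100),  n ~ N(0, sigma)
    """
    rng = random.Random(seed)
    out = []
    for row in image:
        new_row = []
        for p in row:
            n = rng.gauss(0.0, sigma)
            if mode == "additive":
                v = p + n                 # noise independent of brightness
            else:
                v = p * (1.0 + n / 100.0) # noise scales with brightness
            new_row.append(min(255.0, max(0.0, v)))  # clamp to valid range
        out.append(new_row)
    return out

clean = [[100, 200], [50, 150]]
noisy_add = add_noise(clean, "additive")
noisy_mul = add_noise(clean, "multiplicative")
```

Note how the multiplicative model perturbs bright pixels more strongly than dark ones, which matches the "overly bright or overly dark areas" behavior described above.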
Image Filtering
At its core, a filter applies an operation to each pixel and its surrounding neighbors; this operation changes the pixel's value to produce a new, filtered image.
Key Terms:
1. Kernel (Filter):
o A small grid (e.g., 3x3, 5x5) of numbers used to define the filter operation.
o The kernel slides over the image, pixel by pixel, applying the filter function.
2. Neighborhood:
o The set of pixels surrounding the current pixel that the kernel covers.
Linear Filtering
Linear filtering is the most basic type of filtering, where the new pixel value is calculated as a
weighted sum of its neighbors, using the kernel.
How It Works:
1. The kernel is flipped horizontally and vertically (this is what distinguishes convolution from correlation).
2. The flipped kernel slides over the image, centered on each pixel in turn.
3. Each value in the flipped kernel is multiplied by the corresponding pixel value in the image, and the products are summed to give the new pixel value.
What is Padding?
o When the kernel slides over the edges of an image, some pixels don't have enough neighbors to compute a new value. Padding solves this by adding extra rows or columns around the image.
Types of Padding:
1. Constant Padding:
o Pads the border with a fixed value (e.g., zeros).
2. Reflect Padding:
o Pads the border by mirroring the pixels just inside it.
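Putting the steps together, here is a minimal pure-Python sketch of linear filtering by convolution with constant (zero) padding. The name `convolve2d` and the 3x3 box-filter example are illustrative, not from the lecture.

```python
def convolve2d(image, kernel, pad_value=0):
    """Linear filtering by convolution: flip the kernel, slide it over
    every pixel, and sum the elementwise products. Border pixels use
    constant padding (pad_value)."""
    kh, kw = len(kernel), len(kernel[0])
    # Step 1: flip the kernel horizontally and vertically.
    flipped = [row[::-1] for row in kernel[::-1]]
    ph, pw = kh // 2, kw // 2
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    iy, ix = y + ky - ph, x + kx - pw
                    if 0 <= iy < h and 0 <= ix < w:
                        pixel = image[iy][ix]
                    else:
                        pixel = pad_value  # constant padding at the border
                    acc += flipped[ky][kx] * pixel
            out[y][x] = acc
    return out

# 3x3 box (mean) filter: each output pixel is the average of its
# 3x3 neighborhood, so the single bright pixel gets spread out.
box = [[1 / 9] * 3 for _ in range(3)]
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
smoothed = convolve2d(img, box)
```

Flipping a symmetric kernel like the box filter changes nothing, which is why correlation and convolution are often used interchangeably for smoothing.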
What Is Noise Filtering? Noise filtering is the process of removing unwanted random
variations (noise) from an image while preserving important features like edges and
textures.
Noise can degrade the quality of an image, making it difficult to analyze or use for tasks like
object detection, edge detection, or pattern recognition. Removing noise improves image clarity and the accuracy of any downstream analysis.
A. Gaussian Noise
o Random intensity variations following a normal distribution; commonly reduced with averaging or Gaussian filters.
Applications of Noise Filtering
1. Medical Imaging:
o Removing speckle noise from ultrasound or MRI images for clearer diagnosis.
2. Photography:
o Reducing sensor noise in low-light or high-ISO photos.
3. Satellite Imaging:
o Cleaning up atmospheric interference and sensor noise in remote-sensing images.
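As a concrete sketch of Gaussian-noise reduction, the following applies the classic 3x3 Gaussian kernel approximation [[1,2,1],[2,4,2],[1,2,1]]/16 with reflect padding to a synthetic noisy image. The function names and the 16x16 test image are illustrative assumptions, not lecture material.

```python
import random

def gaussian3x3(image):
    """Smooth with the 3x3 Gaussian approximation, using reflect
    padding (mirroring indices at the border)."""
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in (-1, 0, 1):
                for kx in (-1, 0, 1):
                    iy, ix = y + ky, x + kx
                    # reflect padding: mirror out-of-range indices
                    if iy < 0: iy = -iy
                    elif iy >= h: iy = 2 * h - 2 - iy
                    if ix < 0: ix = -ix
                    elif ix >= w: ix = 2 * w - 2 - ix
                    acc += k[ky + 1][kx + 1] * image[iy][ix]
            out[y][x] = acc / 16.0  # kernel weights sum to 16
    return out

def variance(img):
    vals = [v for row in img for v in row]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# Flat gray image plus Gaussian noise; smoothing should reduce
# the pixel variance while preserving the overall brightness.
rng = random.Random(1)
noisy = [[128 + rng.gauss(0, 20) for _ in range(16)] for _ in range(16)]
smooth = gaussian3x3(noisy)
```

Averaging neighboring pixels cancels out independent noise samples, which is why the variance drops after smoothing; the trade-off is that edges get blurred too.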
Edges represent significant transitions in the scene, making them crucial for
understanding the image structure.
Edges are found by detecting rapid changes in pixel intensity. This is done by calculating the
derivative of the image:
First Derivative (Gradient): Highlights areas where intensity changes quickly (e.g., Sobel filter).
Second Derivative: Responds to changes in the rate of change, highlighting the sharpest transitions (e.g., Laplacian filter).
A. Smoothing
Preprocessing the image with a smoothing filter (e.g., Gaussian filter) reduces noise.
B. Gradient Calculation
Compute the gradient of intensity values to identify where the changes are sharpest.
C. Thresholding
After finding edges, apply a threshold to keep only significant ones and remove weak
edges.
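Steps B and C above can be sketched as follows, using Sobel kernels for the gradient (step A, smoothing, is assumed to have been done already). The function name `sobel_edges`, the threshold value, and the step-edge test image are illustrative.

```python
def sobel_edges(image, threshold):
    """Step B: Sobel x/y gradients and their magnitude.
    Step C: threshold the magnitude to keep only strong edges."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal changes
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical changes
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):          # skip the 1-pixel border for brevity
        for x in range(1, w - 1):
            gx = gy = 0.0
            for ky in range(3):
                for kx in range(3):
                    p = image[y + ky - 1][x + kx - 1]
                    gx += gx_k[ky][kx] * p
                    gy += gy_k[ky][kx] * p
            mag = (gx * gx + gy * gy) ** 0.5  # gradient magnitude
            edges[y][x] = 1 if mag >= threshold else 0
    return edges

# Vertical step edge: dark left half, bright right half.
img = [[0] * 4 + [255] * 4 for _ in range(6)]
edges = sobel_edges(img, threshold=100)
```

On this image the gradient magnitude is large only in the columns straddling the dark-to-bright step, so thresholding leaves a clean vertical edge and zeros everywhere else.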
Image Enhancement:
1. First Derivatives:
o Texture analysis.
2. Second Derivatives:
o Strong response where intensity changes sharply. In the lecture's worked example, the center pixel, where intensity changes sharply, has a high response (360), while smooth regions (e.g., the background) have a low or zero response.
3. Practical Observations
First Derivatives:
o Better for detecting edges and gradual intensity transitions.
Second Derivatives:
o Better for emphasizing corners and regions with rapid intensity changes.
4. Applications
1. Edge Detection:
o First derivatives detect edges in images using methods like Sobel or Prewitt
filters.
2. Feature Extraction:
o Gradients are used in object recognition tasks to detect features like edges,
corners, or shapes.
3. Image Sharpening:
o Adding the scaled second-derivative (Laplacian) response back to the image enhances edges and fine detail.
4. Noise Detection:
o Because derivative filters amplify rapid variations, strong responses in otherwise smooth regions can reveal noisy pixels.
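The sharpening application can be sketched with the standard 4-neighbor Laplacian kernel. This is a minimal illustration; the function name `laplacian_sharpen`, the `amount` parameter, and the bright-dot test image are assumptions, not lecture code.

```python
def laplacian_sharpen(image, amount=1.0):
    """Sharpen by subtracting the Laplacian response: with the
    negative-center kernel, sharpened = image - amount * laplacian,
    which boosts pixels that differ sharply from their neighbors."""
    lap = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # 4-neighbor Laplacian
    h, w = len(image), len(image[0])
    out = [[float(v) for v in row] for row in image]
    for y in range(1, h - 1):          # border pixels left unchanged
        for x in range(1, w - 1):
            resp = 0.0
            for ky in range(3):
                for kx in range(3):
                    resp += lap[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = min(255.0, max(0.0, image[y][x] - amount * resp))
    return out

# A bright dot on a dark background: the dot's Laplacian response is
# strongly negative, so subtracting it makes the dot even brighter.
img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
sharp = laplacian_sharpen(img)
```

This is the same second-derivative response discussed above put to work: smooth regions have near-zero response and are left alone, while sharp transitions are exaggerated.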