CV 2 marks

2 MARKS (SET 1)

UNIT 1: IMAGE PROCESSING FOUNDATIONS

1. How does corner detection differ from edge detection?

Feature      | Corner Detection                                                              | Edge Detection
Purpose      | Detects points where the intensity changes significantly in all directions.  | Identifies boundaries where the intensity changes sharply in one direction.
Output       | Yields points of intersection or high curvature, useful for feature matching. | Produces lines or curves that represent object boundaries in an image.
Applications | Common in motion tracking, object recognition, and 3D reconstruction.        | Useful for segmentation, shape analysis, and object boundary detection.

2. What is mathematical morphology in image processing?

1. Mathematical morphology involves operations based on shapes to analyze and process geometric
structures in images.
2. It uses techniques like dilation, erosion, opening, and closing to manipulate image structures.
3. Applications include noise removal, object boundary extraction, and image segmentation.
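For example, opening (erosion followed by dilation) removes specks smaller than the structuring element. A toy NumPy sketch of these operations (function names are illustrative, not from any library):

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element:
    a pixel stays 1 only if every pixel in its neighbourhood is 1."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    """Binary dilation: a pixel becomes 1 if any neighbour is 1."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

# Opening = erosion followed by dilation: removes small noise specks.
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1      # a 3x3 object
img[0, 0] = 1          # a single-pixel noise speck
opened = dilate(erode(img))
```

After opening, the isolated speck is gone while the 3x3 object survives intact.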

UNIT II: SHAPES AND REGIONS

3. What do you mean by active contours?

1. Active contours, also known as snakes, are curves that move through an image to find object
boundaries.
2. They are driven by energy functions based on image gradients and external forces.
3. Applications include segmentation, edge detection, and object tracking in dynamic images.

4. List the advantages of using size filtering in object detection.

1. Helps in removing noise or irrelevant objects by discarding objects smaller or larger than a specified
size.
2. Improves accuracy in detecting objects of interest by focusing on specific size ranges.
3. Reduces computational load by excluding unnecessary regions in an image.
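The idea can be sketched with a simple connected-component pass in NumPy (an illustrative implementation; `size_filter` is my own name):

```python
import numpy as np

def size_filter(binary, min_area, max_area):
    """Keep only connected components (4-connectivity) whose pixel
    count lies in [min_area, max_area]; discard the rest as noise."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(binary)
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and not seen[si, sj]:
                stack, comp = [(si, sj)], []       # flood fill one component
                seen[si, sj] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w \
                                and binary[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if min_area <= len(comp) <= max_area:
                    for i, j in comp:
                        out[i, j] = 1
    return out

noisy = np.zeros((6, 6), dtype=np.uint8)
noisy[0, 0] = 1          # 1-pixel speck (noise)
noisy[2:4, 1:4] = 1      # 2x3 object of interest (area 6)
cleaned = size_filter(noisy, min_area=4, max_area=10)
```

Only the component whose area falls in the allowed range is kept.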

UNIT III: HOUGH TRANSFORM

5. How is RANSAC applied to detect straight lines in images?


1. RANSAC (Random Sample Consensus) iteratively selects a random subset of points to estimate line
parameters.
2. It evaluates how well other points in the dataset align with the estimated line within a tolerance.
3. The model with the highest consensus set (inliers) is selected as the best-fit line.
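The three steps above can be sketched directly in NumPy (a minimal illustration using a `y = m*x + c` line model; in practice one would refit the line to all inliers at the end):

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_line(pts, n_iters=200, tol=0.5):
    """Fit y = m*x + c by RANSAC: repeatedly pick 2 random points,
    build the candidate line, and keep the line with most inliers."""
    best_count, best = 0, None
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), 2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                      # vertical line: skip in this model
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # residual = vertical distance of each point to the candidate line
        inliers = np.abs(pts[:, 1] - (m * pts[:, 0] + c)) < tol
        if inliers.sum() > best_count:
            best_count, best = inliers.sum(), (m, c)
    return best

# 20 points exactly on y = 2x + 1, plus three gross outliers
xs = np.arange(20.0)
pts = np.column_stack([xs, 2 * xs + 1])
pts = np.vstack([pts, [[5, 40], [10, -30], [15, 90]]])
m, c = ransac_line(pts)
```

Despite the outliers, the line with the largest consensus set is the true one.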

6. What is spatial matched filtering?

1. Spatial matched filtering enhances image features by correlating the image with a predefined
template or kernel.
2. It is optimal for detecting features that match the shape and orientation of the template.
3. Applications include edge detection, feature matching, and noise reduction.
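A bare-bones sketch of point 1 in NumPy (raw correlation only; practical matched filters usually subtract the template mean and normalize):

```python
import numpy as np

def matched_filter(image, template):
    """Correlate the image with the template at every valid offset;
    the response peaks where the image locally matches the template."""
    th, tw = template.shape
    H = image.shape[0] - th + 1
    W = image.shape[1] - tw + 1
    score = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            score[i, j] = np.sum(image[i:i + th, j:j + tw] * template)
    return score

# Locate a bright 2x2 blob on a dark background
img = np.zeros((6, 6))
img[3:5, 1:3] = 1.0
tmpl = np.ones((2, 2))
score = matched_filter(img, tmpl)
peak = np.unravel_index(score.argmax(), score.shape)   # top-left of best match
```

The peak of the response map gives the location where the template fits best.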

UNIT IV: 3D VISION AND MOTION

7. What is layered motion?

1. Layered motion refers to representing motion in a scene as multiple overlapping layers with distinct
movements.
2. Each layer corresponds to an object or region with a unique motion trajectory.
3. Used in video segmentation, background subtraction, and motion analysis.

8. What is photometric stereo?

1. Photometric stereo estimates the shape and surface properties of objects using images captured
under varying lighting conditions.
2. It analyzes shading variations to compute surface normals and depth information.
3. Applications include 3D modeling, surface inspection, and material analysis.
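For a Lambertian surface the recovery in point 2 reduces to least squares. A minimal single-pixel sketch (light directions and values are made up for illustration):

```python
import numpy as np

# Lambertian model: intensity I = albedo * dot(light_dir, normal).
# With >= 3 images under known, normalized light directions L (3x3 here),
# solve I = L @ g for g = albedo * n, then split g into albedo and normal.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

true_n = np.array([0.0, 0.0, 1.0])      # a flat, upward-facing patch
albedo = 0.8
I = albedo * (L @ true_n)               # simulated pixel intensities

g, *_ = np.linalg.lstsq(L, I, rcond=None)
n = g / np.linalg.norm(g)               # recovered unit surface normal
rho = np.linalg.norm(g)                 # recovered albedo
```

Repeating this per pixel yields a normal map, which can be integrated into depth.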

UNIT V: APPLICATIONS

9. How does an in-vehicle vision system locate the roadway?

1. It detects lane markings using edge detection and segmentation techniques.
2. It uses geometric constraints and perspective analysis to identify the road boundaries.
3. Machine learning models further enhance lane tracking and localization in dynamic environments.

10. How does the system detect and locate pedestrians?

1. Combines feature extraction techniques (e.g., HOG, Haar) with object detection algorithms like SVM
or YOLO.
2. Uses motion analysis to distinguish pedestrians from static background objects.
3. Depth information from stereo vision or LiDAR enhances detection accuracy in varying conditions.

2 MARKS (SET 2)
UNIT 1: IMAGE PROCESSING FOUNDATIONS

1. What are classical filtering operations in image processing?

1. Smoothing Filters: Reduce noise by averaging pixel values, e.g., Gaussian and mean filters.
2. Sharpening Filters: Enhance edges and fine details using techniques like the Laplacian filter.
3. Edge Detection Filters: Identify boundaries using gradient-based operators such as Sobel, often as part of a full detector such as Canny.
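The first and third categories can be demonstrated with one small convolution routine (a naive "valid" convolution written for clarity, not speed):

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'valid' 2-D convolution (kernel flipped, as in true convolution)."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

mean_kernel = np.ones((3, 3)) / 9.0          # smoothing filter
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], float)      # horizontal-gradient filter

# A vertical step edge: left half dark, right half bright
img = np.zeros((5, 6))
img[:, 3:] = 1.0
smooth = convolve2d(img, mean_kernel)
edges = convolve2d(img, sobel_x)             # large magnitude at the step
```

The Sobel response has its largest magnitude at the columns straddling the step, and is zero in the flat regions.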

2. Give various definitions of Computer Vision.

1. Computer Vision involves enabling machines to interpret and understand visual data from the world.
2. It focuses on algorithms to process, analyze, and extract meaningful information from images or
videos.
3. It aims to replicate human vision capabilities in applications like object recognition, scene
understanding, and 3D modeling.

UNIT II: SHAPES AND REGIONS

3. What do you mean by binary shape analysis?

1. Binary shape analysis deals with analyzing shapes in binary images where objects are separated
from the background.
2. Operations include measuring geometric properties such as area, perimeter, and eccentricity.
3. Common applications include object detection, recognition, and classification in simple scenes.

4. Write a short note on region descriptors.

1. Region descriptors provide quantitative measures that describe the properties of segmented regions
in an image.
2. Examples include area, centroid, aspect ratio, orientation, and compactness.
3. They are used in object recognition, feature extraction, and image classification tasks.
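The common descriptors in point 2 are a few lines of NumPy for a single binary region (function name is my own):

```python
import numpy as np

def region_descriptors(mask):
    """Basic descriptors of one binary region: area, centroid,
    bounding-box aspect ratio, and extent (box fill fraction)."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    centroid = (ys.mean(), xs.mean())
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect_ratio = w / h
    extent = area / (h * w)          # fraction of the bounding box filled
    return area, centroid, aspect_ratio, extent

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 3:5] = True                # a 4-tall, 2-wide rectangle
area, centroid, ar, extent = region_descriptors(mask)
```

For the rectangle above: area 8, centroid (3.5, 3.5), aspect ratio 0.5, extent 1.0.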

UNIT III: HOUGH TRANSFORM

5. What is the Hough Transform (HT)?

1. The Hough Transform is a feature extraction method to detect lines, curves, and shapes in images.
2. It maps points in the image space to a parameter space, where shapes correspond to parameter
clusters.
3. Widely used in detecting geometric shapes like circles, ellipses, and straight lines.
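The voting in point 2 can be sketched for straight lines using the normal parameterisation rho = x*cos(theta) + y*sin(theta) (a toy accumulator, unoptimised):

```python
import numpy as np

def hough_lines(points, img_diag, n_theta=180):
    """Vote in (rho, theta) space: each edge point (x, y) votes for every
    line rho = x*cos(theta) + y*sin(theta) that could pass through it."""
    thetas = np.deg2rad(np.arange(n_theta))
    rhos = np.arange(-img_diag, img_diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    for x, y in points:
        for t_idx, th in enumerate(thetas):
            rho = int(round(x * np.cos(th) + y * np.sin(th)))
            acc[rho + img_diag, t_idx] += 1
    return acc, thetas, rhos

# Ten points on the horizontal line y = 4 (theta = 90 deg, rho = 4)
pts = [(x, 4) for x in range(10)]
acc, thetas, rhos = hough_lines(pts, img_diag=15)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
```

All ten points vote into the same cluster of cells around (rho = 4, theta = 90 deg), so the accumulator peak recovers the line.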
6. What is the RANSAC algorithm?

1. RANSAC (Random Sample Consensus) is an iterative algorithm for fitting a model to data with
outliers.
2. It selects random subsets of data points and finds the best model fitting most points (inliers).
3. Commonly used in line fitting, homography estimation, and feature matching tasks.

UNIT IV: 3D VISION AND MOTION

7. What is optical flow?

1. Optical flow represents the motion of objects, surfaces, or edges between consecutive frames in a
video.
2. It is calculated using algorithms that estimate pixel-wise motion vectors.
3. Applications include motion tracking, video compression, and activity recognition.
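Point 2 can be illustrated with a single-patch Lucas-Kanade solve (one motion vector for the whole patch; real implementations work per pixel with windows and pyramids):

```python
import numpy as np

def lucas_kanade_patch(I1, I2):
    """Single-patch Lucas-Kanade: solve Ix*u + Iy*v = -It in least
    squares over the patch interior, giving one (u, v) motion vector."""
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1))[1:-1, 1:-1] / 2
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0))[1:-1, 1:-1] / 2
    It = (I2 - I1)[1:-1, 1:-1]
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

ys, xs = np.mgrid[0:8, 0:8]
I1 = (xs * ys).astype(float)        # a textured pattern
I2 = np.roll(I1, 1, axis=1)         # same pattern shifted 1 px to the right
u, v = lucas_kanade_patch(I1, I2)   # expect u ~ 1, v ~ 0
```

The texture must vary in both directions; on a pure ramp the system is rank-deficient (the aperture problem).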

8. Give any few limitations of the shape-from-shading method.

1. Assumes a single light source and fails in complex lighting conditions or shadows.
2. Struggles with textureless surfaces where shading variations are minimal.
3. Sensitive to noise, leading to inaccuracies in recovering 3D shapes.

UNIT V: APPLICATIONS

9. Define Chamfer matching.

1. Chamfer matching is a technique to match a template to an image by minimizing the distance between the template edges and image edges.
2. It uses a distance transform to compute the shortest distances between edges.
3. Commonly applied in object detection, shape matching, and template-based recognition.
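Both ingredients can be sketched in NumPy: a two-pass chamfer distance transform (city-block distance here) and a score that averages the transform under the shifted template points:

```python
import numpy as np

def distance_transform(edge_map):
    """Two-pass chamfer distance transform: each pixel gets the
    (city-block) distance to the nearest edge pixel."""
    INF = 10 ** 6
    d = np.where(edge_map, 0, INF).astype(float)
    h, w = d.shape
    for i in range(h):                    # forward pass
        for j in range(w):
            if i > 0: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):        # backward pass
        for j in range(w - 1, -1, -1):
            if i < h - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

def chamfer_score(dist, template_pts, offset):
    """Average distance from template edge points (shifted by offset)
    to the nearest image edge; lower = better match."""
    oy, ox = offset
    return np.mean([dist[y + oy, x + ox] for y, x in template_pts])

edges = np.zeros((10, 10), dtype=bool)
tpl = [(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)]   # an L-shaped template
for y, x in tpl:
    edges[y + 4, x + 4] = True                   # object placed at offset (4, 4)
dist = distance_transform(edges)
good = chamfer_score(dist, tpl, (4, 4))          # perfect alignment: score 0
bad = chamfer_score(dist, tpl, (2, 2))           # misaligned: larger score
```

Sweeping the offset and taking the minimum score locates the template in the image.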

10. What is foreground-background separation, and why is it important in surveillance?

1. Foreground-background separation involves segmenting moving objects (foreground) from a static or slowly changing background.
2. It is essential for tracking objects, detecting intrusions, and analyzing motion in surveillance videos.
3. Common methods include background subtraction, motion segmentation, and temporal differencing.
