
Image Stitching with OpenCV

Last Updated : 17 Jun, 2024

Image stitching is a fascinating technique that combines multiple images to create a seamless panoramic image. This technique is widely used in various applications such as creating wide-angle panoramas, virtual tours, and even in scientific imaging to cover a larger area. In this article, we'll explore how to perform image stitching using OpenCV and Python.

What is Image Stitching?

Image stitching is a process in computer vision and image processing where multiple images, typically of overlapping scenes, are combined to produce a single panoramic image. This technique involves aligning and blending the images to create a seamless and high-resolution composite.

Here are the key steps involved in image stitching:

  1. Image Acquisition: Capture multiple images of the scene with overlapping areas. These images are usually taken with a consistent orientation and similar exposure settings.
  2. Feature Detection: Identify distinctive features (like corners, edges, or specific patterns) in each image. Common algorithms for this task include SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF).
  3. Feature Matching: Match corresponding features between overlapping images. These correspondences are what allow the images to be aligned with each other.
  4. Homography Estimation: Compute a transformation matrix (homography) that aligns one image with the next. This matrix describes how to warp one image to match the perspective of another.
  5. Image Warping and Alignment: Apply the homography matrix to warp images into a common coordinate frame so that they overlap correctly.
  6. Blending: Seamlessly blend the overlapping areas to reduce visible seams and ensure a smooth transition between images. Techniques like feathering, multi-band blending, and exposure compensation are often used.
  7. Rendering: Combine the aligned and blended images into a single panoramic image. This may involve cropping to remove unwanted edges and adjusting the final image's exposure and color balance.

Image stitching is widely used in various applications, such as creating panoramic photos in consumer cameras, generating large-scale maps from satellite images, and constructing virtual environments in video games and simulations.
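
If you only need the final panorama, note that OpenCV also bundles all of these steps into a high-level Stitcher class. The snippet below is a minimal sketch of that API (the rest of this article builds the pipeline manually, step by step):

Python
import cv2

# High-level API: Stitcher handles feature detection, matching, warping and blending internally
images = [cv2.imread('image1.jpg'), cv2.imread('image2.jpg')]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('panorama.jpg', pano)
else:
    print('Stitching failed, status code:', status)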

Prerequisites:

To follow along with this tutorial, ensure you have the following installed:

  1. Python (>=3.6)
  2. OpenCV (>=4.0)
  3. NumPy

You can install OpenCV and NumPy using pip:

pip install opencv-python numpy

Images that will be used for image stitching:

image1:

[image1.jpg]

image2:

[image2.jpg]

Steps to perform Image Stitching with OpenCV and Python

Here are the steps to perform image stitching with OpenCV and Python.

Step 1: Import Necessary Libraries

Start by importing the necessary libraries.

Python
import cv2
import numpy as np

# cv2_imshow is a Colab-friendly replacement for cv2.imshow;
# in a local script, use cv2.imshow instead
from google.colab.patches import cv2_imshow

Step 2: Load Images

Load the images you want to stitch. Ensure the images have some overlapping regions.

Python
# Read both images from disk (cv2.imread returns None if a path is wrong)
img1 = cv2.imread('image1.jpg')
img2 = cv2.imread('image2.jpg')
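
Since cv2.imread silently returns None when a path is wrong, it is worth failing fast before the later steps; a minimal sketch:

Python
# Stop early if either image could not be read
if img1 is None or img2 is None:
    raise FileNotFoundError('Could not read image1.jpg or image2.jpg -- check the paths')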


Step 3: Detect Features and Find Matches

We'll use the SIFT (Scale-Invariant Feature Transform) algorithm to detect and describe features.

Python
# Initialize SIFT detector
sift = cv2.SIFT_create()

# Detect keypoints and descriptors
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Use BFMatcher to find matches
bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = bf.match(des1, des2)

# Sort matches by distance
matches = sorted(matches, key=lambda x: x.distance)
matches


Output:

[< cv2.DMatch 0x79c4db814af0>,
 < cv2.DMatch 0x79c4db814ef0>,
 < cv2.DMatch 0x79c4db814b70>,
 < cv2.DMatch 0x79c4db814ed0>,
 ...
 < cv2.DMatch 0x79c4db815710>,
 < cv2.DMatch 0x79c4db8156f0>]

The output is a list of 97 cv2.DMatch objects, sorted by ascending match distance (smaller distances mean better matches).
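
Cross-check matching works well here, but a common alternative is k-nearest-neighbour matching followed by Lowe's ratio test, which often rejects ambiguous matches more reliably. A minimal sketch:

Python
# Alternative: kNN matching + Lowe's ratio test (crossCheck must be off for knnMatch)
bf_knn = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = bf_knn.knnMatch(des1, des2, k=2)

# Keep a match only if it is clearly better than the second-best candidate
good_matches = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]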

Step 4: Draw Matches (Optional)

Visualize the matches to understand how well the features are detected and matched.

Python
# Draw the 50 best matches between the two images
img_matches = cv2.drawMatches(
    img1, kp1, img2, kp2, matches[:50], None,
    flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv2_imshow(img_matches)  # in a local script: cv2.imshow('Matches', img_matches)
cv2.waitKey(0)
cv2.destroyAllWindows()
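
If you want to keep the visualization for later inspection, you can also write it to disk:

Python
# Save the match visualization next to the input images
cv2.imwrite('matches.jpg', img_matches)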

Step 5: Homography Estimation

Calculate the homography matrix using the matched keypoints.

Python
# Coordinates of the matched keypoints in each image
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Homography that maps points of image 2 into image 1's coordinate frame,
# estimated robustly with RANSAC (reprojection threshold of 5 pixels)
H, mask = cv2.findHomography(pts2, pts1, cv2.RANSAC, 5.0)
H

Output:

array([[ 1.00000000e+00,  1.11199700e-16,  7.82259986e-13],
       [-4.70979401e-16,  1.00000000e+00,  8.18963956e-13],
       [-7.75167832e-19, -2.76371364e-20,  1.00000000e+00]])
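
findHomography also returns a mask marking which correspondences RANSAC kept as inliers, which is a quick way to judge how reliable the estimate is; a minimal sketch:

Python
# Count how many matches RANSAC accepted as inliers
inlier_count = int(mask.sum())
print(f'{inlier_count} of {len(matches)} matches are RANSAC inliers')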

Step 6: Warp Images

Warp one image to align with the other using the homography matrix.

Python
# Get the dimensions of the images
h1, w1 = img1.shape[:2]
h2, w2 = img2.shape[:2]

# Transform the corners of image 2 into image 1's frame
# (handy for checking where the warped image will land and sizing the canvas)
pts = np.float32([[0, 0], [0, h2], [w2, h2], [w2, 0]]).reshape(-1, 1, 2)
dst = cv2.perspectiveTransform(pts, H)

# Warp image 2 onto a canvas wide enough to hold both images
img2_warped = cv2.warpPerspective(img2, H, (w1 + w2, h1))

# Place the first image on the left side of the canvas
img2_warped[0:h1, 0:w1] = img1
img2_warped

Output:

array([[[0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0]],

       ...,

       [[0, 0, 0],
        [0, 0, 0],
        ...,
        [0, 0, 0]]], dtype=uint8)
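
Because the canvas was allocated w1 + w2 pixels wide, its right-hand part is usually black where no pixels were mapped. A minimal sketch for trimming those empty columns (the cropped variable name is just for illustration):

Python
# Trim fully black columns from the right of the warped canvas
gray = cv2.cvtColor(img2_warped, cv2.COLOR_BGR2GRAY)
non_empty_cols = np.where(gray.max(axis=0) > 0)[0]
cropped = img2_warped[:, :non_empty_cols[-1] + 1] if len(non_empty_cols) else img2_warped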

Step 7: Blend Images

Combine the aligned images into the final panorama. Here the warped canvas, with image 1 pasted on the left, already serves as the result; a basic feathered blend for smoother seams is sketched after the output below.

Python
# Simple approach: the warped canvas already has image 1 pasted over the
# overlap region, so we use it directly as the stitched result
result = img2_warped

cv2_imshow(result)  # in a local script: cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output:

[Stitched panorama: Image Stitching with OpenCV and Python]
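
For smoother seams you can replace the hard paste with a simple feathered (alpha) blend across the overlap. The sketch below assumes, purely for illustration, that the overlap occupies roughly the right-hand 20% of image 1; in practice you would derive the overlap band from the transformed corners computed in Step 6:

Python
# Feathered blend over an assumed overlap band at the right edge of image 1
overlap_start = int(w1 * 0.8)   # illustrative guess for the overlap start column
blend_width = w1 - overlap_start

warped = cv2.warpPerspective(img2, H, (w1 + w2, h1)).astype(np.float32)
canvas = warped.copy()
canvas[0:h1, 0:w1] = img1.astype(np.float32)

# Weights ramp from 1 (pure image 1) down to 0 (pure warped image 2) across the band
alpha = np.linspace(1.0, 0.0, blend_width)[None, :, None]
canvas[0:h1, overlap_start:w1] = (
    alpha * img1[:, overlap_start:w1].astype(np.float32)
    + (1 - alpha) * warped[0:h1, overlap_start:w1]
)
blended = canvas.astype(np.uint8)
cv2_imshow(blended)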

Conclusion

You've now successfully stitched two images into a single panoramic image using OpenCV and Python. This basic workflow can be extended and refined for more complex scenarios, such as stitching multiple images or handling different lighting conditions.

