
SCHOOL OF COMPUTER SCIENCE AND ENGINEERING

Video Tracking by Finding Contour of Interest and Using Paint Application to Show Path

Under the Guidance of,


Prof. Chiranji Lal Chowdhary

By

Yashvardhan Khemka 18BCE0735


Sahil Singh 18BCI0167
Apurva Anand 18BCE2102

Google Drive Link:

https://drive.google.com/file/d/1efMw3FwrXmQuWwyUo8P3lnNPn_LqMY3e/view?usp=sharing
SYNOPSIS

Background –
Video tracking is the process of locating a moving object (or multiple
objects) over time using a camera. It has a variety of uses, some of
which are: human-computer interaction, security and surveillance,
video communication and compression, augmented reality, traffic
control, medical imaging and video editing.
To perform video tracking, an algorithm analyzes sequential video
frames and outputs the movement of targets between the frames.
There are a variety of algorithms, each having strengths and
weaknesses. Considering the intended use is important when choosing
which algorithm to use. There are two major components of a visual
tracking system: target representation and localization, as well as
filtering and data association.
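
As a minimal illustration of the idea of comparing sequential frames (a generic sketch using simple frame differencing; this is not the method used later in this project):

import cv2

# Minimal frame-differencing sketch: mark pixels that changed between two
# consecutive frames captured from the default camera.
cap = cv2.VideoCapture(0)
ok1, prev = cap.read()
ok2, curr = cap.read()
if ok1 and ok2:
    diff = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imwrite('motion.png', motion_mask)
cap.release()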

Objectives –
The objective of video tracking is to associate target objects in
consecutive video frames. The association can be especially difficult
when the objects are moving fast relative to the frame rate. Another
situation that increases the complexity of the problem is when the
tracked object changes orientation over time.
TABLE OF CONTENTS

1. INTRODUCTION
2. LITERATURE SURVEY
3. BACKGROUND OF THE PROJECT WORK
4. PROPOSED WORK
5. EVALUATION AND RESULT ANALYSIS
6. TABULAR COMPARISON WITH EXISTING WORK
7. OVERALL DISCUSSION
8. CONCLUSION
9. REFERENCES
10. APPENDIX
INTRODUCTION

Motivation-

Several interests motivated the decision to make this project:

1. China monitors its citizens through the Internet, cameras, and other digital
technologies. Mass surveillance in China is closely related to its Social
Credit System and has expanded significantly under the China Internet Security
Law, with the help of local companies.

2. The Wii Remote was a major paradigm shift in user/game interaction. Over on the
PlayStation side, we had PlayStation Move, essentially a wand with both (1)
internal motion sensors and (2) an external motion-tracking component via a
webcam hooked up to the PlayStation 3 itself. Then there is the Xbox Kinect
(one of the largest modern-day computer vision success stories, especially
within gaming), which required no extra remote or wand: using a stereo camera
and a regression forest for pose classification, the Kinect allowed you to
become the controller.

It was this curiosity about how image tracking works, and how a remote controller
can know where an object is on the screen, that led to this project.
Given the real-time data from the webcam, the project makes use of OpenCV
to ensure that the pointer can recognize objects and contours present in
the webcam display and show them on the screen as distinct objects.
The project makes use of object tracking, painting along the screen with the
tracked object, and recognizing distinct shapes and contours, along with a
video demo to show the future capabilities of this technology.
CONTRIBUTION

• We would first and foremost like to thank the community on Stack Overflow for
helping us with doubts and with the debugging process; this project would not be
complete without their contribution.
• The Coursera course 'OpenCV and Artificial Intelligence' was used to teach the
project members the basic information and knowledge about OpenCV and digital
image processing.
• Python.org provided the necessary libraries and files needed for the
development and implementation of the project.
• Individual contribution: Sahil Singh (18BCI0167) provided the necessary
documentation and handled the overall development of the project (coding).
• Individual contribution: Apurva Anand (18BCE2102) aided in the testing and
debugging process, along with idea representation.
• Individual contribution: Yashvardhan Khemka (18BCE0735) provided the project
problem and designed the project, along with debugging and testing.
ORGANISATION OF REPORT

This report has been divided into 5 main sections

SECTION 1: Literature Survey

SECTION 2: Background and related components

SECTION 3: Design and Implementation

SECTION 4: Evaluation and Results

SECTION 5: Conclusion and references


LITERATURE SURVEY

1. CONTOUR BASED OBJECT TRACKING:

Xu and Ahuja proposed a contour-based object tracking algorithm to track object contours in
video sequences. In their algorithm, the active contour is segmented using the graph-cut
image segmentation method. The resulting contour of the previous frame is taken as the
initialization in each frame. The new object contour is found with the help of the intensity
information of the current frame and the difference between the current and previous frames.
Dokladal et al. proposed an active-contour-based approach to object tracking. For the
driver's-face tracking problem they used a combination of feature-weighted gradients and
contours of the object. In the segmentation step they computed the gradient of the image and
proposed a gradient-based attraction field for object tracking.

2. Edge Detection Techniques

In this section, work done in the area of edge detection is reviewed, with the
focus on detecting the edges of digital images. Edge detection is a problem of
fundamental importance in image analysis. In typical images, edges characterize
object boundaries and are therefore useful for segmentation, registration, and
identification of objects in a scene. Edge detection significantly reduces the
amount of data in an image and filters out information that may be regarded as
less relevant, while preserving the important structural properties of the image.
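
As a generic illustration of the kind of edge detection discussed here (a minimal sketch using OpenCV's Canny detector, not code taken from the surveyed papers; the file name 'sample.jpg' and the thresholds 100/200 are arbitrary example values):

import cv2

# Read an image, convert it to grayscale, and extract edges with the Canny detector.
image = cv2.imread('sample.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
cv2.imwrite('edges.jpg', edges)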
Literature survey table

Reference Number | Name of the Model/System | Dataset Used | Brief Description | Performance
1 | Contour Based Object Tracking | Webcam or video files | Uses webcam to find contours in objects and tracks motion | Works well under simple conditions
2 | Edge Detection Techniques | Video files | Uses algorithm to find different objects in the frame | Can distinguish between different shapes & sizes
Background of the Project

SOFTWARE REQUIREMENTS

This application is written in Python 3.6 and uses the widely used OpenCV
library. OpenCV is a computer vision and machine learning software library
that includes many common image analysis algorithms, which help us build
custom, intelligent computer vision applications.
We will also need NumPy for some arithmetic operations and will import deque
from collections, a data structure discussed below.
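
As a quick illustration of why deque is useful here (a small sketch, not project code): a deque created with maxlen discards its oldest entry once it is full, which makes it convenient for storing a bounded trail of recent pointer positions.

from collections import deque

# The oldest point is dropped automatically once the deque reaches maxlen.
trail = deque(maxlen=3)
for point in [(10, 10), (12, 11), (15, 13), (20, 18)]:
    trail.append(point)
print(trail)  # deque([(12, 11), (15, 13), (20, 18)], maxlen=3)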

METHODOLOGY
To run this program, some essential steps are required. Each step is
mentioned below, followed by a brief explanation.
PROPOSED WORK

Initialization
Firstly, we import the necessary libraries.

import numpy as np
import cv2
from collections import deque

Then we initialize the variables, starting with the lower and upper HSV bounds
used to detect the blue object:

blueLower = np.array([100, 60, 60])
blueUpper = np.array([140, 255, 255])
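
As a side illustration (not part of the project code) of why blue falls in this band: OpenCV represents hue on a 0–179 scale, and converting a pure-blue BGR pixel to HSV gives a hue of 120, inside the 100–140 range above.

# Convert a pure-blue BGR pixel to HSV to see where blue sits on OpenCV's hue scale.
blue_bgr = np.uint8([[[255, 0, 0]]])
print(cv2.cvtColor(blue_bgr, cv2.COLOR_BGR2HSV))  # [[[120 255 255]]]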

Define the kernel used later for the morphological operations (erosion, opening, and dilation):
kernel = np.ones((5, 5), np.uint8)

Define the deques that store the drawn points of each colour (blue, green, red, and yellow):

bpoints = [deque(maxlen=512)]
gpoints = [deque(maxlen=512)]
rpoints = [deque(maxlen=512)]
ypoints = [deque(maxlen=512)]

bindex = 0
rindex = 0
yindex = 0
gindex = 0

colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 255, 255)]
colorIndex = 0
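
The drawing code further below also writes onto a separate paintWindow canvas whose creation is not shown in this excerpt. A minimal sketch of how such a canvas might be set up (the size and button layout here are assumptions chosen to mirror the rectangles later drawn on the camera feed):

# Assumed canvas setup: a white image with the same button layout as the live frame.
paintWindow = np.zeros((471, 636, 3), np.uint8) + 255
paintWindow = cv2.rectangle(paintWindow, (40, 1), (140, 65), (0, 0, 0), 2)
paintWindow = cv2.rectangle(paintWindow, (160, 1), (255, 65), colors[0], -1)
paintWindow = cv2.rectangle(paintWindow, (275, 1), (370, 65), colors[1], -1)
paintWindow = cv2.rectangle(paintWindow, (390, 1), (485, 65), colors[2], -1)
paintWindow = cv2.rectangle(paintWindow, (505, 1), (600, 65), colors[3], -1)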

Start Reading The Video (Frame by Frame)


# Load the video
camera = cv2.VideoCapture(0)

# Keep looping
while True:
    # Grab the current frame
    (grabbed, frame) = camera.read()

    # Check to see if we have reached the end of the video
    # (useful when the input is a video file rather than a live stream)
    if not grabbed:
        break

    frame = cv2.flip(frame, 1)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Add the same paint interface to the camera feed captured through the webcam (for ease of use)
    frame = cv2.rectangle(frame, (40, 1), (140, 65), (122, 122, 122), -1)
    frame = cv2.rectangle(frame, (160, 1), (255, 65), colors[0], -1)
    frame = cv2.rectangle(frame, (275, 1), (370, 65), colors[1], -1)
    frame = cv2.rectangle(frame, (390, 1), (485, 65), colors[2], -1)
    frame = cv2.rectangle(frame, (505, 1), (600, 65), colors[3], -1)
    cv2.putText(frame, "CLEAR ALL", (49, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, "BLUE", (185, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, "GREEN", (298, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, "RED", (420, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.putText(frame, "YELLOW", (520, 33), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (150, 150, 150), 2, cv2.LINE_AA)
Find The Contour-Of-Interest
    # Determine which pixels fall within the blue boundaries, then clean up the binary mask
    blueMask = cv2.inRange(hsv, blueLower, blueUpper)
    blueMask = cv2.erode(blueMask, kernel, iterations=2)
    blueMask = cv2.morphologyEx(blueMask, cv2.MORPH_OPEN, kernel)
    blueMask = cv2.dilate(blueMask, kernel, iterations=1)

    # Find contours in the mask
    (_, cnts, _) = cv2.findContours(blueMask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
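
The three-value unpacking above matches the OpenCV 3.x API; OpenCV 4.x returns only two values. If the installed version is uncertain, a version-agnostic variant (a small sketch, not part of the original code) is:

    # Handles both OpenCV 3.x (image, contours, hierarchy) and 4.x (contours, hierarchy).
    found = cv2.findContours(blueMask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = found[0] if len(found) == 2 else found[1]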

Start Drawing And Store The Drawings


    if len(cnts) > 0:
        # Sort the contours and find the largest one -- we assume this contour corresponds to the area of the bottle cap
        cnt = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
        # Get the radius of the enclosing circle around the found contour
        ((x, y), radius) = cv2.minEnclosingCircle(cnt)
        # Draw the circle around the contour
        cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 255), 2)
        # Get the moments to calculate the center of the contour (in this case a circle)
        M = cv2.moments(cnt)
        center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
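
The excerpt stops after computing the centre; to actually grow the drawn path, the centre has to be appended to the deque of the currently selected colour. A minimal sketch of that step (assuming colorIndex selects among the four deques, as the button layout suggests):

        # Sketch: append the tracked centre to the deque of the active colour.
        if colorIndex == 0:
            bpoints[bindex].appendleft(center)
        elif colorIndex == 1:
            gpoints[gindex].appendleft(center)
        elif colorIndex == 2:
            rpoints[rindex].appendleft(center)
        elif colorIndex == 3:
            ypoints[yindex].appendleft(center)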

Show The Drawings On The Screen


    # Draw lines of all the colors (Blue, Green, Red and Yellow)
    points = [bpoints, gpoints, rpoints, ypoints]
    for i in range(len(points)):
        for j in range(len(points[i])):
            for k in range(1, len(points[i][j])):
                if points[i][j][k - 1] is None or points[i][j][k] is None:
                    continue
                cv2.line(frame, points[i][j][k - 1], points[i][j][k], colors[i], 2)
                cv2.line(paintWindow, points[i][j][k - 1], points[i][j][k], colors[i], 2)

    # Show the frame and the paintWindow image
    cv2.imshow("Tracking", frame)
    cv2.imshow("Paint", paintWindow)

    # If the 'q' key is pressed, stop the loop
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
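
The excerpt ends inside the loop. Once the loop exits, the camera and the windows would normally be released; a minimal cleanup sketch, assuming standard OpenCV usage:

# Release the webcam handle and close all OpenCV windows after the loop ends.
camera.release()
cv2.destroyAllWindows()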
BLOCK DIAGRAM TO SHOW THE PROCESS BEHIND THE CODE
EVALUATION AND RESULTS

Object tracking as seen by the user: the algorithm can clearly track the object on the basis of its colour (blue).

Path mapping using the paint application: the algorithm further tracks the movement of the object and draws its path in the paint window.

Contour-of-interest tracking (defining edges): another example of how well the algorithm can differentiate colour even in a dimly lit environment.

How the software sees the object through the mask versus how the user sees it: the algorithm applies a mask to pick out contrasting objects. (Panels: what the software sees / what the human eye sees.)

Demo (application in road transportation to find objects)

Following is a demo of how this algorithm works in a real-time environment,
showing how the algorithm sees the path and recognizes different guidelines.
An AI could be trained to keep the car between the two white lines at all times
in a self-driven vehicle.

TABULAR COMPARISON WITH EXISTING WORK

Reference Number | Name of Model/System | Dataset Used | Brief Description | Performance
1 | Contour Based Object Tracking | Webcam and video files | Uses webcam to find contours in objects and tracks motion | Works well under simple conditions
2 | Edge Detection Techniques | Video files | Uses algorithm to find different objects in the frame | Can distinguish between different shapes & sizes
3 | OUR PROJECT | Webcam and video files | Uses algorithm to find different objects and track their path, along with contour tracking | Combines both techniques to bring out a sophisticated means to track motion and define the path
OVERALL DISCUSSION

This algorithm can be exploited in artificial intelligence systems to monitor,
learn, and differentiate between different objects and entities within a given
video frame.

It can be used by law enforcement to determine a suspect's location and the
path travelled, along with motion tracking.

It can be implemented in the medical field to distinguish dead tissue from
living tissue on the basis of colour and movement.

It can be used by automation technologies, such as machine-driven cars, to
determine their path and movement on roads.

It can also be applied to track the movement of aerial objects, or used in
telemetry to pinpoint the location of fast-moving objects.

The prepared algorithm is not limited to these applications; the potential
uses of this technology are extensive.
CONCLUSION

This is a simple demonstration of the image processing capabilities of OpenCV.

We see that OpenCV can be used to easily make tools that are interesting to
use and may come in handy for certain cases and certain people.

Our project focuses mainly on video tracking and contour-based object
detection techniques, which can be used to implement applications in fields
related to:

• Artificial intelligence networks
• Surveillance systems
• Law enforcement
• Medical science
• Space science
• Sports
• Self-driving cars and vehicles

to name a few.

With this project we wish to bring notice to the wide-ranging capabilities
of OpenCV as a tool to empower image processing in our day-to-day lives and
create a better life for everyone.
REFERENCES
https://docs.opencv.org/2.4/doc/tutorials/tutorials.html
https://docs.python.org/2/library/collections.html
https://www.youtube.com/watch?v=pSOPQuUFkmY

1. Shihu Zhu, "Edge Detection Based on Multi-structure Elements Morphology and Image Fusion", ICIE, IEEE 2nd International Conference, Vol. 2, 2011.

2. Ng Geok See and Chan Khuehiang, "Edge Detection using Supervised Learning and Voting Scheme", Nanyang Technological University, National University of Singapore, Singapore, 2010.

3. M. Rama Bai, Dr. V. Venkata Krishna and J. SreeDevi, "A New Morphological Approach for Noise Removal cum Edge Detection", IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 6, November 2010.

4. Wenshuo Gao, "An Improved Sobel Edge Detection", ICCSIT, Third IEEE International Conference, Vol. 5, 2006.

5. Zhengquan He, M. Y. Siyal, "An Image Detection Technique Based on Morphological Edge Detection and Background Differencing for Real-time Traffic Analysis", Elsevier Science Inc., New York, NY, USA, 2008.

6. Tzu-Heng Henry Lee, "Edge Detection Analysis", IJCSI International Journal of Computer Science Issues, Vol. 5, Issue 6, No. 1, September 2012.

7. Mitra Basu, Senior Member IEEE, "Gaussian-Based Edge-Detection Methods: A Survey", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 32, No. 3, August 2002.

8. Sabina Priyadarshini, Gadadhar Sahoo, "A New Edge Detection Method Based on Additions and Divisions", International Journal of Computer Applications (0975 – 8887), Volume 9, No. 10, November 2010.

9. Renyan Zhang, Guoling Zhao, Li Su, "New Edge Detection Method in Image Processing", College of Automation, Harbin Engineering University, China, 2009.

10. Stamatia Giannarou, Tania Stathaki, "Edge Detection Using Quantitative Combination of Multiple Operators", Communications and Signal Processing Group, Imperial College London, Exhibition Road, SW7 2AZ London, UK, 2008.

11. Mohamed A. El-Sayed, "A New Algorithm Based Entropic Threshold for Edge Detection in Images", IJCSI International Journal of Computer Science Issues, Vol. 8, No. 1, September 2011.
Appendix

OpenCV OpenCV is a library of programming functions mainly aimed at real-time


computer vision. Originally developed by Intel, it was later supported by Willow Garage then
Itseez. The library is cross-platform and free for use under the open-source BSD license.

Numpy NumPy is a library for the Python programming language, adding support for
large, multi-dimensional arrays and matrices, along with a large collection of high-level
mathematical functions to operate on these arrays.

Dlib Dlib is a general purpose cross-platform software library written in the programming
language C++. Its design is heavily influenced by ideas from design by contract and
component-based software engineering. Thus it is, first and foremost, a set of independent
software components.
