CCS349 - Image and Video Analytics
ANNA UNIVERSITY REGIONAL CAMPUS, COIMBATORE
LABORATORY RECORD
2023 – 2024
NAME : ______________________________
REGISTER NUMBER : ______________________________
BRANCH : B.E. - COMPUTER SCIENCE AND ENGINEERING
SUBJECT CODE : CCS349
SUBJECT TITLE : IMAGE AND VIDEO ANALYTICS
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
ANNA UNIVERSITY - REGIONAL CAMPUS
COIMBATORE - 641 046
ANNA UNIVERSITY
REGIONAL CAMPUS, COIMBATORE
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
BONAFIDE CERTIFICATE
Certified that this is the bonafide record of Practical done in CCS349 – IMAGE AND
VIDEO ANALYTICS LABORATORY by ________________________________
Register No.______________________ in Third Year - Sixth Semester during 2023 - 2024.
STAFF IN-CHARGE HEAD OF THE DEPARTMENT
University Register No: ………………………………………
Submitted for the University Practical Examination held on………………...
INTERNAL EXAMINER EXTERNAL EXAMINER
INDEX
EX. NO.    DATE    TITLE    PAGE NO.    MARKS    SIGN
EXPT NO.: T-PYRAMID OF AN IMAGE
DATE:
AIM:
To write a Python program for the T-pyramid of an image.
ALGORITHM:
1. Load the input image.
2. Construct the T-pyramid (a Gaussian pyramid) with 3 levels: each level is obtained by smoothing and downsampling the previous level with cv2.pyrDown().
3. As an extension, a Laplacian pyramid can be derived from the Gaussian pyramid: the topmost level is the same as the topmost Gaussian level, and each remaining level is constructed from top to bottom by subtracting the expanded (upsampled) next-higher Gaussian level from the current Gaussian level (see the sketch after this list).
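A minimal sketch of this Laplacian extension, assuming an arbitrary input image img.jpg (the main program below builds only the T-pyramid):

import cv2

image = cv2.imread("img.jpg")

# Gaussian (T-) pyramid: repeated smoothing and downsampling.
gaussian = [image]
for _ in range(2):
    gaussian.append(cv2.pyrDown(gaussian[-1]))

# Laplacian pyramid: L_i = G_i - expand(G_{i+1}); the topmost level
# equals the topmost Gaussian level.
laplacian = [gaussian[-1]]
for i in range(len(gaussian) - 1, 0, -1):
    size = (gaussian[i - 1].shape[1], gaussian[i - 1].shape[0])
    expanded = cv2.pyrUp(gaussian[i], dstsize=size)
    laplacian.append(cv2.subtract(gaussian[i - 1], expanded))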
PROGRAM:
import cv2

def build_t_pyramid(image, levels):
    # Level 0 is the original image; each higher level is downsampled by 2.
    pyramid = [image]
    for _ in range(levels - 1):
        image = cv2.pyrDown(image)
        pyramid.append(image)
    return pyramid

def main():
    image_path = "img.jpg"
    levels = 3
    original_image = cv2.imread(image_path)
    if original_image is None:
        print("Error: could not load the image")
        return
    t_pyramid = build_t_pyramid(original_image, levels)
    for i, level_image in enumerate(t_pyramid):
        cv2.imshow(f"Level {i}", level_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
OUTPUT:
RESULT:
Thus the Python program for the T-pyramid was implemented and the output was obtained successfully.
EXPT NO.: QUAD TREE REPRESENTATION
DATE:
AIM:
To write a Python program for the quad tree representation of an image using the homogeneity criterion of equal intensity.
ALGORITHM:
1. Divide the current two-dimensional space into four boxes (quadrants).
2. If a box contains one or more points, create a child object storing the two-dimensional space of that box.
3. If a box does not contain any points, do not create a child for it.
4. Recurse for each of the children (a recursive sketch using the equal-intensity criterion is shown after this list).
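The main program below demonstrates a single 4-way split and reassembly. A minimal sketch of the full recursive decomposition under the equal-intensity homogeneity criterion (the tolerance value and helper names are illustrative assumptions):

import cv2

def is_homogeneous(block, tol=10):
    # Homogeneity criterion: all pixels of (nearly) equal intensity.
    # tol is an assumed tolerance; tol=0 enforces strictly equal intensity.
    return int(block.max()) - int(block.min()) <= tol

def quadtree(block, x=0, y=0, nodes=None):
    # Recursively split a grayscale block into quadrants until every
    # leaf is homogeneous; collect (x, y, w, h, mean) leaf records.
    if nodes is None:
        nodes = []
    h, w = block.shape[:2]
    if h <= 1 or w <= 1 or is_homogeneous(block):
        nodes.append((x, y, w, h, float(block.mean())))
        return nodes
    h2, w2 = h // 2, w // 2
    quadtree(block[:h2, :w2], x, y, nodes)            # north-west
    quadtree(block[:h2, w2:], x + w2, y, nodes)       # north-east
    quadtree(block[h2:, :w2], x, y + h2, nodes)       # south-west
    quadtree(block[h2:, w2:], x + w2, y + h2, nodes)  # south-east
    return nodes

gray = cv2.imread("img.jpg", cv2.IMREAD_GRAYSCALE)
leaves = quadtree(gray)
print(f"Quad tree with {len(leaves)} leaf regions")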
PROGRAM:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from operator import add
from functools import reduce

# Read the image and convert BGR (OpenCV) to RGB for matplotlib display.
img = cv2.cvtColor(cv2.imread("img.jpg"), cv2.COLOR_BGR2RGB)

def split4(image):
    # Split the image into four quadrants: NW, NE, SW, SE.
    half_split = np.array_split(image, 2)
    res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
    return reduce(add, res)

split_img = split4(img)
print(split_img[0].shape, split_img[1].shape)

fig, axs = plt.subplots(2, 2)
axs[0, 0].imshow(split_img[0])
axs[0, 1].imshow(split_img[1])
axs[1, 0].imshow(split_img[2])
axs[1, 1].imshow(split_img[3])

def concatenate4(north_west, north_east, south_west, south_east):
    # Reassemble the four quadrants into the full image.
    top = np.concatenate((north_west, north_east), axis=1)
    bottom = np.concatenate((south_west, south_east), axis=1)
    return np.concatenate((top, bottom), axis=0)

full_img = concatenate4(split_img[0], split_img[1], split_img[2], split_img[3])
plt.figure()
plt.imshow(full_img)
plt.show()
OUTPUT:
RESULT:
Thus the Python program for quad tree representation was implemented and the output was obtained successfully.
EXPT NO.: GEOMETRIC TRANSFORMS
DATE:
AIM:
To develop programs for the following geometric transforms:
(a) Rotation.
(b) Change of scale.
(c) Skewing.
(d) Affine transform calculated from three pairs of corresponding points.
(e) Bilinear transform calculated from four pairs of corresponding points.
ALGORITHM:
TRANSFORMATION MATRICES:
For each desired transformation, create a corresponding transformation matrix. For
example:
1. Translation: Create a 3×3 matrix with 1s on the diagonal and the translation values in the last column.
2. Rotation: Compute the rotation matrix using trigonometric functions (sin and
cos) and the given rotation angle.
3. Scaling: Create a 3×3 matrix with scaling factors along the diagonal and 1 in
the last row and column.
4. Shearing: Create an affine transformation matrix with shear factors in the off-diagonal elements.
COMBINE TRANSFORMATION MATRICES:
5. Multiply the individual transformation matrices in the order you want to apply
them. Matrix multiplication is not commutative, so the order matters. The combined matrix
represents the sequence of transformations.
APPLY THE COMBINED TRANSFORMATION MATRIX:
In image processing, you can use libraries like OpenCV or Pillow to apply the
combined transformation matrix to the image. For example, in OpenCV:
6. Convert the 3×3 matrix to a 2×3 matrix by removing the last row.
7. Use cv2.warpAffine() for affine transformations or cv2.warpPerspective() for
projective transformations.
8. Provide the combined transformation matrix and the input image as arguments to apply the transformations (a sketch composing these steps follows this list).
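A minimal sketch of steps 5-7, assuming an arbitrary input image img.jpg: each elementary transform is written as a 3×3 homogeneous matrix, the matrices are multiplied into one combined matrix, and the top two rows are passed to cv2.warpAffine().

import cv2
import numpy as np

image = cv2.imread("img.jpg")
h, w = image.shape[:2]

# 3x3 homogeneous matrices for each elementary transform.
theta = np.radians(30)
rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
scale = np.array([[0.8, 0, 0],
                  [0, 0.8, 0],
                  [0, 0, 1]])
translate = np.array([[1, 0, w / 4],
                      [0, 1, h / 4],
                      [0, 0, 1]])

# Order matters (matrix multiplication is not commutative):
# scale first, then rotate, then translate.
combined = translate @ rotate @ scale

# Drop the last row to obtain the 2x3 matrix warpAffine expects.
result = cv2.warpAffine(image, combined[:2, :], (w, h))
cv2.imshow("Combined Transform", result)
cv2.waitKey(0)
cv2.destroyAllWindows()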
PROGRAM:
import cv2
import numpy as np

# (a) Rotation
def rotate_image(image, angle):
    height, width = image.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), angle, 1)
    rotated_image = cv2.warpAffine(image, rotation_matrix, (width, height))
    return rotated_image

# Usage
image = cv2.imread("img.jpg")
angle_degrees = 45
rotated = rotate_image(image, angle_degrees)
cv2.imshow("Rotated Image", rotated)
cv2.waitKey(0)
cv2.destroyAllWindows()

# (b) Change of scale
def scale_image(image, scale_x, scale_y):
    scaled_image = cv2.resize(image, None, fx=scale_x, fy=scale_y)
    return scaled_image

# Usage
image = cv2.imread("img.jpg")
scale_factor_x = 1.5
scale_factor_y = 1.5
scaled = scale_image(image, scale_factor_x, scale_factor_y)
cv2.imshow("Scaled Image", scaled)
cv2.waitKey(0)
cv2.destroyAllWindows()

# (c) Skewing
def skew_image(image, skew_x, skew_y):
    height, width = image.shape[:2]
    skew_matrix = np.float32([[1, skew_x, 0], [skew_y, 1, 0]])
    skewed_image = cv2.warpAffine(image, skew_matrix, (width, height))
    return skewed_image
# Usage
image = cv2.imread("img.jpg")
skew_factor_x = 0.2
skew_factor_y = 0.1
skewed = skew_image(image, skew_factor_x, skew_factor_y)
cv2.imshow("Skewed Image", skewed)
cv2.waitKey(0)
cv2.destroyAllWindows()

# (d) Affine transform from three pairs of corresponding points
def affine_transform(image, pts_src, pts_dst):
    matrix = cv2.getAffineTransform(pts_src, pts_dst)
    transformed_image = cv2.warpAffine(image, matrix,
                                       (image.shape[1], image.shape[0]))
    return transformed_image

# Usage
image = cv2.imread("img.jpg")
src_points = np.float32([[50, 50], [200, 50], [50, 200]])
dst_points = np.float32([[10, 100], [200, 50], [100, 250]])
affine_transformed = affine_transform(image, src_points, dst_points)
cv2.imshow("Affine Transformed Image", affine_transformed)
cv2.waitKey(0)
cv2.destroyAllWindows()

# (e) Bilinear transform from four pairs of corresponding points
# (implemented here as a projective/perspective transform, the standard
# OpenCV approach for mapping four point pairs)
def bilinear_transform(image, pts_src, pts_dst):
    matrix = cv2.getPerspectiveTransform(pts_src, pts_dst)
    transformed_image = cv2.warpPerspective(image, matrix,
                                            (image.shape[1], image.shape[0]))
    return transformed_image

# Usage
image = cv2.imread("img.jpg")
src_points = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
dst_points = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
bilinear_transformed = bilinear_transform(image, src_points, dst_points)
cv2.imshow("Bilinear Transformed Image", bilinear_transformed)
cv2.waitKey(0)
cv2.destroyAllWindows()
OUTPUT:
Rotation:
Skewing:
Change Of Scale:
Affine transform calculated from three pairs of corresponding points:
Bilinear Transform from Four Corresponding Points:
RESULT:
Thus the Python programs for the geometric transforms were implemented and the output was obtained successfully.
EXPT NO.: OBJECT DETECTION AND RECOGNITION
DATE:
AIM:
To develop a program to implement Object Detection and Recognition.
ALGORITHM:
1. The first step is to have Python installed on your computer. Download and install Python 3 (this record used Python 3.7.6) from the official Python website.
2. Once you have Python installed, install the following dependencies using pip:
TensorFlow:
$ pip install tensorflow
OpenCV:
$ pip install opencv-python
Keras:
$ pip install keras
ImageAI:
$ pip install imageai
3. Now download the TinyYOLOv3 model file (yolo-tiny.h5) that contains the pre-trained model that will be used for object detection.
4. Now let’s see how to actually use the ImageAI library.
We need the necessary folders:
Object detection: root folder.
models: stores pre-trained model.
input: stores image file on which we want to perform object detection.
output: stores image file with detected objects.
Input image:
5. Open your preferred text editor for writing Python code and create a new file detector.py.
6. Run the Python file detector.py.
PROGRAM:
# importing the required library
from imageai.Detection import ObjectDetection

# instantiating the class
detector = ObjectDetection()

# defining the paths
path_model = "yolo-tiny.h5"
path_input = "./Input/images.jpg"
path_output = "./Output/newimage.jpg"

# using the setModelTypeAsTinyYOLOv3() function
detector.setModelTypeAsTinyYOLOv3()

# setting the path of the model
detector.setModelPath(path_model)

# loading the model
detector.loadModel()

# calling the detectObjectsFromImage() function
detection = detector.detectObjectsFromImage(
    input_image=path_input,
    output_image_path=path_output
)

# iterating through the items found in the image
for eachItem in detection:
    print(eachItem["name"], " : ", eachItem["percentage_probability"])
OUTPUT:
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 393ms/step
car : 81.67955875396729
car : 86.47009134292603
car : 71.90941572189331
car : 51.41249895095825
car : 50.27420520782471
car : 54.530930519104004
person : 68.99164915084839
person : 85.42444109916687
car : 66.63046479225159
person : 73.05858135223389
person : 60.30835509300232
person : 74.38961267471313
person : 58.86450409889221
car : 82.88856148719788
car : 77.34288573265076
person : 69.11083459854126
person : 63.95843029022217
person : 62.82603144645691
person : 82.48097896575928
person : 84.3036949634552
person : 57.25393295288086
>>>
RESULT:
Thus the Python program for Object Detection and Recognition was implemented and the output was obtained successfully.
EXPT NO.: MOTION ANALYSIS USING MOVING EDGES
DATE:
AIM:
To develop a program for motion analysis using moving edges and apply it to image sequences.
ALGORITHM:
1. Open the video and read the first frame; convert it to grayscale.
2. For each subsequent frame, convert it to grayscale.
3. Apply Canny edge detection to both the previous and the current grayscale frames.
4. Compute the absolute difference between the two edge maps; the surviving edges correspond to moving edges.
5. Display the moving-edge image, update the previous frame, and repeat until the video ends or the user quits.
PROGRAM:
import cv2

# Function to perform motion analysis using moving edges
def motion_analysis(video_path):
    cap = cv2.VideoCapture(video_path)
    # Read the first frame
    ret, prev_frame = cap.read()
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        # Convert the current frame to grayscale
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Perform Canny edge detection on both frames
        edges_prev = cv2.Canny(prev_gray, 50, 150)
        edges_curr = cv2.Canny(gray, 50, 150)
        # Compute frame difference to detect moving edges
        frame_diff = cv2.absdiff(edges_prev, edges_curr)
        # Display the moving edges
        cv2.imshow('Moving Edges', frame_diff)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break
        # Update the previous grayscale image
        prev_gray = gray.copy()
    cap.release()
    cv2.destroyAllWindows()

# Replace with your video file path
video_path = "Human Analytics video.mp4"
motion_analysis(video_path)
OUTPUT:
RESULT:
Thus the Python program for motion analysis using moving edges was implemented and the output was obtained successfully.
EXPT NO.: FACIAL DETECTION AND RECOGNITION
DATE:
AIM:
To develop a program for Facial Detection and Recognition.
ALGORITHM:
Face Detection:
The first task is to detect faces in the image or video stream. Once we know the exact location/coordinates of a face, we extract that face region for further processing.
Feature Extraction:
Having cropped the face out of the image, we extract features from it. Here we use face embeddings: a neural network takes an image of the person's face as input and outputs a vector that represents the most important features of the face. In machine learning this vector is called an embedding, hence the term face embedding (a sketch of embedding extraction follows this paragraph).
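A minimal sketch of embedding extraction and comparison, assuming the third-party face_recognition package is installed (the image file names are illustrative assumptions):

import face_recognition

# Compute a 128-dimensional embedding for the first face in each image.
known = face_recognition.load_image_file("known_person.jpg")
unknown = face_recognition.load_image_file("unknown_person.jpg")
known_encoding = face_recognition.face_encodings(known)[0]
unknown_encoding = face_recognition.face_encodings(unknown)[0]

# A small Euclidean distance between embeddings suggests the same person.
match = face_recognition.compare_faces([known_encoding], unknown_encoding)
distance = face_recognition.face_distance([known_encoding], unknown_encoding)
print("Same person:", match[0], "distance:", distance[0])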
ARCHITECTURE:
Face Recognition:
Face recognition technology is a method of identifying or confirming an individual’s
identity using their face. It operates through biometric analysis, which involves measuring
and analysing specific biological characteristics.
1. Collect face images using OpenCV and save them in a folder.
2. Train an image classification model using Teachable Machine, a web-based tool by Google.
3. Download the model in Keras format and load it in Python.
4. Detect faces from a webcam and predict their names using the trained model.
PROGRAM:
Face Detection:
import cv2

# Load the pre-trained face detection classifier
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Load the image
image_path = 'img1.jpg'  # Replace with the path to your image
image = cv2.imread(image_path)

# Convert the image to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces in the image
faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

# Draw rectangles around the detected faces
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

# Display the result
cv2.imshow('Facial Detection', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Face Recognition:
Datacollect.py:
import cv2
import os

video = cv2.VideoCapture(0)
facedetect = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

count = 0
nameID = str(input("Enter Your Name: ")).lower()
path = 'images/' + nameID

isExist = os.path.exists(path)
if isExist:
    print("Name Already Taken")
    nameID = str(input("Enter Your Name Again: "))
else:
    os.makedirs(path)

while True:
    ret, frame = video.read()
    faces = facedetect.detectMultiScale(frame, 1.3, 5)
    for x, y, w, h in faces:
        count = count + 1
        name = './images/' + nameID + '/' + str(count) + '.jpg'
        print("Creating Images........." + name)
        cv2.imwrite(name, frame[y:y+h, x:x+w])
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3)
    cv2.imshow("WindowFrame", frame)
    cv2.waitKey(1)
    if count > 500:
        break

video.release()
cv2.destroyAllWindows()
test.py:
import numpy as np
import cv2
from keras.models import load_model

facedetect = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)
font = cv2.FONT_HERSHEY_COMPLEX

model = load_model('keras_model.h5', compile=False)

def get_className(classNo):
    if classNo == 0:
        return "Paranjothi Karthik"
    elif classNo == 1:
        return "virat"

while True:
    success, imgOriginal = cap.read()
    faces = facedetect.detectMultiScale(imgOriginal, 1.3, 5)
    for x, y, w, h in faces:
        crop_img = imgOriginal[y:y+h, x:x+w]  # fixed: width slice uses x+w
        img = cv2.resize(crop_img, (224, 224))
        img = img.reshape(1, 224, 224, 3)
        prediction = model.predict(img)
        # fixed: take the index of the most probable class
        classIndex = int(np.argmax(prediction))
        probabilityValue = np.amax(prediction)
        if classIndex == 0:
            cv2.rectangle(imgOriginal, (x, y), (x+w, y+h), (0, 255, 0), 2)
            cv2.rectangle(imgOriginal, (x, y-40), (x+w, y), (0, 255, 0), -2)
            cv2.putText(imgOriginal, str(get_className(classIndex)), (x, y-10),
                        font, 0.75, (255, 255, 255), 1, cv2.LINE_AA)
        elif classIndex == 1:
            cv2.rectangle(imgOriginal, (x, y), (x+w, y+h), (0, 255, 0), 2)
            cv2.rectangle(imgOriginal, (x, y-40), (x+w, y), (0, 255, 0), -2)
            cv2.putText(imgOriginal, str(get_className(classIndex)), (x, y-10),
                        font, 0.75, (255, 255, 255), 1, cv2.LINE_AA)
        cv2.putText(imgOriginal, str(round(probabilityValue*100, 2)) + "%", (180, 75),
                    font, 0.75, (255, 0, 0), 2, cv2.LINE_AA)
    cv2.imshow("Result", imgOriginal)
    k = cv2.waitKey(1)
    if k == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
OUTPUT:
RESULT:
Thus the Python program for Facial Detection and Recognition was implemented and the output was obtained successfully.
EXPT NO.: EVENT DETECTION IN VIDEO
SURVEILLANCE SYSTEM
DATE:
AIM:
To write a program for event detection in a video surveillance system.
ALGORITHM:
1. Preprocessing:
• This stage involves cleaning and preparing the data from sensors like cameras. This
might include noise reduction or format conversion.
2. Background Modeling:
• This step establishes a baseline for "normal" activity in the scene. It can use techniques such as:
• Frame differencing: compares consecutive video frames to detect changes (movement); a minimal sketch of this technique follows the list.
• Statistical methods: builds a model of the background based on pixel intensity variations over time.
3. Object Detection and Tracking:
• This stage identifies and tracks objects of interest (people, vehicles) in the scene.
Common techniques include:
• Background subtraction: Isolates foreground objects from the background model.
• Machine Learning: Employs algorithms like Support Vector Machines (SVMs) or
Convolutional Neural Networks (CNNs) to identify objects based on training data.
4. Event Definition and Classification:
• Here, the system analyzes object behavior and interactions to define events. This
might involve:
• Motion analysis: Tracks object movement patterns and speed.
• Object interaction: Analyzes how objects interact with each other or the environment
(e.g., entering restricted zones).
• Classification algorithms (e.g., decision trees, rule-based systems) then categorize
these events (loitering, fighting, etc.).
5. Decision Making and Alerting:
• Finally, the system evaluates the classified event's severity and triggers pre-defined
actions based on rules. This might involve:
• Generating alerts for security personnel.
• Recording video footage of the event.
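A minimal sketch of the frame-differencing technique mentioned under background modeling (the threshold value is an illustrative assumption; the main program below uses MOG2 background subtraction instead):

import cv2

cap = cv2.VideoCapture("human surveillance.mp4")
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels that changed between consecutive frames indicate motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow("Frame Differencing", motion_mask)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()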
PROGRAM:
import cv2

# Initialize video capture
video_capture = cv2.VideoCapture("human surveillance.mp4")  # Replace with your video file

# Initialize background subtractor
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

while video_capture.isOpened():
    ret, frame = video_capture.read()
    if not ret:
        break
    # Apply background subtraction
    fg_mask = bg_subtractor.apply(frame)
    # Apply thresholding to get a binary mask
    _, thresh = cv2.threshold(fg_mask, 50, 255, cv2.THRESH_BINARY)
    # Find contours
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        # Filter contours based on area (adjust the threshold as needed)
        if cv2.contourArea(contour) > 100:
            # Draw a bounding box around detected objects or events
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Display the processed frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release video capture and close OpenCV windows
video_capture.release()
cv2.destroyAllWindows()
OUTPUT:
RESULT:
Thus the Python program for event detection in a video surveillance system was implemented and the output was obtained successfully.