Animal Detection in Farms Using OpenCV 3
INTRODUCTION
One of the significant issues faced by farmers is damage to their crops by wild animals that intrude into their fields. Wild animal intrusion has always been a persistent problem for farmers. Some of the animals that pose a threat to crops are wild boar, deer, wild buffalo, elephants, tigers, monkeys and others. These animals may feed on the crops and also roam through the field in the absence of the farmer, thereby damaging the yield. This can result in significant loss of produce and imposes an additional financial burden on the farmer in order to deal with the consequences of the damage.
However, wildlife-friendly farming often results in lower efficiency. Efforts have therefore been made to develop automatic systems capable of detecting wild animals in the crop without unnecessary interruption of the farming activity. For example, a detection system based on infrared sensors has been reported to reduce wildlife mortality in Germany [1]. In [2], a UAV-based system for detecting roe deer fawns in fields was described.
OBJECTIVE
The primary goal of the project is to safeguard the farming field from wild animals and also to protect the animals themselves by driving them away rather than killing them. The project additionally aims to protect human lives from animal attacks. We use an integrated deep learning approach to provide a monitoring and repelling system for crop protection against animal intrusion.
EXISTING SYSTEM
The existing systems essentially provide only surveillance functionality. Moreover, these systems do not provide protection from wild animals, which is essential in such an application area. They also fail to take actions based on the type of animal that attempts to enter the area, even though different techniques are needed to keep different animals out of such restricted regions. Farmers likewise resort to measures such as erecting human effigies (scarecrows) in their fields, which is ineffective in warding off wild animals, although it is useful to some extent in deterring birds. Other commonly used methods to prevent crop vandalization by animals include building physical barriers, using electric fences and manual surveillance, all of which are laborious and often risky.
Successful farmers generally seek to achieve an acceptable degree of crop protection from wild animals using one of the following approaches:
1. Agricultural fences
Electric fences
Plastic fences
Wire fences
Wood fences
2. Natural repellents
Lavender and beans
Chilli peppers
Garlic emulsion
Smoke Fish
Egg-based repellent
SCOPE OF STUDY
To develop this system we first have to collect a dataset. Here our dataset consists of images of wild animals belonging to 8 different classes. After collection, the images go through a pre-processing step, also called annotation, which yields the final dataset for training. This dataset contains the class file, the images and a txt file for each image, which is created automatically after successful annotation. We can then train the model; we use Google Colab for both training and testing. During training, the model generates weight files that are later used for testing. The training process may take approximately 12-15 hours, but testing can be performed quickly with the help of the trained weights. During testing, the system examines an image, looks for a matching class and predicts an output with a confidence close to 1. The predicted class name is then sent to Firebase, a real-time database, and is further pushed as a notification to the user's device, as sketched below.
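As a minimal illustration of the final notification step, the following Python sketch pushes a predicted class name to a Firebase Realtime Database through its public REST API. The database URL and the "detections" node name are hypothetical placeholders, not values from the deployed system.

import time
import requests  # pip install requests

# Hypothetical Firebase Realtime Database URL; replace with the project's own URL.
FIREBASE_DB_URL = "https://example-farm-monitor-default-rtdb.firebaseio.com"

def push_detection(class_name, confidence):
    """Append one detection record under the /detections node."""
    record = {
        "animal": class_name,            # e.g. "elephant"
        "confidence": float(confidence), # predicted score, close to 1 for good matches
        "timestamp": int(time.time()),
    }
    # POSTing to <node>.json appends the record under an auto-generated key.
    response = requests.post(f"{FIREBASE_DB_URL}/detections.json", json=record, timeout=10)
    response.raise_for_status()
    return response.json()["name"]       # key of the stored record

# Example: after the detector predicts a class with confidence near 1
push_detection("wild boar", 0.97)

A listener on the user's device (or a cloud function) can then turn each new record into a notification.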
DATASET
An image dataset is used for this model. We use 8 classes: elephant, tiger, leopard, wild boar, deer, wild buffalo, monkey and peacock. These animals are among the main intruders of agricultural areas and are also a threat to human life. The dataset is divided into 2 parts: the majority is used for training and the remainder for testing (a simple split sketch is given after the list below).
The training dataset includes:
Elephant – 523 images
Tiger – 550 images
Leopard – 670 images
Wild boar – 520 images
Deer – 560 images
Wild buffalo – 534 images
Monkey – 600 images
Peacock – 533 images
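As an illustration of the split mentioned above, the following sketch divides the annotated image paths into training and testing lists in the plain-text format expected by Darknet-style YOLO training. The 90/10 ratio and the directory name are assumptions for illustration, not values stated here.

import glob
import random

IMAGE_DIR = "dataset/images"  # assumed folder; each image has a matching .txt annotation file
SPLIT_RATIO = 0.9             # assumed: 90% of images for training, 10% for testing

paths = sorted(glob.glob(f"{IMAGE_DIR}/*.jpg"))
random.seed(42)               # fixed seed so the split is reproducible
random.shuffle(paths)

cut = int(len(paths) * SPLIT_RATIO)
with open("train.txt", "w") as f:
    f.write("\n".join(paths[:cut]) + "\n")
with open("test.txt", "w") as f:
    f.write("\n".join(paths[cut:]) + "\n")

print(len(paths[:cut]), "training images,", len(paths[cut:]), "testing images")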
YOLOv3
Object classification systems are used by AI applications to identify specific objects of interest within a class. These systems sort objects in images into groups in which objects with similar characteristics are placed together, while other objects are ignored unless programmed otherwise. As is typical for object detectors, the features learned by the convolutional layers are passed to a classifier which makes the detection prediction. YOLO performs this prediction with a convolutional layer that relies on 1×1 convolutions.
YOLO is named "you only look once" because its prediction uses 1×1 convolutions: the size of the prediction map is exactly the size of the feature map before it. YOLO is a CNN designed to perform object detection in real time. CNNs are classifier-based systems that can process input images as structured arrays of data and recognize patterns between them. YOLO has the advantage of being much faster than other networks while still maintaining accuracy. It allows the model at test time to look at the whole image, so its predictions are informed by the global context of the image. YOLO and other CNNs "score" regions based on their similarity to predefined classes.
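Because the prediction is made by a 1×1 convolution over the feature map, the number of output channels is fixed by the number of anchor boxes and classes. Assuming the standard YOLOv3 configuration of 3 anchors per detection scale, the 8 classes used in this work would require (5 + 8) × 3 = 39 filters in each prediction layer; the short check below only restates this arithmetic, and the anchor count is an assumption, not a value reported here.

NUM_CLASSES = 8        # elephant, tiger, leopard, wild boar, deer, wild buffalo, monkey, peacock
ANCHORS_PER_SCALE = 3  # YOLOv3 default

# Each predicted box carries 4 coordinates + 1 objectness score + one score per class.
filters = (5 + NUM_CLASSES) * ANCHORS_PER_SCALE
print(filters)  # 39 -> the 'filters' value before each [yolo] layer in the .cfg file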
The first step in using YOLOv3 is to decide on a specific object detection project. YOLOv3 performs real-time detections, so choosing a simple project with an easy premise, such as detecting a certain kind of animal or car in a video, is ideal for beginners getting started with YOLOv3. In this section, we go over the essential steps and what you need to know to use YOLOv3 successfully.
MODEL WEIGHTS
Weights and cfg (configuration) files can be downloaded from the website of the original creator of YOLOv3: https://pjreddie.com/darknet/yolo. You can also (more easily) use YOLO's COCO pretrained weights by initializing the model with model = YOLOv3(). Using COCO's pretrained weights means that you can only use YOLO for object detection with the 80 pretrained classes that come with the COCO dataset. This is a good option for beginners because it requires the least amount of new code and customization.
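Since the system is built around OpenCV, one common way to load a cfg/weights pair is through OpenCV's dnn module, as sketched below. The file names are assumptions (the custom 8-class files produced by training, or the official ones from the link above); the loading code actually used in the project may differ.

import cv2  # OpenCV 3.4+ provides the dnn module used here

CFG_PATH = "yolov3-animals.cfg"          # assumed custom 8-class configuration
WEIGHTS_PATH = "yolov3-animals.weights"  # assumed trained weights
NAMES_PATH = "animals.names"             # one class name per line, e.g. "elephant"

# Load the Darknet model into OpenCV's DNN framework and run it on the CPU.
net = cv2.dnn.readNetFromDarknet(CFG_PATH, WEIGHTS_PATH)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

with open(NAMES_PATH) as f:
    class_names = [line.strip() for line in f if line.strip()]
print(len(class_names), "classes loaded")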
MAKING A PREDICTION
The convolutional layers included in the YOLOv3 architecture produce a detection prediction after passing the learned features to a classifier or regressor. These outputs include the class label, the coordinates of the bounding boxes, the sizes of the bounding boxes, and more. In YOLOv3 and its other versions, the prediction map is interpreted such that each cell predicts a fixed number of bounding boxes. Whichever cell contains the centre of the ground-truth box of an object of interest is then designated as the cell ultimately responsible for predicting that object. There is a considerable amount of mathematics behind the inner workings of this prediction design.
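As a sketch of how such a prediction can be obtained and decoded with OpenCV's dnn module (continuing from the hypothetical net and class_names loaded in the previous section), the code below runs a forward pass on one image and extracts class names, confidences and box coordinates. The 416×416 input size and the 0.5/0.4 thresholds are conventional defaults, not values reported here.

import cv2
import numpy as np

def detect(image_path, net, class_names, conf_threshold=0.5, nms_threshold=0.4):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]

    # Standard YOLOv3 preprocessing: scale pixels to [0, 1], resize to 416x416, swap BGR -> RGB.
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences, class_ids = [], [], []
    for output in outputs:      # one output array per detection scale
        for det in output:      # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
                class_ids.append(class_id)

    # Non-maximum suppression removes overlapping duplicate boxes.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [(class_names[class_ids[i]], confidences[i], boxes[i])
            for i in np.array(keep).flatten()]

# Example: detections = detect("field_frame.jpg", net, class_names)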
LOSS FUNCTION
YOLO uses the sum-squared error between the predictions (the one with the highest IoU) and the ground truth to calculate the loss. The loss function is composed of:
Classification loss
$$\sum_{i=0}^{S^2} \mathbb{1}_i^{obj} \sum_{c \in \text{classes}} \big(p_i(c) - \hat{p}_i(c)\big)^2$$
where $\mathbb{1}_i^{obj} = 1$ if an object appears in cell $i$, otherwise 0;
$\hat{p}_i(c)$ denotes the conditional class probability for class $c$ in cell $i$.
Localization loss
$$\lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \big[(x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2\big] + \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \big[(\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2\big]$$
where $\mathbb{1}_{ij}^{obj} = 1$ if the $j$th boundary box in cell $i$ is responsible for detecting the object, otherwise 0;
$\lambda_{coord}$ increases the weight for the loss in the boundary box coordinates.
YOLO predicts the square root of the bounding box width and height in order to differentiate large and small boxes. By setting $\lambda_{coord}$ (default: 5), we put more emphasis on the boundary box accuracy.
Confidence loss
$$\sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \big(C_i - \hat{C}_i\big)^2$$
where $\mathbb{1}_{ij}^{obj} = 1$ if the $j$th boundary box in cell $i$ is responsible for detecting the object, otherwise 0;
$\hat{C}_i$ is the box confidence score of box $j$ in cell $i$.
However, if an object is not detected:
$$\lambda_{backg} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{backg} \big(C_i - \hat{C}_i\big)^2$$
where $\mathbb{1}_{ij}^{backg}$ is the complement of $\mathbb{1}_{ij}^{obj}$;
$\hat{C}_i$ is the box confidence score of box $j$ in cell $i$;
$\lambda_{backg}$ weights down the loss when detecting background.
As most boxes do not contain any objects, we weight the loss down by a factor $\lambda_{backg}$ (default: 0.5) to balance the weight.
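For concreteness, the sketch below restates the three loss terms as plain NumPy over per-cell, per-box arrays. The tensor layout and the two responsibility masks are assumptions made for illustration; the λ values follow the defaults quoted above (5 and 0.5).

import numpy as np

LAMBDA_COORD = 5.0  # weight on the localization terms (default quoted above)
LAMBDA_BACKG = 0.5  # weight on the background confidence term (default quoted above)

def yolo_loss(pred_xy, true_xy, pred_wh, true_wh,
              pred_conf, true_conf, pred_cls, true_cls,
              resp_mask, cell_obj_mask):
    """Sum-squared YOLO loss.
    Assumed shapes: xy and wh arrays -> (S^2, B, 2); conf -> (S^2, B);
    cls -> (S^2, C); resp_mask -> (S^2, B), 1 where box j of cell i is responsible;
    cell_obj_mask -> (S^2,), 1 where an object appears in cell i."""
    # Localization loss: only the responsible box in each cell contributes.
    loc = LAMBDA_COORD * np.sum(resp_mask[..., None] * (pred_xy - true_xy) ** 2)
    loc += LAMBDA_COORD * np.sum(resp_mask[..., None] * (np.sqrt(pred_wh) - np.sqrt(true_wh)) ** 2)

    # Confidence loss: responsible boxes, plus down-weighted background boxes (complement mask).
    conf_obj = np.sum(resp_mask * (pred_conf - true_conf) ** 2)
    conf_bg = LAMBDA_BACKG * np.sum((1.0 - resp_mask) * (pred_conf - true_conf) ** 2)

    # Classification loss: per cell, only where an object appears.
    cls_loss = np.sum(cell_obj_mask[:, None] * (pred_cls - true_cls) ** 2)

    return loc + conf_obj + conf_bg + cls_loss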
CONCLUSION
The problem of crop destruction by wild animals has become a significant social issue in the present time. It requires urgent attention and an effective solution. This project therefore carries great social significance, as it aims to address this issue. We have designed a smart, embedded farmland protection and surveillance system which is low cost and also consumes little energy. The main aim is to prevent the loss of crops and to protect the area from intruders and wild animals, which pose a major threat to agricultural regions. Such a system will help farmers protect their orchards and fields, save them from significant financial losses and spare them the unproductive effort they expend on guarding their fields. The system will also help them achieve better crop yields, thus contributing to their economic well-being.
REFERENCES
[2] Israel, M., "A UAV-Based Roe Deer Fawn Detection System," in Proceedings of the International Conference on Unmanned Aerial Vehicle in Geomatics (UAV-g), Zurich, Switzerland, 14–16 September 2011, pp. 1–5.
[3] https://www.itsrm.org/itd-exploring-wildlifedetection-system-in-northern-idaho-to-improve-driver-safety/
[4] A. Mammerri, "Multi-static radar for detection of wild animals," Christ Church University, U.K.
[5] 'News Report about wild animal attacks in agriculture field', http://www.tribuneindia.com/news/himachal/