Project Final Out

The document provides an overview of machine learning, emphasizing its ability to learn from data without explicit programming, and discusses its applications in recommendation systems, fraud detection, and more. It contrasts traditional programming with machine learning, highlighting the unsustainability of coding all rules as systems grow complex. Additionally, it outlines hardware and software requirements for machine learning systems, discusses existing and proposed systems for bird sound classification, and details design considerations for user interfaces and data input.

CHAPTER 1

1. INTRODUCTION

1.1 Overview

Machine learning is a family of computer algorithms that learn from examples through self-improvement, without being explicitly coded by a programmer. It is a branch of artificial intelligence that combines data with statistical tools to predict an output that can be turned into actionable insights. The breakthrough comes with the idea that a machine can learn directly from data (i.e., examples) to produce accurate results. Machine learning is closely related to data mining and Bayesian predictive modeling: the machine receives data as input and uses an algorithm to formulate answers.

A typical machine learning task is to provide a recommendation. For anyone with a Netflix account, all recommendations of movies or series are based on the user's historical data. Tech companies use unsupervised learning to improve the user experience with personalized recommendations. Machine learning is also used for a variety of tasks such as fraud detection, predictive maintenance, portfolio optimization, task automation, and so on.

Traditional programming differs significantly from machine learning. In traditional programming, a programmer codes all the rules in consultation with an expert in the industry for which the software is being developed. Each rule rests on a logical foundation, and the machine executes an output following each logical statement. As the system grows more complex, more rules need to be written, which can quickly become unsustainable to maintain.

Unsupervised learning had a catalytic effect in reviving interest in deep learning but has since been overshadowed by the successes of purely supervised learning. Even so, unsupervised learning is expected to become far more important in the longer term: human and animal learning is largely unsupervised, in that we discover the structure of the world by observing it, not by being told the name of every object. Human vision is an active process that sequentially samples the optic array in an intelligent, task-specific way, using a small, high-resolution fovea with a large, low-resolution surround. Much of the future progress in vision is expected to come from systems that are trained end-to-end and combine convolutional networks with RNNs that use reinforcement learning to decide where to look. Systems combining deep learning and reinforcement learning are in their infancy, but they already outperform passive vision systems at classification tasks and produce impressive results in learning to play many different video games. Natural language understanding is another area in which deep learning is poised to make a large impact over the next few years. Systems that use RNNs to understand sentences or whole documents will become much better when they learn strategies for selectively attending to one part at a time. Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning.

1.2 SYSTEM SPECIFICATION

1.2.1 HARDWARE REQUIREMENTS:

 System : Intel Core i3 processor
 Hard Disk : 500 GB
 RAM : 4 GB
 Monitor : 15'' LED
 Input Devices : Keyboard, Mouse

1.2.2 SOFTWARE REQUIREMENTS:

 Operating System : Windows 10
 Coding Language : Python
 Web Framework : Flask
1.3 SOFTWARE SPECIFICATION
The Python language has a substantial body of documentation, much of it contributed by various authors. The markup used for the Python documentation is reStructuredText, developed by the docutils project, amended by custom directives, and using a toolset named Sphinx to post-process the HTML output.

This document describes the style guide for our documentation as well as the custom reStructuredText markup introduced by Sphinx to support Python documentation, and how it should be used.

The documentation in HTML, PDF or EPUB format is generated from text files written in the reStructuredText format and contained in the CPython Git repository.

Introduction

Python’s documentation has long been considered to be good for a free programming
language. There are a number of reasons for this, the most important being the early commitment
of Python’s creator, Guido van Rossum, to providing documentation on the language and its
libraries, and the continuing involvement of the user community in providing assistance for
creating and maintaining documentation.

The involvement of the community takes many forms, from authoring to bug reports to just
plain complaining when the documentation could be more complete or easier to use.

This document is aimed at authors and potential authors of documentation for Python. More
specifically, it is for people contributing to the standard documentation and developing additional
documents using the same tools as the standard documents. This guide will be less useful for
authors using the Python documentation tools for topics other than Python, and less useful still for
authors not using the tools at all.

If your interest is in contributing to the Python documentation, but you don’t have the time
or inclination to learn reStructuredText and the markup structures documented here, there’s a
welcoming place for you among the Python contributors as well. Any time you feel that you can
clarify existing documentation or provide documentation that’s missing, the existing
documentation team will gladly work with you to integrate your text, dealing with the markup for
you. Please don’t let the material in this document stand between the documentation and your desire
to help out!

Use of white space

All files use an indentation of 3 spaces; no tabs are allowed. The maximum line length is 80
characters for normal text, but tables, deeply indented code samples and long links may extend
beyond that. Code example bodies should use normal Python 4-space indentation. Make generous
use of blank lines where applicable; they help group things together.

A sentence-ending period may be followed by one or two spaces; while reST ignores the
second space, it is customarily put in by some users, for example to aid Emacs’ auto-fill mode.

Footnotes

Footnotes are generally discouraged, though they may be used when they are the best way to present specific information. When a footnote reference is added at the end of a sentence, the reST markup should follow the sentence-ending punctuation.

Footnotes should be gathered at the end of a file, or if the file is very long, at the end of a
section. The docutils will automatically create backlinks to the footnote reference. Footnotes may
appear in the middle of sentences where appropriate.

Capitalization

In the Python documentation, the use of sentence case in section titles is preferable, but
consistency within a unit is more important than following this rule. If you add a section to a chapter
where most sections are in title case, you can either convert all titles to sentence case or use the
dominant style in the new section title.

Many special names are used in the Python documentation, including the names of
operating systems, programming languages, standards bodies, and the like. Most of these entities
are not assigned any special markup, but the preferred spellings are given here to aid authors in
maintaining the consistency of presentation in the Python documentation. Other terms and words
deserve special mention as well; these conventions should be used to ensure consistency throughout
the documentation:

CPU

For “central processing unit.” Many style guides say this should be spelled out on the first use (and if you must use it, do so!). For the Python documentation, this abbreviation should be avoided, since there’s no reasonable way to predict which occurrence will be the first seen by the reader. It is better to use the word “processor” instead.

POSIX

The name assigned to a particular group of standards. This is always uppercase.

Python

The name of our favorite programming language is always capitalized.

reST

For “reStructuredText,” an easy to read, plaintext markup syntax used to produce Python documentation. When spelled out, it is always one word and both forms start with a lower case ‘r’.

Unicode

The name of a character coding system. This is always written capitalized.

Unix

The name of the operating system developed at AT&T Bell Labs in the early 1970s.
Affirmative Tone

The documentation focuses on affirmatively stating what the language does and how to use
it effectively.

Except for certain security or segfault risks, the docs should avoid wording along the lines
of “feature x is dangerous” or “experts only”. These kinds of value judgments belong in external
blogs and wikis, not in the core documentation.

Bad example (creating worry in the mind of a reader):

Warning: failing to explicitly close a file could result in lost data or excessive resource
consumption. Never rely on reference counting to automatically close a file.

Good example (establishing confident knowledge in the effective use of the language):

A best practice for using files is to use a try/finally pair to explicitly close a file after it is used. Alternatively, a with-statement can achieve the same effect. This assures that files are flushed and file descriptor resources are released in a timely manner.
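The recommended pattern can be sketched as follows (the filename data.txt and its contents are hypothetical stand-ins):

```python
# try/finally guarantees the file is closed even if processing raises.
f = open("data.txt", "w")
try:
    f.write("hello\n")
finally:
    f.close()

# The with-statement achieves the same effect more concisely: the file is
# flushed and its descriptor released as soon as the block exits.
with open("data.txt") as f:
    contents = f.read()
print(contents)  # hello
```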

Economy of Expression

More documentation is not necessarily better documentation. Err on the side of being
succinct.

It is an unfortunate fact that making documentation longer can be an impediment to


understanding and can result in even more ways to misread or misinterpret the text. Long
descriptions full of corner cases and caveats can create the impression that a function is more
complex or harder to use than it actually is.

Security Considerations (and Other Concerns)

Some modules provided with Python are inherently exposed to security issues (e.g. shell
injection vulnerabilities) due to the purpose of the module (e.g. ssl). Littering the documentation
of these modules with red warning boxes for problems that are due to the task at hand, rather than
specifically to Python’s support for that task, doesn’t make for a good reading experience.

Instead, these security concerns should be gathered into a dedicated “Security


Considerations” section within the module’s documentation, and cross-referenced from the
documentation of affected interfaces.

Similarly, if there is a common error that affects many interfaces in a module (e.g. OS
level pipe buffers filling up and stalling child processes), these can be documented in a “Common
Errors” section and cross-referenced rather than repeated for every affected interface.

Code Examples

Short code examples can be a useful adjunct to understanding. Readers can often grasp a
simple example more quickly than they can digest a formal description in prose.

People learn faster with concrete, motivating examples that match the context of a typical
use case. For instance, the str.rpartition() method is better demonstrated with an example splitting
the domain from a URL than it would be with an example of removing the last word from a line of
Monty Python dialog.

The ellipsis for the sys.ps2 secondary interpreter prompt should only be used sparingly,
where it is necessary to clearly differentiate between input lines and output lines. Besides
contributing visual clutter, it makes it difficult for readers to cut-and-paste examples so they can
experiment with variations.

CHAPTER 2

2. SYSTEM STUDY

2.1. EXISTING SYSTEM

The existing system uses a lightweight CNN (LWCNN) architecture with VGG for crowd-counting purposes. In the front end, VGG-16 provides 10 convolution layers and 3 max-pooling layers; with a compressed convolution depth of 6 and a dilation factor of 2, it is feasible to reliably count the number of individuals present in a crowd. The Extreme Learning Machine (ELM) algorithm has been suggested to overcome drawbacks of feed-forward neural networks, such as slow computation, using variants such as Evolutionary ELM, Voting-based ELM, Ordinal ELM, Fully Complex ELM, Symmetric ELM, etc. The accuracy of the ELM-based classification algorithm is 94.10%. The dataset used contains 400 bird sound recordings in total, covering four birds: cuckoo, sparrow, crow, and laughing dove, each with an input space of 100 recordings. The bird sound recordings were collected from xeno-canto.com, a site devoted to sharing bird sounds from throughout the world. Each clip is between 5 and 20 seconds long and is transformed to a fixed sampling frequency of 44100 Hz or 48000 Hz in order to preserve diversity and avoid overfitting. The data for these examples comes from the Google Recording and LibriSpeech ASR datasets.

2.1.1 DRAWBACKS OF EXISTING SYSTEM

 The existing system is time-consuming; it takes a long time to analyze the sounds of the birds.

 The existing system usually requires much more data; a large amount of training data is required.

 The existing system is also more computationally expensive.

 The existing system can take several weeks to train on the dataset completely from scratch.

 The amount of computational power needed is also higher.

2.2. PROPOSED SYSTEM

The first step of implementation is gathering data from the dataset, which is obtained from Kaggle. The dataset contains audio recordings of the birds in .wav format. Kaggle is an open website dedicated to datasets, where users upload their own recordings. Since many features are defined in the dataset, combinations of them are used to define classes (like genus, species, etc.) and to classify birds accordingly.

An Artificial Neural Network (ANN) classification algorithm is a popular method for analyzing and recognizing bioacoustic signals. As the classification model, a multilayer perceptron (MLP) is used. The MLP takes a set of predetermined attributes as input and produces a unique outcome for each bird species to be identified. This identification procedure has two steps: training and testing. In the training process, syllables of specified bird sounds are used to train the multilayer perceptron so that the correct MLP output is triggered. Training is carried out by repeatedly presenting known sounds to the network and iteratively adjusting the network's weights. The goal of this training is to lower the total error between the produced and expected results until a predefined error requirement is met. For the output, the user can use a GUI (Graphical User Interface) to analyze the species of the bird. With the help of the GUI, the user can upload the dataset, process it, and show the outcome.

2.2.1 ADVANTAGES OF PROPOSED SYSTEM

 The proposed system requires less time to analyze the sounds of the birds.

 The proposed system can work with a smaller amount of training data.

 The proposed system is less expensive compared to the existing system models.

 The proposed system may take only a few hours to train on the dataset completely from scratch, which is much less than the existing system model.

 The proposed system requires only a small amount of computational power.

 The proposed system has good fault tolerance.

 The proposed system also has good distributed memory.

CHAPTER 3

3. SYSTEM DESIGN AND DEVELOPMENT

3.1 FILE DESIGN


A file design is a selection list that simplifies computer data access or entry. Instead of remembering what to enter, the user chooses from a list of options and types the letter associated with an option. A menu limits a user’s choice of responses but reduces the chance of errors in data entry. Menu design is the process of designing menus in graphical user interface software to help the user by providing a user-friendly interface. Menus have been designed and placed at the top of each form. On selecting them, they provide a drop-down list indicating the items present on that particular menu. Shortcut keys, mentioned in the menu alongside the specific key, help jump directly to a particular form.

3.2 INPUT DESIGN


The input design is the link between the information system and the user. It comprises the specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing, which can be achieved either by having the computer read data from a written or printed document or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed to provide security and ease of use while retaining privacy.

Input design considered the following things:

 What data should be given as input?

 How should the data be arranged or coded?

 The dialog to guide the operating personnel in providing input.

 Methods for preparing input validations, and steps to follow when errors occur.

3.3 OUTPUT DESIGN


A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system’s relationship with the user and supports decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create documents, reports, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following objectives:

 Convey information about past activities, current status, or projections of the future.

 Signal important events, opportunities, problems, or warnings.

 Trigger an action.

 Confirm an action.

3.4 CODE DESIGN


The source code has been generated so that the function of a module is apparent without reference to a design specification, and the coding is understandable. The coding style encompasses a coding philosophy that stresses simplicity and clarity. The elements of style include the following:

1. Code Documentation

2. Data Declaration

3. Statement Construction

4. Input/Output Style

Coding is the most important task, which ensures the quality of the product. The product should be designed with low complexity, for example by eliminating unnecessary loops, jump statements, endless loops, etc. Using enhanced methods such as functions, structures, etc. helps improve the quality of the product.

3.5 DATABASE DESIGN


A database is a collection of stored data organized in such a way that all the data requirements are satisfied. To design the database and the tables used in the system, MS Access provides optional facilities which aid and control each user's access to the database for adding, modifying, and retrieving data, and which facilitate data independence, integrity, and security.

3.6 SYSTEM DEVELOPMENT


 The system is more user-friendly.

 The system is flexible enough to undergo extension.

 The system produces well-formatted output displays.

 The system provides high uniformity among all screen formats.

 The system works with high speed and accuracy.

3.6.1 DESCRIPTION OF THE MODULES

⚫ Dataset

⚫ Importing the necessary libraries

⚫ Exploratory Data Analysis of Audio data

⚫ Imbalance Dataset check

⚫ Data Preprocessing

⚫ Splitting the dataset

⚫ Audio Classification Model Creation

⚫ Compile the Model

⚫ Train the Model

⚫ Check the Test Accuracy

⚫ Saving the Trained Model

Dataset:

In the first module, we developed the system to get the input dataset for the training and testing purpose. The dataset for Bird Sound Classification Using Deep Learning consists of 5,422 samples and is available on Kaggle itself.

Importing the necessary libraries:

The most important library supporting audio and music analysis is Librosa. Simply use the pip command to install it. It provides the building blocks required to construct an information retrieval model from music.

Exploratory Data Analysis of Audio data


We have 5 different folders under the urban dataset folder. Before applying any preprocessing, we will try to understand how to load audio files and how to visualize them in the form of a waveform. If you want to load an audio file and listen to it, you can use the IPython library and directly give it an audio file path. We have taken the first audio file in the fold 1 folder, which belongs to the dog bark category. Now we will use Librosa to load audio data. When we load any audio file with Librosa, it gives us two things: one is the sample rate, and the other is a two-dimensional array. Let us load the above audio file with Librosa and plot the waveform using Librosa.

Sample rate – It represents how many samples are recorded per second. The default sampling rate with which librosa reads a file is 22050. The sample rate differs by the library you choose.

2-D Array – The first axis represents the recorded amplitude samples, and the second axis represents the number of channels. There are different types of channels – monophonic (audio that has one channel) and stereo (audio that has two channels). When we load the data with librosa, it normalizes the entire data and tries to give it at a single sample rate. We can achieve the same using the scipy Python library; it will also give us two pieces of information – one is the sample rate, and the other is the data.

When you print the sample rate using scipy, it is different from librosa. Now let us visualize the wave audio data. One important thing to understand between both is: when we print the data retrieved from librosa, it is normalized, but when we read an audio file using scipy, it is not normalized. Librosa is now getting popular for audio signal processing because of the following three reasons:

1. It tries to converge the signal into mono (one channel).

2. It can represent the audio signal between -1 and +1 (in normalized form), so a regular pattern is observed.

3. By default, it converts the sample rate to 22 kHz, while other libraries read it according to the file's own value.
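The contrast above can be sketched as follows. The synthetic tone and the filename tone.wav are stand-ins for a real bird recording; scipy returns the raw integer samples, while librosa would return normalized floats:

```python
import wave
import numpy as np
from scipy.io import wavfile

# Write a short 440 Hz test tone as a 16-bit mono WAV file.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(sr)
    f.writeframes(tone.tobytes())

# scipy returns the file's own sample rate and the raw integer samples ...
rate, data = wavfile.read("tone.wav")
print(rate, data.dtype)  # 22050 int16

# ... whereas librosa.load("tone.wav") would mix to mono, resample to
# 22050 Hz, and normalize to floats in [-1, +1]. The same normalization
# can be applied to the scipy output manually:
normalized = data.astype(np.float32) / 32768.0
```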

Imbalance Dataset check:

Now we know about the audio files and how to visualize them in audio format. Moving on to data exploration, we will load the CSV data file provided for each audio file and check how many records we have for each class. The data we have is a filename and where it is present, so let us explore the first file; it is present in fold 5 with the category dog bark. Now use the value_counts function to check the records of each class. When you see the output, the data is not imbalanced, and most of the classes have an approximately equal number of records.
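The check can be sketched on a small hypothetical metadata frame (the column names and values below are illustrative stand-ins for the dataset's real CSV):

```python
import pandas as pd

# Hypothetical metadata standing in for the dataset's CSV file.
meta = pd.DataFrame({
    "slice_file_name": ["a.wav", "b.wav", "c.wav", "d.wav", "e.wav", "f.wav"],
    "fold":            [5, 1, 2, 5, 3, 4],
    "class":           ["dog_bark", "crow", "dog_bark", "crow",
                        "sparrow", "sparrow"],
})

# value_counts shows how many recordings each class contributes; a roughly
# uniform count means the dataset is not imbalanced.
counts = meta["class"].value_counts()
print(counts)
```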

Data Preprocessing:

Some audio is recorded at a different rate, like 44 kHz or 22 kHz. Using librosa, it will be at 22 kHz, and then we can see the data in a normalized pattern. Now, our task is to extract some important information and keep our data in the form of independent features (extracted from the audio signal) and dependent features (class labels). We will use Mel-Frequency Cepstral Coefficients (MFCCs) to extract independent features from the audio signals.

MFCCs – The MFCC summarizes the frequency distribution across the window size, so it is possible to analyze both the frequency and time characteristics of the sound. This audio representation allows us to identify features for classification: it converts audio into features based on time and frequency characteristics that help us to do classification. To demonstrate how we apply MFCC in practice, we will first apply it to a single audio file that we are already using.

Now, we have to extract features from all the audio files and prepare the dataframe. We will create a function that takes the filename (the file path where it is present), loads the file using librosa (which gives us two pieces of information), finds the MFCC for the audio data, and, to obtain scaled features, takes the mean of the transpose of the array.

Then, to extract the features for every audio file, we loop over each row in the dataframe, using the TQDM Python library to track progress. Inside the loop, we prepare a customized file path for each file, call the function to extract MFCC features, and append the features and corresponding labels to a newly formed dataframe.
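The scaling step described above (mean of the transposed MFCC matrix) can be sketched as follows. A random matrix stands in for the output of librosa, and the shape (40 coefficients) is an assumption carried over from the model description:

```python
import numpy as np

def scaled_features(mfcc):
    """Collapse a (n_mfcc, n_frames) MFCC matrix into one fixed-size
    vector by averaging each coefficient over time, i.e. the mean of
    the transposed array."""
    return np.mean(mfcc.T, axis=0)

# In the real pipeline the matrix would come from librosa, e.g.:
#   y, sr = librosa.load(file_path)
#   mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
# Here a random matrix stands in so the pooling step can be shown alone.
rng = np.random.default_rng(0)
mfcc = rng.normal(size=(40, 173))      # 40 coefficients x 173 frames

features = scaled_features(mfcc)
print(features.shape)                  # (40,) – one value per coefficient
```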

Splitting the dataset:

Split the dataset into train and test sets: 80% training data and 20% test data.
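The split can be sketched with scikit-learn; the random feature matrix and the ten hypothetical classes are stand-ins for the extracted MFCC dataframe:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in feature matrix (one 40-dim MFCC vector per clip) and labels.
X = np.random.default_rng(0).normal(size=(100, 40))
y = np.repeat(np.arange(10), 10)       # 10 hypothetical classes

# 80% train / 20% test, as described above; stratify keeps the class
# proportions equal in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print(X_train.shape, X_test.shape)     # (80, 40) (20, 40)
```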

Audio Classification Model Creation:

We have extracted features from the audio sample and splitter in the train and test
set. Now we will implement an ANN model using Keras sequential API. The number

19
of classes is 10, which is our output shape(number of classes), and we will create
ANN with 3 dense layers and architecture is explained below.
1. The first layer has 100 neurons. Input shape is 40 according to the number of
features with activation function as Relu, and to avoid any overfitting, we’ll
use the Dropout layer at a rate of 0.5.
2. The second layer has 200 neurons with activation function as Relu and the
drop out at a rate of 0.5.
3. The third layer again has 100 neurons with activation as Relu and the drop out
at a rate of 0.5.

Compile the Model

To compile the model, we need to define the loss function, which is categorical cross-entropy; the accuracy metric, which is the accuracy score; and an optimizer, which is Adam.

Train the Model

We will train the model and save it in HDF5 format. We will train the model for 250 epochs with a batch size of 32. We use a callback, namely a checkpoint, to save the best model and track how the training over the data progresses.
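The model creation, compilation, and training steps above can be sketched with the Keras sequential API. The random stand-in data, the reduced epoch count (the real run uses 250), and the filename best_model.h5 are illustrative assumptions:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import to_categorical

num_labels = 10                        # number of bird classes

# The 3-dense-layer architecture described above (40 MFCC inputs).
model = Sequential([
    Dense(100, activation="relu", input_shape=(40,)),
    Dropout(0.5),
    Dense(200, activation="relu"),
    Dropout(0.5),
    Dense(100, activation="relu"),
    Dropout(0.5),
    Dense(num_labels, activation="softmax"),   # one output per class
])
model.compile(loss="categorical_crossentropy",
              metrics=["accuracy"], optimizer="adam")

# Tiny random stand-in data; the real run feeds the MFCC dataframe.
X = np.random.default_rng(0).normal(size=(64, 40)).astype("float32")
y = to_categorical(np.random.default_rng(1).integers(0, 10, size=64), 10)

# Checkpoint callback keeps the best weights seen during training.
checkpoint = ModelCheckpoint("best_model.h5", save_best_only=True,
                             monitor="loss", verbose=0)
model.fit(X, y, epochs=2, batch_size=32, callbacks=[checkpoint], verbose=0)
print(model.count_params())
```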

Check the Test Accuracy

Now we will evaluate the model on the test data. We got about 97 percent accuracy on the training dataset and 100 percent on the test data.

Saving the Trained Model:

Once you are confident enough to take your trained and tested model into a production-ready environment, the first step is to save it: either natively to a .h5 file, or to a .pkl file using a library like pickle. If you use pickle, make sure it is available in your environment, then import the module and dump the model into a .pkl file.
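The pickle round-trip can be sketched as follows. The dictionary is a stand-in for the trained model object; a real Keras model would instead be saved natively with model.save("model.h5"):

```python
import pickle

# Stand-in for the trained model object.
trained_model = {"weights": [0.1, 0.2, 0.3], "classes": 10}

# Dump the object to a .pkl file ...
with open("model.pkl", "wb") as f:
    pickle.dump(trained_model, f)

# ... and load it back, e.g. inside the Flask app that serves predictions.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)
print(restored == trained_model)       # True
```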

CHAPTER 4

4. SYSTEM TESTING AND IMPLEMENTATION

4.1 SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.

4.1.1 Unit Testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

4.1.2 Integration Testing

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event-driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

4.1.3 Validation Testing

Formalizing the requirements, instead of writing them down only in natural language, and formalizing the expected answers as well, allows the system to be checked against the stated requirements. Thus the proposed system under consideration has been tested using validation testing and found to be working satisfactorily.

4.1.4 User Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires


significant participation by the end user. It also ensures that the system meets the
functional requirements.

4.2 SYSTEM IMPLEMENTATION


Implementation is the most crucial stage in achieving a successful system and in giving users
confidence that the new system is workable and effective. An ontology provides an overarching
framework and vocabulary for describing system components and their relationships; as such,
ontologies represent a means to devise, analyze, and compare information systems. This project
investigates the development of software based on an ontology.

Moreover, it has been pointed out that information overload, the constraints of timeliness,
and the high human and financial costs of medical error mean that it will become increasingly
difficult for physicians to practice high-quality evidence-based medicine without the aid of
computerized decision support systems at the point of care.
The implementation of this website will satisfy the needs of the company as well as its customers.
The effort spent on developing this website results in success only when the system is implemented
effectively.

CONCLUSION
In this project, the significance of crop management was studied extensively. Farmers need
assistance from recent technology to grow their crops, and accurate crop predictions can be
communicated to agriculturists on a timely basis. Many machine learning techniques have been used
to analyze agricultural parameters, and some of these techniques, across different aspects of
agriculture, were examined through a literature study. Emerging neural network and soft computing
techniques play a significant part in providing recommendations. By considering parameters such as
production and season, more personalized and relevant recommendations can be given to farmers,
enabling them to achieve a good volume of production.

BIBLIOGRAPHY

REFERENCES
[1] Shreya S. Bhanose, Kalyani A. Bogawar (2016), "Crop and Yield Prediction Model",
International Journal of Advance Scientific Research and Engineering Trends, Volume 1,
Issue 1, April.

[2] Tripathy, A. K., et al. (2011), "Data Mining and Wireless Sensor Network for Agriculture
Pest/Disease Predictions", 2011 World Congress on Information and Communication
Technologies (WICT), IEEE.

[3] Ramesh Babu Palepu (2017), "An Analysis of Agricultural Soils by using Data Mining
Techniques", International Journal of Engineering Science and Computing, Volume 7,
Issue 10, October.

[4] Rajeswari and K. Arunesh (2016), "Analysing Soil Data using Data Mining Classification
Techniques", Indian Journal of Science and Technology, Volume 9, May.

[5] A. Swarupa Rani (2017), "The Impact of Data Analytics in Crop Management based on
Weather Conditions", International Journal of Engineering Technology Science and
Research, Volume 4, Issue 5, May.

[6] Pritam Bose, Nikola K. Kasabov (2016), "Spiking Neural Networks for Crop Yield
Estimation Based on Spatiotemporal Analysis of Image Time Series", IEEE Transactions
on Geoscience and Remote Sensing.

[7] Priyanka P. Chandak (2017), "Smart Farming System Using Data Mining", International
Journal of Applied Engineering Research, Volume 12, Number 11.

[8] Vikas Kumar, Vishal Dave (2013), "KrishiMantra: Agricultural Recommendation System",
Proceedings of the 3rd ACM Symposium on Computing for Development, January.

[9] Savae Latu (2009), "Sustainable Development: The Role of GIS and Visualisation", The
Electronic Journal on Information Systems in Developing Countries, EJISDC 38, 5, 1-17.

[10] Nasrin Fathima. G (2014), "Agriculture Crop Pattern Using Data Mining Techniques",
International Journal of Advanced Research in Computer Science and Software
Engineering, Volume 4, May.

[11] Ramesh A. Medar (2014), "A Survey on Data Mining Techniques for Crop Yield
Prediction", International Journal of Advance Research in Computer Science and
Management Studies, Volume 2, Issue 9, September.

[12] Shakil Ahamed, A. T. M., Navid Tanzeem Mahmood (2015), "Applying Data Mining
Techniques to Predict Annual Yield of Major Crops and Recommend Planting Different
Crops in Different Districts in Bangladesh", ACIS 16th International Conference on
Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed
Computing (SNPD), IEEE, June.

APPENDICES

A. SYSTEM FLOW DIAGRAM

B. SAMPLE CODING

from flask import Flask, render_template, request

import numpy as np
import pandas as pd
import librosa

from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.models import load_model

app = Flask(__name__)

# Rebuild the label encoder from the saved feature dataframe so that
# predicted class indices can be mapped back to bird species names.
final = pd.read_pickle("extracted_df.pkl")
y = np.array(final["name"].tolist())
le = LabelEncoder()
le.fit(y)

# Trained ANN classifier saved during training.
Model1_ANN = load_model("Model1.h5")

def extract_feature(audio_path):
    # Load the audio file and compute 40 MFCC coefficients, averaged
    # over time to give a single fixed-length feature vector.
    audio_data, sample_rate = librosa.load(audio_path, res_type="kaiser_fast")
    feature = librosa.feature.mfcc(y=audio_data, sr=sample_rate, n_mfcc=40)
    feature_scaled = np.mean(feature.T, axis=0)
    return np.array([feature_scaled])

def ANN_print_prediction(audio_path):
    # Classify the audio file and decode the predicted index to a species name.
    prediction_feature = extract_feature(audio_path)
    predicted_vector = np.argmax(Model1_ANN.predict(prediction_feature), axis=-1)
    predicted_class = le.inverse_transform(predicted_vector)
    return predicted_class[0]

@app.route("/")
@app.route("/first")
def first():
    return render_template('first.html')

@app.route("/login")
def login():
    return render_template('login.html')

@app.route("/index", methods=['GET'])
def index():
    return render_template("index.html")

@app.route("/submit", methods=['GET', 'POST'])
def get_output():
    if request.method == 'POST':
        # Save the uploaded WAV file, then run the prediction pipeline on it.
        audio_file = request.files['wavfile']
        audio_path = "static/tests/" + audio_file.filename
        audio_file.save(audio_path)
        predict_result = ANN_print_prediction(audio_path)
        return render_template("prediction.html", prediction=predict_result,
                               audio_path=audio_path)

@app.route("/chart")
def chart():
    return render_template('chart.html')

if __name__ == '__main__':
    app.run(debug=True)

<!doctype html>

<html lang="en">

<head>

<meta charset="utf-8">

<meta name="viewport" content="width=device-width, initial-scale=1">

<meta name="description" content="">

<meta name="author" content="">

<title>Automated Bird Species Identification using Audio Signal Processing and Neural Network </title>

<!-- CSS FILES -->

<link rel="preconnect" href="https://fonts.googleapis.com">

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<link href="https://fonts.googleapis.com/css2?family=DM+Sans:wght@400;500;700&display=swap"
rel="stylesheet">

<link href="../static/css/bootstrap.min.css" rel="stylesheet">

<link href="../static/css/bootstrap-icons.css" rel="stylesheet">

<link href="../static/css/templatemo-leadership-event.css" rel="stylesheet">

<!--

TemplateMo 575 Leadership Event

https://templatemo.com/tm-575-leadership-event

-->

</head>

<body>

<nav class="navbar navbar-expand-lg">

<div class="container">

<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav"


aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">

<span class="navbar-toggler-icon"></span>

</button>

<a href="index.html" class="navbar-brand mx-auto mx-lg-0">

<i class="bi-bullseye brand-logo"></i>

<span class="brand-text">Bird Audio<br>Detection </span>

</a>

<div class="collapse navbar-collapse" id="navbarNav">

<ul class="navbar-nav ms-auto">

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('first')}}">Home</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('login')}}">Login</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('index')}}">Preview</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('chart')}}">Chart</a>

</li>

</ul>

<div>

</div>

</nav>

<main>

<section class="call-to-action section-padding">

<div class="container">

<div class="row align-items-center">

<div class="col-lg-7 col-12">

<h2 class="text-white mb-4">Automated Bird Species Identification using Audio Signal Processing
and Neural Network <u class="text-info"></u></h2>

<p class="text-white"></p>

</div>

<div class="col-lg-3 col-12 ms-lg-auto mt-4 mt-lg-0">

<a href="#section_5" class="custom-btn btn">Chart</a>

</div>

</div>

</div>

</section>

<section class="pricing section-padding" id="section_5">

<div class="container">

<div class="row">

<div class="row">

<div class="col-sm-12">

<div class="title-box text-center">

<center><h3 class="title-a">

Graph

</h3></center>

<body>

<div class="col-lg-8 mx-auto">


<div><span style="margin-left:100px;color:red"><font size="12">Model Accuracy</font></span></div>

<br>

<br>

<center><div><img style="margin-left:-70px;width:800px;height:500px;" src="../static/result.png"></div></center>

</div>

<br>

<br>

<br>

<div class="col-lg-8 mx-auto">


<div><span style="margin-left:100px;color:red"><font size="12">Model Loss</font></span></div>

<br>

<br>

<center><div><img style="margin-left:-70px;width:800px;height:500px;" src="../static/results.png"></div></center>

</div>

</body>

</div>

</div>

</div>

</div>

</div>

</section>

<footer class="site-footer">

<div class="container">

<div class="row align-items-center">

<div class="col-lg-12 col-12 border-bottom pb-5 mb-5">

<div class="d-flex">

</div>

</div>

<div class="col-lg-7 col-12">

</div>

<div class="col-lg-5 col-12 ms-lg-auto">

</div>

</div>

</div>

</footer>

<!-- JAVASCRIPT FILES -->

<script src="../static/js/jquery.min.js"></script>

<script src="../static/js/bootstrap.min.js"></script>

<script src="../static/js/jquery.sticky.js"></script>

<script src="../static/js/click-scroll.js"></script>

<script src="../static/js/custom.js"></script>

</body>

</html>

<!doctype html>

<html lang="en">

<head>

<meta charset="utf-8">

<meta name="viewport" content="width=device-width, initial-scale=1">

<meta name="description" content="">

<meta name="author" content="">

<title>Automated Bird Species Identification using Audio Signal Processing and Neural Network </title>

<!-- CSS FILES -->

<link rel="preconnect" href="https://fonts.googleapis.com">

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<link href="https://fonts.googleapis.com/css2?family=DM+Sans:wght@400;500;700&display=swap"
rel="stylesheet">

<link href="../static/css/bootstrap.min.css" rel="stylesheet">

<link href="../static/css/bootstrap-icons.css" rel="stylesheet">

<link href="../static/css/templatemo-leadership-event.css" rel="stylesheet">

<!--

TemplateMo 575 Leadership Event

https://templatemo.com/tm-575-leadership-event

-->

</head>

<body>

<nav class="navbar navbar-expand-lg">

<div class="container">

<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav"


aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">

<span class="navbar-toggler-icon"></span>

</button>

<a href="index.html" class="navbar-brand mx-auto mx-lg-0">

<i class="bi-bullseye brand-logo"></i>

<span class="brand-text">Bird Audio<br>Detection </span>

</a>

<a class="nav-link custom-btn btn d-lg-none" href="#"></a>

<div class="collapse navbar-collapse" id="navbarNav">

<ul class="navbar-nav ms-auto">

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('first')}}">Home</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('login')}}">Login</a>

</li>

</ul>

<div>

</div>

</nav>

<main>

<section class="hero" id="section_1">

<div class="container">

<div class="row">

<div class="col-lg-5 col-12 m-auto">

<div class="hero-text">

<h1 class="text-white mb-4"><u class="text-info"></u> Automated Bird Species Identification
using Audio Signal Processing and Neural Network </h1>

</div>

</div>

</div>

</div>

<div class="video-wrap">

<video autoplay="" loop="" muted="" class="custom-video" poster="">

<source src="../static/videos/birds.mp4" type="video/mp4">

</video>

</div>

</section>

<section class="highlight">

<div class="container">

<div class="row">

<div class="col-lg-4 col-md-6 col-12">

<div class="highlight-thumb">

<img src="../static/images/highlight/alexandre-pellaes-6vAjp0pscX0-unsplash.jpg"
class="highlight-image img-fluid" alt="">

<div class="highlight-info">

<h3 class="highlight-title"></h3>

</div>

</div>

</div>

<div class="col-lg-4 col-md-6 col-12">

<div class="highlight-thumb">

<img src="../static/images/highlight/miguel-henriques--8atMWER8bI-unsplash.jpg"
class="highlight-image img-fluid" alt="">

<div class="highlight-info">

<h3 class="highlight-title"></h3>

</div>

</div>

</div>

<div class="col-lg-4 col-md-6 col-12">

<div class="highlight-thumb">

<img src="../static/images/highlight/jakob-dalbjorn-cuKJre3nyYc-unsplash.jpg"
class="highlight-image img-fluid" alt="">

<div class="highlight-info">

<h3 class="highlight-title"></h3>

</div>

</div>

</div>

</div>

</div>

</section>

<footer class="site-footer">

<div class="container">

<div class="row align-items-center">

<div class="col-lg-12 col-12 border-bottom pb-5 mb-5">

<div class="d-flex">

</div>

</div>

<div class="col-lg-7 col-12">

</div>

<div class="col-lg-5 col-12 ms-lg-auto">

</div>

</div>

</div>

</footer>

<!-- JAVASCRIPT FILES -->

<script src="../static/js/jquery.min.js"></script>

<script src="../static/js/bootstrap.min.js"></script>

<script src="../static/js/jquery.sticky.js"></script>

<script src="../static/js/click-scroll.js"></script>

<script src="../static/js/custom.js"></script>

</body>

</html>

<!doctype html>

<html lang="en">

<head>

<meta charset="utf-8">

<meta name="viewport" content="width=device-width, initial-scale=1">

<meta name="description" content="">

<meta name="author" content="">

<title>Automated Bird Species Identification using Audio Signal Processing and Neural Network </title>

<!-- CSS FILES -->

<link rel="preconnect" href="https://fonts.googleapis.com">

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<link href="https://fonts.googleapis.com/css2?family=DM+Sans:wght@400;500;700&display=swap"
rel="stylesheet">

<link href="../static/css/bootstrap.min.css" rel="stylesheet">

<link href="../static/css/bootstrap-icons.css" rel="stylesheet">

<link href="../static/css/templatemo-leadership-event.css" rel="stylesheet">

<!--

TemplateMo 575 Leadership Event

https://templatemo.com/tm-575-leadership-event

-->

</head>

<body>

<nav class="navbar navbar-expand-lg">

<div class="container">

<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav"


aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">

<span class="navbar-toggler-icon"></span>

</button>

<a href="index.html" class="navbar-brand mx-auto mx-lg-0">

<i class="bi-bullseye brand-logo"></i>

<span class="brand-text">Bird Audio<br>Detection </span>

</a>

<a class="nav-link custom-btn btn d-lg-none" href="#"></a>

<div class="collapse navbar-collapse" id="navbarNav">

<ul class="navbar-nav ms-auto">

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('first')}}">Home</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('login')}}">Login</a>

</li>

</ul>

<div>

</div>

</nav>

<main>

<section class="contact section-padding" id="section_7">

<div class="container">

<div class="row">

<div class="col-lg-8 col-12 mx-auto">

<form class="custom-form contact-form bg-white shadow-lg" action="/submit" method="POST"
enctype="multipart/form-data" >

<center> <h2>Preview</h2> </center>

<div class="row">

<head>

<script>

function previewImage() {

    var file = document.getElementById("wavfile").files;

    if (file.length > 0) {

        var fileReader = new FileReader();

        fileReader.onload = function (event) {

            document.getElementById("preview").setAttribute("src", event.target.result);

        };

        fileReader.readAsDataURL(file[0]);

    }

}

</script>

</head>

<body>

<h3> </h3>

<div class="limiter">

<div class="container-login100" style="background-image: url(images/background1.jpg); background-size:


fill;">

<div class="wrap-login100">

<div class="login100-form-title" style=" background-image: url(images/background2.jpg);">

<center><span class="login100-form-title-1">

<h3 class="title-a"> Upload Audio: </h3>

</span></center>

</div>

<br>

<br>

<div class="col-md-5 col-lg-4" style="margin-left:250px">

<div class="wrap-input100 validate-input m-b-26" data-validate="Username is required">

<br>

<input type="file" id="wavfile" name="wavfile" required>

</div>

</div>

<img id="preview">

<div class="container-login100-form-btn">

<br>

<br>

<div class="col-md-5 col-lg-4" style="margin-left:250px"> <button class="login100-form-btn"


type="submit"> Submit </button></div>

</div>

</div>

</div>

</div>

</div>

</body>

</div>

</form>

</div>

</div>

</div>

</section>

</main>

<footer class="site-footer">

<div class="container">

<div class="row align-items-center">

<div class="col-lg-12 col-12 border-bottom pb-5 mb-5">

<div class="d-flex">

</div>

</div>

<div class="col-lg-7 col-12">

</div>

<div class="col-lg-5 col-12 ms-lg-auto">

<div class="copyright-text-wrap d-flex align-items-center">

</div>

</div>

</div>

</div>

</footer>

<!-- JAVASCRIPT FILES -->

<script src="../static/js/jquery.min.js"></script>

<script src="../static/js/bootstrap.min.js"></script>

<script src="../static/js/jquery.sticky.js"></script>

<script src="../static/js/click-scroll.js"></script>

<script src="../static/js/custom.js"></script>

</body>

</html>

<!doctype html>

<html lang="en">

<head>

<meta charset="utf-8">

<meta name="viewport" content="width=device-width, initial-scale=1">

<meta name="description" content="">

<meta name="author" content="">

<title>Automated Bird Species Identification using Audio Signal Processing and Neural Network </title>

<!-- CSS FILES -->

<link rel="preconnect" href="https://fonts.googleapis.com">

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<link href="https://fonts.googleapis.com/css2?family=DM+Sans:wght@400;500;700&display=swap"
rel="stylesheet">

<link href="../static/css/bootstrap.min.css" rel="stylesheet">

<link href="../static/css/bootstrap-icons.css" rel="stylesheet">

<link href="../static/css/templatemo-leadership-event.css" rel="stylesheet">

<!--

TemplateMo 575 Leadership Event

https://templatemo.com/tm-575-leadership-event

-->

</head>

<body>

<nav class="navbar navbar-expand-lg">

<div class="container">

<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav"


aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">

<span class="navbar-toggler-icon"></span>

</button>

<a href="index.html" class="navbar-brand mx-auto mx-lg-0">

<i class="bi-bullseye brand-logo"></i>

<span class="brand-text">Bird Audio<br>Detection </span>

</a>

<a class="nav-link custom-btn btn d-lg-none" href="#"></a>

<div class="collapse navbar-collapse" id="navbarNav">

<ul class="navbar-nav ms-auto">

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('first')}}">Home</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('login')}}">Login</a>

</li>

</ul>

<div>

</div>

</nav>

<main>

<section class="call-to-action section-padding">

<div class="container">

<div class="row align-items-center">

<div class="col-lg-7 col-12">

<h2 class="text-white mb-4"> Automated Bird Species Identification using Audio Signal Processing
and Neural Network <u class="text-info"></u></h2>

<p class="text-white"></p>

</div>

<div class="col-lg-3 col-12 ms-lg-auto mt-4 mt-lg-0">

<a href="#section_5" class="custom-btn btn">Login</a>

</div>

</div>

</div>

</section>

<section class="pricing section-padding" id="section_5">

<div class="container">

<div class="row">

<div class="col-lg-10 col-12 text-center mx-auto mb-5">

<h2>Login <u class="text-info"></u></h2>

</div>

<script>

addEventListener("load", function () {

    setTimeout(hideURLbar, 0);

}, false);

function hideURLbar() {

    window.scrollTo(0, 1);

}

function login() {

    var uname = document.getElementById("uname").value;

    var pwd = document.getElementById("pwd").value;

    if (uname == "admin" && pwd == "admin") {

        alert("Login Success!");

        window.location = "{{url_for('index')}}";

        return false;

    } else {

        alert("Invalid Credentials!");

    }

}

</script>

</head>

<body id="page-top">

<!-- Portfolio Section -->

<section class="page-section portfolio" id="portfolio">

<br>

<br>

<div class="row">

<div class="section-title">

</div>

<!-- Portfolio Item 1 -->

<div class="col-md-5 col-lg-4" style="margin-left:420px">

<div class="control-group">

<!-- Username -->

<label class="control-label" for="username"><b>Username</b></label>

<div class="controls">

<input type="text" id="uname" name="uname" placeholder="Username"


class="form-control">

</div>

</div>

<br>

<div class="control-group">

<!-- Password-->

<label class="control-label" for="password"><b>Password</b></label>

<div class="controls">

<input type="password" id="pwd" name="pwd" placeholder="password"


class="form-control">

</div>

</div>

<div class="col-md-9 col-lg-6" style="margin-left:-290px">

<div class="control-group">

<!-- Button -->

<br>

<div class="controls">

<input type="button" class="btn btn-primary" value="Login" style="margin-left:


470px" onclick="login()">

</div>

</div>

</div>

</div>

</div>

</section>

</body>

</div>

</div>

</section>

<footer class="site-footer">

<div class="container">

<div class="row align-items-center">

<div class="col-lg-12 col-12 border-bottom pb-5 mb-5">

<div class="d-flex">

</div>

</div>

<div class="col-lg-7 col-12">

</div>

<div class="col-lg-5 col-12 ms-lg-auto">

</div>

</div>

</div>

</footer>

<!-- JAVASCRIPT FILES -->

<script src="../static/js/jquery.min.js"></script>

<script src="../static/js/bootstrap.min.js"></script>

<script src="../static/js/jquery.sticky.js"></script>

<script src="../static/js/click-scroll.js"></script>

<script src="../static/js/custom.js"></script>

</body>

</html>

<!doctype html>

<html lang="en">

<head>

<meta charset="utf-8">

<meta name="viewport" content="width=device-width, initial-scale=1">

<meta name="description" content="">

<meta name="author" content="">

<title>Automated Bird Species Identification using Audio Signal Processing and Neural Network </title>

<!-- CSS FILES -->

<link rel="preconnect" href="https://fonts.googleapis.com">

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<link href="https://fonts.googleapis.com/css2?family=DM+Sans:wght@400;500;700&display=swap"
rel="stylesheet">

<link href="../static/css/bootstrap.min.css" rel="stylesheet">

<link href="../static/css/bootstrap-icons.css" rel="stylesheet">

<link href="../static/css/templatemo-leadership-event.css" rel="stylesheet">

<!--

TemplateMo 575 Leadership Event

https://templatemo.com/tm-575-leadership-event

-->

</head>

<body>

<nav class="navbar navbar-expand-lg">

<div class="container">

<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarNav"


aria-controls="navbarNav" aria-expanded="false" aria-label="Toggle navigation">

<span class="navbar-toggler-icon"></span>

</button>

<a href="index.html" class="navbar-brand mx-auto mx-lg-0">

<i class="bi-bullseye brand-logo"></i>

<span class="brand-text">Bird Audio<br>Detection </span>

</a>

<a class="nav-link custom-btn btn d-lg-none" href="#"></a>

<div class="collapse navbar-collapse" id="navbarNav">

<ul class="navbar-nav ms-auto">

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('first')}}">Home</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('login')}}">Login</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('index')}}">Preview</a>

</li>

<li class="nav-item">

<a class="nav-link click-scroll" href="{{url_for('chart')}}">Chart</a>

</li>

</ul>

<div>

</div>

</nav>

<main>

<section class="contact section-padding" id="section_7">

<div class="container">

<div class="row">

<div class="col-lg-8 col-12 mx-auto">

<form class="custom-form contact-form bg-white shadow-lg" action="#" method="post"


role="form">

<h2>Prediction</h2>

<div class="row">

<center>

<center><h3 class="title-a">

Bird Species Identification using Audio Signal Processing

</h3> </center>

<br>

<br>

<head>

</head>

<div style="margin-left:150px">

<body>

<div class="navbar-header col-md-1 col-sm-1">

</div>

<div class="content">

<div class="col-lg-12">

{% if prediction %}

<div class="col-md-5 col-lg-4" style="margin-left:-220px"> <audio controls>

<source src="{{audio_path}}" type="audio/wav">

</audio></div>

<br>

<br>

<div class="col-md-5 col-lg-4" style="margin-left:-200px">Bird Species is : <h3> {{prediction}} </h3> </div>

{% endif %}

</div>

</body> </div>

</div>

</center> </div>

</form>

</div>

</div>

</div>

</section>

</main>

<footer class="site-footer">

<div class="container">

<div class="row align-items-center">

<div class="col-lg-12 col-12 border-bottom pb-5 mb-5">

<div class="d-flex">

</div>

</div>

<div class="col-lg-7 col-12">

</div>

<div class="col-lg-5 col-12 ms-lg-auto">

<div class="copyright-text-wrap d-flex align-items-center">

</div>

</div>

</div>

</div>

</footer>

<!-- JAVASCRIPT FILES -->

<script src="../static/js/jquery.min.js"></script>

<script src="../static/js/bootstrap.min.js"></script>

<script src="../static/js/jquery.sticky.js"></script>

<script src="../static/js/click-scroll.js"></script>

<script src="../static/js/custom.js"></script>

</body>

</html>

C. SAMPLE INPUT

D. SAMPLE OUTPUT

