Volume Control Using Gestures
Abstract:- In this paper we develop a volume controller that uses hand gestures as the input to control the system. The OpenCV module is used in this implementation to handle the gestures. The system uses a web camera to record or capture images and videos, and on the basis of this input the application controls the volume of the system. The main functions are to increase and decrease the system volume. The project is implemented using Python and OpenCV.

We can use our hand gestures to control basic operations of a computer, such as increasing and decreasing the volume. People therefore do not have to learn machine-like skills, which are a burden most of the time. This type of hand gesture system provides a natural and innovative modern way of non-verbal communication, and such systems have a wide area of application in human-computer interaction.

The purpose of this project is to discuss a volume control system based on the detection and recognition of hand gestures. The system consists of a high-resolution camera to recognise the gesture given as input by the user. The main goal of hand gesture recognition is to create a system that can identify human hand gestures and use them as information for controlling devices; using real-time gesture recognition, a user can control a computer by making hand gestures in front of a video camera linked to the computer. In this project we develop a hand gesture volume controller with the help of OpenCV and Python, so that the system can be controlled by hand gestures without making use of the keyboard and mouse.

Keywords:- Hand Gesture, OpenCV-Python, Volume Controller, MediaPipe Package, NumPy Package, Human-Computer Interface.
I. INTRODUCTION

Hand gestures are a powerful communication medium for Human-Computer Interaction (HCI). Several input devices are available for interacting with a computer, such as the keyboard, mouse, joystick and touch screen, but these devices do not provide an easy way to communicate. The proposed system works with a desktop or laptop interface; to provide hand gestures, users can either wear data gloves or use a web camera or a separate camera to record the hand gestures.

The first and most important step toward any hand gesture recognition system is to implement hand tracking. Data-Glove based methods generally use sensor devices to digitize hand and finger motions into multi-parametric data, with additional sensors collecting the hand configuration and hand movements. The vision-based method requires only a web camera, so that one can realize natural interaction between humans and the computer without using any other devices.

The challenging part of these systems is the background of the images or videos captured while taking the input, i.e. the user's hand gesture; lighting also sometimes affects the quality of the input, which creates problems in recognizing the gestures. The process of finding a connected region within an image whose pixels share some property such as colour, intensity or a mutual relationship (i.e. a pattern) is termed segmentation. The important packages used here are OpenCV-Python, TensorFlow, NumPy, MediaPipe, imutils and SciPy.
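As a simple illustration of colour-based segmentation, the sketch below thresholds a frame in HSV space with OpenCV; the file name and the skin-tone bounds are assumptions for illustration only and would have to be tuned for a real camera and lighting conditions.

import cv2
import numpy as np

# Hypothetical sample frame; in the real system this would come from the web camera.
frame = cv2.imread("hand_sample.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Approximate skin-tone range in HSV (assumed values, not taken from the paper).
lower = np.array([0, 30, 60], dtype=np.uint8)
upper = np.array([20, 150, 255], dtype=np.uint8)

# Pixels that share the colour property form the segmented (connected) region.
mask = cv2.inRange(hsv, lower, upper)
segmented = cv2.bitwise_and(frame, frame, mask=mask)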
II. EXISTING SYSTEM

Gesture Recognition using an Accelerometer - The author introduced an ANN application for classification and gesture recognition. A Wii remote is used in this system, as the remote can be rotated in the X, Y and Z directions. In order to decrease the cost and memory requirements of the system, the author implemented it in two levels. In the first level the user is authenticated for gesture recognition; the recognition method used by the author is accelerometer-based. In the second level the signals are processed for gesture recognition using fuzzy automata, after which the data is normalized using k-means and the Fast Fourier algorithm. With this approach the recognition accuracy increases to up to 95%.

Hand Gesture Recognition using Hidden Markov Models - In this paper the author built a system to recognize the numbers 0-9 from dynamic hand gestures. Two steps are used: preprocessing is done in the initial stage and classification is carried out in the second step. There are basically two types of gestures, key gestures and link gestures; key gestures are used for spotting, while link gestures are used in continuous gestures. A Discrete Hidden Markov Model (DHMM) is used for classification and is trained with the Baum-Welch algorithm. Average recognition rates using the HMM range from 93.84% to 97.34%.

Robust Part-Based Hand Gesture Recognition Using a Kinect Sensor - Here the author used low-cost cameras in order to make the system affordable for users. A Kinect sensor has a lower resolution than other cameras but can detect and capture large images and objects. To deal with noisy hand shapes, only the fingers are matched with FEMD rather than the whole hand. The system works efficiently in uncontrolled environments, and an accuracy of 93.2% is achieved in the experimental results.

Hand gestures are an active area of research in the vision community, for the purposes of sign language recognition and human-computer interaction. In this work we use several algorithms and modules to detect the gestures of a person, and these gestures are taken as the input to the system. Several modules, such as opencv-python, mediapipe and numpy, are used for tracking the gestures.

After capturing the input from the user, the image is passed to the hand tracking system to check the dimensions and shape of the gesture received by the system.

The hand tracking module plays an important role in identifying the input recorded by the system; after that, classification and segmentation are used to classify the gestures. Machine learning and deep learning are also used to learn from the training data and identify gestures according to the requirements of the system. The gestures are then identified from the trained data and, on the basis of that data, the recognized gestures are used to implement functions such as increasing and decreasing the volume.
In this project we use Python to develop the application; the code is written and designed in Python using the OpenCV and NumPy modules. We first import the libraries that are used for further processing of the input and the output: the libraries which need to be imported are OpenCV, mediapipe, math, ctypes, pycaw and numpy. We then get video input from the primary camera.
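A minimal sketch of this setup is given below, assuming a standard Windows environment with the listed packages installed; the loop only reads and displays frames from the default camera and is not the complete controller described in this paper.

import cv2
import math
import numpy as np
import mediapipe as mp
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL
# pycaw and comtypes give access to the system speaker (used in the next step).
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

# Capture video from the primary (default) camera.
cap = cv2.VideoCapture(0)

while True:
    success, frame = cap.read()              # one BGR frame per iteration
    if not success:
        break
    cv2.imshow("Gesture Volume Control", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):    # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()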
MediaPipe is then used to detect the hand in the video input from the camera, and the Hands module (mp.solutions.hands.Hands) is used to detect the gesture. Then, in order to access the speaker, we use pycaw and obtain the range of the system volume from the minimum to the maximum value.
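A sketch of the hand detector and the speaker interface is shown below, following the commonly used MediaPipe and pycaw patterns; the confidence thresholds and the choice of a single hand are assumptions rather than values taken from this paper.

import mediapipe as mp
from ctypes import cast, POINTER
from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

# MediaPipe hand detector (one hand, assumed confidence thresholds).
mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=False,
                       max_num_hands=1,
                       min_detection_confidence=0.7,
                       min_tracking_confidence=0.7)
mp_draw = mp.solutions.drawing_utils

# Access the default speaker endpoint through pycaw (Windows only).
devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))

# Volume range in decibels, e.g. roughly (-65.25, 0.0) on many systems.
min_vol, max_vol, _ = volume.GetVolumeRange()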
The next step is to convert the input image to an RGB image so that the captured input can be processed, and then to locate the landmark points of the thumb and the fingers in the input.
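A sketch of this step is given below, continuing the variable names assumed in the earlier sketches; it uses MediaPipe's standard hand-landmark indices, where 4 is the thumb tip and 8 is the index fingertip.

# Inside the capture loop: MediaPipe expects RGB, while OpenCV delivers BGR frames.
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
results = hands.process(rgb)

thumb_tip, index_tip = None, None
if results.multi_hand_landmarks:
    hand = results.multi_hand_landmarks[0]
    h, w, _ = frame.shape
    # Landmark 4 = thumb tip, landmark 8 = index fingertip (normalized coordinates).
    thumb_tip = (int(hand.landmark[4].x * w), int(hand.landmark[4].y * h))
    index_tip = (int(hand.landmark[8].x * w), int(hand.landmark[8].y * h))
    mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)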
The volume range is then mapped from the hand range; NumPy is used to perform this conversion and produce the required output. NumPy is the fundamental package for scientific computing in Python. It consists of several things, such as:
a powerful N-dimensional array object,
broadcasting functions,
tools to integrate C code, and
Fourier transform and random number capabilities.
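The mapping itself can be sketched with numpy's interp, again continuing the variable names assumed in the earlier sketches; the hand-distance range of roughly 20-200 pixels is an illustrative assumption that would need to be calibrated for a real camera.

# Distance between thumb tip and index fingertip, in pixels.
if thumb_tip and index_tip:
    length = math.hypot(index_tip[0] - thumb_tip[0],
                        index_tip[1] - thumb_tip[1])

    # Map the hand range (assumed ~20-200 px) onto the speaker's decibel range.
    vol_db = np.interp(length, [20, 200], [min_vol, max_vol])
    volume.SetMasterVolumeLevel(vol_db, None)

    # Optional: a 0-100% value for display purposes.
    vol_percent = np.interp(length, [20, 200], [0, 100])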
V. RESULTS
Fig. 2 - Fig. 5
REFERENCES