Course Title: ECE3003 Microcontrollers and Its Applications (E2 Slot)
Attendance Through Face Recognition
Team Members –
ABSTRACT
CHAPTER 1
1.1 AIM
1.2 OBJECTIVE
1.3 INTRODUCTION
CHAPTER 2
2.1 RELATED WORKS OR LITERATURE SURVEY
2.2 EXISTING AND PROPOSED SYSTEM
CHAPTER 3
3.1 PROPOSED SYSTEM DESIGN ARCHITECTURE
3.2 ARCHITECTURE EXPLANATION
3.3 ALGORITHMS AND PSEUDOCODE
CHAPTER 4
4.1 RESULTS AND DISCUSSION
4.2 CONCLUSION AND FUTURE WORK
REFERENCES
Abstract:
This project uses machine learning to build an Automatic Attendance System based on face recognition. The system recognizes faces, records attendance automatically, and can update any database we choose to send it to. The entire project is developed on a Raspberry Pi with a camera module. The interface is fully wireless, as the user can turn the system on and off with the VNC Android app. The system can recognize multiple users: it captures a video feed from the camera module and processes the image frames with OpenCV to detect any unknown faces. Once it has detected a registered user, it automatically writes the user's name to a file and can update the database whenever the user wishes, even from the mobile app.
Chapter 1
1.1 Aim
The aim of the Automatic Attendance System is to record attendance without any human intervention and to update it directly to any given database.
1.2 Objective
Objectives of the project are:
1) The faculty/user can start recording attendance via face recognition at any given time.
2) To make it convenient for students to get attendance hassle-free.
3) To make it easier for faculty, as the system automatically updates the attendance whenever they want, without any manual work (a sketch of such an update follows this list).
4) To allow monitoring from anywhere: faculty can take attendance even from their mobile phones via the VNC app.
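As an illustration of objective 3, the following is a minimal sketch of pushing the recorded attendance into a database. It assumes the recognition script appends lines of the form "<name> <time>" to attendance.txt and uses a local SQLite file named attendance.db; both file names are illustrative, not fixed by the project.

import sqlite3

def update_database(log_path="attendance.txt", db_path="attendance.db"):
    # Copy every "<name> <time>" line from the attendance log into the database
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS attendance (name TEXT, time TEXT)")
    with open(log_path) as log:
        for line in log:
            name, _, tim = line.strip().partition(" ")
            if name:
                conn.execute("INSERT INTO attendance VALUES (?, ?)", (name, tim))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    update_database()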
1.3 Introduction
We chose this project because it gives the faculty complete wireless control of the attendance system. The current system (fingerprint-based) requires the faculty to be present in class to take attendance, and some students slip away after marking attendance when the faculty is not watching. A key strength of our project is its minimal hardware: the basic setup consists of only a Raspberry Pi and a camera module, so it can be implemented at minimal cost. Installation is cheap, and the system can be triggered from anywhere in the world; the user can simply log into our app and start taking attendance from anywhere on earth thanks to the VNC cloud connection. In this project we took a Raspberry Pi with a camera module attached and developed a simple machine learning model to identify the students of a given class. The camera module captures a video feed of the class, which the Raspberry Pi analyzes using the OpenCV library. As soon as a student is detected, the system matches the face against the images in the database, records the name in a file, and updates the database whenever the faculty wants. The faculty can open the VNC Viewer app to trigger the process even when they are not in the class. We used VNC Server and Viewer to control the system wirelessly, to interface with it, and to control its execution. The most natural deployment of the system is in the classrooms of VIT itself. Compared with installing fingerprint readers everywhere, we can reduce the hardware cost and increase convenience for students, who otherwise stand in queues for a long time to mark attendance or even to enter or leave the campus. Since this system requires minimal hardware, the features implemented give a better user experience to both faculty and students.
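Because the system must recognize every student in a class, the single-image enrolment shown in the script later in this report generalizes naturally to a folder of reference photos. The following is a minimal sketch, assuming a hypothetical students/ folder containing one <name>.jpg per student; the folder layout and naming convention are assumptions for illustration, not part of the project.

import os
import face_recognition

known_face_encodings = []
known_face_names = []
for filename in os.listdir("students"):
    if not filename.endswith(".jpg"):
        continue
    image = face_recognition.load_image_file(os.path.join("students", filename))
    encodings = face_recognition.face_encodings(image)
    if encodings:
        # Use the first detected face in each reference photo
        known_face_encodings.append(encodings[0])
        known_face_names.append(os.path.splitext(filename)[0])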
Chapter 2
2.1 Related Works or Literature Survey
As more and more devices connect to the internet, security concerns rise sharply. A strength of this system is that it is implemented locally on a Raspberry Pi, which avoids the privacy concerns about what hackers could do with the data. The system could be compromised if it were connected to the internet, but we found a way to keep it completely offline while still updating the database automatically, or on whatever schedule we need.
Nowadays, as most devices become smarter, we can update the way we authenticate a user to increase convenience for everyone. This lets users control things locally, such as letting a person in or out quickly and accurately. The system can also detect whether a user's behaviour is anomalous and help security staff check that individual alone.
The recognition script that runs on the Raspberry Pi is listed below.
import cv2
import numpy as np
import face_recognition
from datetime import datetime

video_capture = cv2.VideoCapture(0)

# Load a reference photo for each student and compute its face encoding
goutham_image = face_recognition.load_image_file("goutham.jpg")
goutham_face_encoding = face_recognition.face_encodings(goutham_image)[0]

known_face_encodings = [
    goutham_face_encoding
]
known_face_names = [
    "Goutham"
]

f = open("attendance.txt", "a")
flag = 0
process_this_frame = True

while True:
    ret, frame = video_capture.read()
    if not ret:
        break

    if process_this_frame:
        # Resize frame of video to 1/5 size for faster face recognition processing
        small_frame = cv2.resize(frame, (0, 0), fx=0.2, fy=0.2)

        # Convert the image from BGR color (which OpenCV uses) to RGB color
        # (which face_recognition uses)
        rgb_small_frame = small_frame[:, :, ::-1]

        # Find all faces and their encodings in the current frame
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:
            # See if the face is a match for the known face(s)
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # Use the known face with the smallest distance to the new face
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]
                # Record the student's name and the current time
                tim = datetime.now().strftime("%H:%M:%S")
                f.write(name + " " + tim + "\n")
                flag = 1
            face_names.append(name)

        if flag == 1:
            break

    process_this_frame = not process_this_frame

f.close()
video_capture.release()
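Two details keep this loop fast enough for the Raspberry Pi's limited CPU: each frame is downscaled to one fifth of its original size before encoding, and process_this_frame alternates so only every other frame is analyzed. Once a known student is matched, the name and timestamp are appended to the attendance file and the loop exits.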
(Model.py)
import utils
from utils import LRN2D
from keras.layers import (Input, Conv2D, Activation, BatchNormalization,
                          MaxPooling2D, ZeroPadding2D, concatenate)

def create_model():
    myInput = Input(shape=(96, 96, 3))

    # NOTE: this listing is an excerpt. The stem layers that produce x, and
    # the branch concatenations that form inception_3a, inception_3b,
    # inception_3c, inception_4a, inception_4e and inception_5a, are
    # omitted here.

    # Inception 3a
    inception_3a_3x3 = Conv2D(96, (1, 1), name='inception_3a_3x3_conv1')(x)
    inception_3a_3x3 = BatchNormalization(axis=3, epsilon=0.00001,
                                          name='inception_3a_3x3_bn1')(inception_3a_3x3)
    inception_3a_3x3 = Activation('relu')(inception_3a_3x3)
    inception_3a_3x3 = ZeroPadding2D(padding=(1, 1))(inception_3a_3x3)
    inception_3a_3x3 = Conv2D(128, (3, 3),
                              name='inception_3a_3x3_conv2')(inception_3a_3x3)
    inception_3a_3x3 = BatchNormalization(axis=3, epsilon=0.00001,
                                          name='inception_3a_3x3_bn2')(inception_3a_3x3)
    inception_3a_3x3 = Activation('relu')(inception_3a_3x3)

    # Inception 3b
    inception_3b_3x3 = Conv2D(96, (1, 1),
                              name='inception_3b_3x3_conv1')(inception_3a)
    inception_3b_3x3 = BatchNormalization(axis=3, epsilon=0.00001,
                                          name='inception_3b_3x3_bn1')(inception_3b_3x3)
    inception_3b_3x3 = Activation('relu')(inception_3b_3x3)
    inception_3b_3x3 = ZeroPadding2D(padding=(1, 1))(inception_3b_3x3)
    inception_3b_3x3 = Conv2D(128, (3, 3),
                              name='inception_3b_3x3_conv2')(inception_3b_3x3)
    inception_3b_3x3 = BatchNormalization(axis=3, epsilon=0.00001,
                                          name='inception_3b_3x3_bn2')(inception_3b_3x3)
    inception_3b_3x3 = Activation('relu')(inception_3b_3x3)

    # Inception 3c
    inception_3c_3x3 = utils.conv2d_bn(inception_3b,
                                       layer='inception_3c_3x3',
                                       cv1_out=128,
                                       cv1_filter=(1, 1),
                                       cv2_out=256,
                                       cv2_filter=(3, 3),
                                       cv2_strides=(2, 2),
                                       padding=(1, 1))
    inception_3c_5x5 = utils.conv2d_bn(inception_3b,
                                       layer='inception_3c_5x5',
                                       cv1_out=32,
                                       cv1_filter=(1, 1),
                                       cv2_out=64,
                                       cv2_filter=(5, 5),
                                       cv2_strides=(2, 2),
                                       padding=(2, 2))

    # Inception 4a
    inception_4a_3x3 = utils.conv2d_bn(inception_3c,
                                       layer='inception_4a_3x3',
                                       cv1_out=96,
                                       cv1_filter=(1, 1),
                                       cv2_out=192,
                                       cv2_filter=(3, 3),
                                       cv2_strides=(1, 1),
                                       padding=(1, 1))
    inception_4a_5x5 = utils.conv2d_bn(inception_3c,
                                       layer='inception_4a_5x5',
                                       cv1_out=32,
                                       cv1_filter=(1, 1),
                                       cv2_out=64,
                                       cv2_filter=(5, 5),
                                       cv2_strides=(1, 1),
                                       padding=(2, 2))

    # Inception 4e
    inception_4e_3x3 = utils.conv2d_bn(inception_4a,
                                       layer='inception_4e_3x3',
                                       cv1_out=160,
                                       cv1_filter=(1, 1),
                                       cv2_out=256,
                                       cv2_filter=(3, 3),
                                       cv2_strides=(2, 2),
                                       padding=(1, 1))
    inception_4e_5x5 = utils.conv2d_bn(inception_4a,
                                       layer='inception_4e_5x5',
                                       cv1_out=64,
                                       cv1_filter=(1, 1),
                                       cv2_out=128,
                                       cv2_filter=(5, 5),
                                       cv2_strides=(2, 2),
                                       padding=(2, 2))
    inception_4e_pool = MaxPooling2D(pool_size=3, strides=2)(inception_4a)
    inception_4e_pool = ZeroPadding2D(padding=((0, 1), (0, 1)))(inception_4e_pool)

    # Inception 5a
    inception_5a_3x3 = utils.conv2d_bn(inception_4e,
                                       layer='inception_5a_3x3',
                                       cv1_out=96,
                                       cv1_filter=(1, 1),
                                       cv2_out=384,
                                       cv2_filter=(3, 3),
                                       cv2_strides=(1, 1),
                                       padding=(1, 1))

    # Inception 5b
    inception_5b_3x3 = utils.conv2d_bn(inception_5a,
                                       layer='inception_5b_3x3',
                                       cv1_out=96,
                                       cv1_filter=(1, 1),
                                       cv2_out=384,
                                       cv2_filter=(3, 3),
                                       cv2_strides=(1, 1),
                                       padding=(1, 1))
    inception_5b_pool = MaxPooling2D(pool_size=3, strides=2)(inception_5a)
    inception_5b_pool = utils.conv2d_bn(inception_5b_pool,
                                        layer='inception_5b_pool',
                                        cv1_out=96,
                                        cv1_filter=(1, 1))
    inception_5b_pool = ZeroPadding2D(padding=(1, 1))(inception_5b_pool)
    inception_5b_1x1 = utils.conv2d_bn(inception_5a,
                                       layer='inception_5b_1x1',
                                       cv1_out=256,
                                       cv1_filter=(1, 1))
    inception_5b = concatenate([inception_5b_3x3, inception_5b_pool,
                                inception_5b_1x1], axis=3)

    # The final average pooling, dense embedding layer and Model(...) return
    # statement are also omitted in this excerpt.
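The excerpt above leans on a helper, utils.conv2d_bn, that bundles the repeated convolution, batch normalization, ReLU pattern. The following is a plausible minimal reading of that helper under the Keras functional API, consistent with how the excerpt calls it; it is a sketch, not the project's exact implementation.

from keras.layers import Conv2D, BatchNormalization, Activation, ZeroPadding2D

def conv2d_bn(x, layer=None, cv1_out=None, cv1_filter=(1, 1), cv1_strides=(1, 1),
              cv2_out=None, cv2_filter=(3, 3), cv2_strides=(1, 1), padding=None):
    # First 1x1 (pointwise) convolution with batch norm and ReLU
    tensor = Conv2D(cv1_out, cv1_filter, strides=cv1_strides,
                    name=layer + '_conv1')(x)
    tensor = BatchNormalization(axis=3, epsilon=0.00001,
                                name=layer + '_bn1')(tensor)
    tensor = Activation('relu')(tensor)
    if cv2_out is None:
        return tensor
    # Optional second, spatially padded convolution (e.g. 3x3 or 5x5)
    tensor = ZeroPadding2D(padding=padding)(tensor)
    tensor = Conv2D(cv2_out, cv2_filter, strides=cv2_strides,
                    name=layer + '_conv2')(tensor)
    tensor = BatchNormalization(axis=3, epsilon=0.00001,
                                name=layer + '_bn2')(tensor)
    tensor = Activation('relu')(tensor)
    return tensor

Written this way, each utils.conv2d_bn call mirrors the pattern spelled out explicitly in the inception_3a and inception_3b blocks above.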