Detection of Human Emotions in An Image Using CNN
Facial emotion detection applications are spread across fields such as medicine, e-learning, marketing, monitoring, entertainment, law, and counselling: determining the medical state of a person, gauging feelings and comfort level during treatment [7], adjusting the teaching technique in response to a learner's emotion, an ATM withholding money when the person withdrawing it appears scared, prioritizing angry calls in call centres, and recognizing mood to anticipate needs and purchasing decisions.

If the original data is used directly for emotion detection, it demands a great deal of computational power because of the large number of input parameters; a model is considered robust only when it keeps this cost low. To overcome this problem the data is pre-processed. Data preprocessing includes:

a) Face detection
b) Normalization
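The two preprocessing steps can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes the face has already been located and that the crop is the 48 x 48 grayscale size the network expects; the function and variable names are hypothetical.

```python
import numpy as np

def preprocess_face(gray_img, box):
    """Step a) crop the detected face, step b) normalize it for the CNN.

    `box` is an (x, y, w, h) face rectangle such as a Haar cascade returns;
    the crop is scaled to [0, 1] so the network sees a bounded input range.
    Resizing to exactly 48 x 48 (e.g. with cv2.resize) is elided here: the
    stand-in crop below is already that size.
    """
    x, y, w, h = box
    face = gray_img[y:y + h, x:x + w].astype(np.float32)
    face = face / 255.0                # normalization to [0, 1]
    return face.reshape(h, w, 1)       # add the channel axis a CNN expects

frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)  # fake camera frame
sample = preprocess_face(frame, (30, 20, 48, 48))
print(sample.shape)   # (48, 48, 1)
```

In practice the (x, y, w, h) rectangle would come from a face detector such as OpenCV's `cv2.CascadeClassifier(...).detectMultiScale` run on the grayscale frame.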
© 2020, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 6253
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 04 | Apr 2020 www.irjet.net p-ISSN: 2395-0072
The Haar cascade classifier is trained using positive and negative images. Its main features are the edge, line, four-rectangle, and diagonal features shown in Fig. 1. Important facial features are extracted from a large number of Haar-like features. The Haar classifier is highly efficient, which is why it is so widely used.

The results are shown in Fig. 3. Gray-level equalization increases the contrast of the image and makes details clearer; the resulting image is well suited to facial feature extraction.
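Gray-level equalization admits a compact sketch. The NumPy version below is illustrative only; OpenCV's `cv2.equalizeHist` applies the same classic cumulative-histogram mapping.

```python
import numpy as np

def equalize_gray(img):
    """Gray-level (histogram) equalization of an 8-bit grayscale image.

    The cumulative histogram is rescaled so the occupied intensity range
    is stretched across 0-255, raising contrast and making facial details
    clearer before feature extraction.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first occupied gray level
    denom = max(img.size - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / denom), 0, 255)
    return lut.astype(np.uint8)[img]

# A low-contrast ramp using only levels 100-110 stretches to the full range.
low = ((np.arange(48 * 48) % 11) + 100).astype(np.uint8).reshape(48, 48)
eq = equalize_gray(low)
print(int(low.max()) - int(low.min()), int(eq.max()) - int(eq.min()))  # 10 255
```

Note how an image spanning only 11 gray levels comes out spanning all 256, which is exactly the contrast boost the paragraph above describes.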
The nature of a deep learning method is to build neural networks that learn features. The rules are developed at the end of training, as shown in Figure 5; the rules are nothing but the weights. In the training phase the network is initialized with random weights, a training pattern is fed in, and the obtained output is compared to the target output. The weights are adjusted based on the error. One pass through all the training patterns is called an epoch, and the process is repeated epoch after epoch.

A. CONVOLUTION LAYER

The input layer is a two-dimensional matrix composed of image pixels: a grayscale image as a 48 x 48 pixel matrix. The model presented in this paper uses four convolution layers. Each layer holds several feature maps, and every feature map is connected to the maps of the previous layer. Convolution layer C1 uses 64 convolution kernels of size 3 x 3; layers C2 and C3 use 128 kernels. The softmax layer contains 7 neurons, and the output is classified among the seven emotions. The layers are shown in Figure 9.
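As an illustration of what one such kernel computes, here is a single 3 x 3 convolution in NumPy. This is a sketch only: the paper does not state padding or stride, so "valid" borders and stride 1 are assumed; a layer like C1 simply repeats this for each of its 64 learned kernels, producing 64 feature maps.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """One feature map from one convolution kernel (no padding, stride 1).

    On a 48 x 48 input a 3 x 3 kernel yields a 46 x 46 feature map with
    'valid' borders; each output cell is the weighted sum of the 3 x 3
    window of pixels under the kernel.
    """
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.rand(48, 48).astype(np.float32)        # normalized gray input
edge_kernel = np.array([[-1, 0, 1],                    # hand-written vertical-
                        [-1, 0, 1],                    # edge detector; real
                        [-1, 0, 1]], dtype=np.float32) # kernels are learned
fmap = conv2d_valid(img, edge_kernel)
print(fmap.shape)   # (46, 46)
```

In the trained network the kernel weights are not hand-written like `edge_kernel` here; they are exactly the "rules" that training adjusts.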
The probability for every emotion is calculated, and the emotion whose probability is highest is declared the emotional state for the given input.

Happiness is the most desired human expression. The path of the image is passed to the trained model to recognize the emotions. The model discussed in this paper can detect a maximum number of faces in a group image, as shown in Fig. 11.
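The final classification step described above can be sketched as follows. The seven-label ordering is an assumption (the common FER-style ordering), not taken from the paper, and the raw scores are invented for illustration.

```python
import numpy as np

# Assumed label order for the 7 output neurons (not specified in the paper).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def predict_emotion(logits):
    """Softmax over the 7 output neurons, then pick the highest probability."""
    z = np.asarray(logits, dtype=np.float64)
    z = z - z.max()                          # stabilize the exponentials
    probs = np.exp(z) / np.exp(z).sum()
    return EMOTIONS[int(probs.argmax())], probs

scores = np.array([0.1, -1.2, 0.3, 2.5, 0.0, 0.4, 1.1])  # hypothetical raw outputs
label, probs = predict_emotion(scores)
print(label)   # happy - index 3 carries the largest score
```

Subtracting the maximum before exponentiating changes nothing mathematically (it cancels in the ratio) but prevents overflow for large scores.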
The model is robust to noise. We achieved a performance of detecting expressions with 80 percent accuracy. The proposed model detects the emotion of a person if the face is recognized more than 70% in an image.

Future work should attempt to develop the model to detect the emotion of a person even from half faces, to detect emotion in videos, and to recognize emotion dynamically with 3D technology.

REFERENCES

[1] R. M. Mehmood, R. Du, and H. J. Lee, "Optimal feature selection and deep learning ensembles method for emotion recognition from human brain
[2] T. Song, W. Zheng, C. Lu, Y. Zong, X. Zhang, and Z. Cui, "MPED: A multi-modal physiological emotion database for discrete emotion recognition," IEEE Access, vol. 7, pp. 12177-12191, 2019.
[3] E. Batbaatar, M. Li, and K. H. Ryu, "Semantic-emotion neural network for emotion recognition from text," IEEE Access, vol. 7, pp. 111866-111878, 2019.
[4] H. Meng, N. Bianchi-Berthouze, Y. Deng, J. Cheng, and J. P. Cosmas, "Time-delay neural network for continuous emotional dimension prediction from facial expression sequences," IEEE Trans. Cybern., vol. 46, no. 4, pp. 916-929, Apr. 2016.
[6] X. U. Feng and J.-P. Zhang, "Facial microexpression recognition: A survey," Acta Automatica Sinica, vol. 43, no. 3, pp. 333-348, 2017.
[7] M. S. Özerdem and H. Polat, "Emotion recognition based on EEG features in movie clips with channel selection," Brain Inf., vol. 4, no. 4, pp. 241-252, 2017.
[8] F. Vella, I. Infantino, and G. Scardino, "Person identification through entropy oriented mean shift clustering of human gaze patterns," Multimedia Tools Appl., vol. 76, no. 2, pp. 2289-2313, Jan. 2017.
[9] S. K. A. Kamarol, M. H. Jaward, H. Kälviäinen, J. Parkkinen, and R. Parthiban, "Joint facial expression recognition and intensity estimation based on weighted votes of image sequences," Pattern Recognit. Lett., vol. 92, pp. 25-32, Jun. 2017.
[10] J. Cai, Q. Chang, X.-L. Tang, C. Xue, and C. Wei, "Facial expression recognition method based on sparse batch normalization CNN," in Proc. 37th Chin. Control Conf. (CCC), Jul. 2018, pp. 9608-9613.
[11] M. Takalkar, M. Xu, Q. Wu, and Z. Chaczko, "A survey: Facial micro-expression recognition," Multimedia Tools Appl., vol. 77, no. 15, pp. 19301-19325, 2018.
[12] Magudeeswaran and J. F. Singh, "Contrast limited fuzzy adaptive histogram equalization for enhancement of brain images," Int. J. Imag. Syst. Technol., vol. 27, no. 1, pp. 98-103, 2017.
[14] F. Zhang, Q. Mao, X. Shen, Y. Zhan, and M. Dong, "Spatially coherent feature learning for pose-invariant facial expression recognition," ACM Trans. Multimedia Comput., Commun., Appl., vol. 14, no. 1s, Apr. 2018, Art. no. 27.
[15] H. Ma and T. Celik, "FER-Net: Facial expression recognition using densely connected convolutional network," Electron. Lett., vol. 55, no. 4, pp. 184-186, Feb. 2019.
[16] L. Wei, C. Tsangouri, F. Abtahi, and Z. Zhu, "A recursive framework for expression recognition: From Web images to deep models to game dataset," Mach. Vis. Appl., vol. 29, no. 3, pp. 489-502, 2018.
[17] S. Li and W. Deng, "Reliable crowdsourcing and deep locality-preserving learning for unconstrained facial expression recognition," IEEE Trans. Image Process., vol. 28, no. 1, pp. 356-370, Jan. 2018.
[18] A. Mehrabian, "Communication without words," Psychology Today, vol. 2, no. 4, pp. 53-56, 1968.
[19] R. W. Picard, Affective Computing. Cambridge, MA: MIT Press, 1997.
[20] D. Beymer, A. Shashua, and T. Poggio, Example Based Image Analysis and Synthesis, M.I.T. A.I. Memo No. 1431, 1993.
[21] I. A. Essa and A. Pentland, "A vision system for observing and extracting facial action parameters," Proc. IEEE CVPR, pp. 76-83, 1994.
[22] H. Li, P. Roivainen, and R. Forcheimer, "3D motion estimation in model-based facial image coding," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, pp. 545-555, 1993.
[23] K. Mase, "Recognition of facial expression from optical flow," IEICE Trans., vol. E74, pp. 3474-3483, 1991.
[24] K. Matsuno, C. Lee, and S. Tsuji, "Recognition of human facial expressions without feature extraction," Proc. ECCV, pp. 513-520, 1994.
[25] M. Rosenblum, Y. Yacoob, and L. S. Davis, "Human emotion recognition from motion using a radial basis function network architecture," IEEE Workshop Motion of Non-Rigid and Articulated Objects, Austin, Texas, pp. 43-49, Nov. 1994.
[26] D. Terzopoulos and K. Waters, "Analysis and synthesis of facial image sequences using physical and anatomical models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, pp. 569-579, 1993.
[27] Y. Yacoob and L. S. Davis, "Recognizing human facial expressions from long image sequences using optical flow," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 6, pp. 636-642, 1996.
[28] J. N. Bassili, "Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face," J. Personality and Social Psych., vol. 37, pp. 204