Synopsis Final Year
SYNOPSIS
Submitted to
Department of Computer Science and Engineering/Information Technology
1. Nature of Project:
Smith and Brown (2020) utilized CNNs for static gesture recognition but faced limitations in dynamic environments, emphasizing the need for hybrid models that can adapt to real-time conditions. Furthermore, Johnson and Lee (2021) focused on real-time recognition using hybrid models but lacked comprehensive datasets for robust training, highlighting the necessity of a diverse and extensive dataset. These studies underscore the critical need for a more adaptive, real-time system capable of generalizing across different user profiles and environmental contexts.
4. Objectives:
5. Evaluation: The system will be rigorously tested for accuracy, speed, and user satisfaction across a range of scenarios and environments. Performance metrics will be collected, and user feedback will inform further refinements and enhancements.
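The accuracy and speed metrics described above could be gathered with a small evaluation harness such as the sketch below. This is only an illustration: `evaluate`, `dummy_model`, and the sample data are hypothetical names invented here, with the toy classifier standing in for the trained recognizer.

```python
import time

def evaluate(model_fn, samples):
    """Run model_fn over (input, label) pairs; return accuracy and mean latency in seconds."""
    correct, latencies = 0, []
    for x, label in samples:
        start = time.perf_counter()
        pred = model_fn(x)                      # one inference call
        latencies.append(time.perf_counter() - start)
        correct += (pred == label)
    return correct / len(samples), sum(latencies) / len(latencies)

# Toy stand-in for a trained recognizer: maps a feature vector to a class id.
dummy_model = lambda x: 0 if sum(x) < 1.0 else 1
samples = [([0.2, 0.3], 0), ([0.9, 0.8], 1), ([0.1, 0.1], 0), ([0.7, 0.6], 1)]
acc, latency = evaluate(dummy_model, samples)
print(f"accuracy={acc:.2f}, mean_latency={latency * 1000:.3f} ms")
```

In the real system, `dummy_model` would be replaced by the trained model's predict call, and `samples` by held-out test recordings.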
6. Requirement – Software/Hardware Tools:
Software:
Libraries:
o OpenCV for image and video processing to facilitate feature extraction.
o TensorFlow/Keras for building and training deep learning models, enabling efficient handling of complex data.
o Flask or Django for developing a web-based user interface, ensuring accessibility across devices.
Hardware:
7. References:
Smith, J., & Brown, R. (2020). "Gesture Recognition Using Machine Learning." Journal of
Computer Vision, 12(3), 123-135.
Johnson, L., & Lee, K. (2021). "Real-time Sign Language Recognition." IEEE
Transactions on Human-Machine Systems, 45(4), 456-467.
Chen, Y. et al. (2023). "Deep Learning Approaches for Sign Language Recognition."
Journal of Neural Engineering, 20(1), 012001.