Sign Language Recognition using DenseNet Deep Learning Approach
In this project we have developed a deep learning based sign language recognition system. Platform: Matlab | Delivery: One Working Day | Support: Online Demo (2 Hours)
Sign Language Recognition using DenseNet Deep Learning Project
Sign language recognition has emerged as an important area of research in computer vision, since the community of speech and hearing-impaired people depends on sign language as its primary communication medium. Over the past decade, new techniques have continually been developed to build a communication bridge between hearing people and speech and hearing-impaired people. Sign gestures can be classified as static or dynamic; static gesture recognition is simpler than dynamic gesture recognition, but both are important to the community. Efficient sign language recognition techniques are studied and summarized in this work. Here we use a DenseNet neural network to recognize sign language, with the ASL dataset used for training.
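The defining feature of DenseNet is dense connectivity: each layer receives the concatenated feature maps of every layer before it, which encourages feature reuse and eases gradient flow. The project itself is delivered in Matlab; the following is only a toy NumPy sketch of a dense block (illustrative layer sizes and random weights, not the trained network):

```python
import numpy as np

def dense_block(x, num_layers=3, growth_rate=4, seed=0):
    """Toy dense block: each 'layer' is a random linear map plus ReLU,
    and its output is concatenated with all previous feature maps."""
    rng = np.random.default_rng(seed)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)   # dense connectivity
        w = rng.standard_normal((inp.shape[-1], growth_rate)) * 0.1
        out = np.maximum(inp @ w, 0.0)            # linear map + ReLU
        features.append(out)                      # reused by later layers
    return np.concatenate(features, axis=-1)

x = np.ones((2, 8))      # batch of 2 samples, 8 input features
y = dense_block(x)
print(y.shape)           # (2, 20): 8 input + 3 layers * growth rate 4
```

Note how the channel count grows by the growth rate at each layer; in the real DenseNet, the layers are convolutions with batch normalization rather than plain linear maps.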
Speech and hearing-impaired people are humans at the deepest psychological level. Many of them are never exposed to sign languages, and it is observed that learning to sign brings great psychological relief, as it lets them connect with others and express their love and emotions. About 5% of the world's population suffers from hearing loss. Speech and hearing-impaired people use sign language, with its different hand and body gestures, as their primary means of expressing thoughts and ideas to the people around them. Yet there are only about 250 certified sign language interpreters in India for a deaf population of around 7 million.

In this work, the design of a prototype assistive device for speech and hearing-impaired people is presented to reduce this communication gap. The device is portable and can hang over the neck. It allows the person to communicate with hand sign postures, recognizing different gesture-based signs. The device's controller processes images of gestures using various image processing techniques and deep learning models to recognize each sign, which is then converted into speech in real time by a text-to-speech module.

Hand gesture recognition provides an intelligent, natural, and convenient mode of human–computer interaction (HCI). Sign language recognition (SLR) and gesture-based control are its two major applications. SLR aims to interpret sign languages automatically by computer, helping the speech and hearing-impaired communicate conveniently with the hearing society. Since sign language is a highly structured and largely symbolic set of human gestures, SLR also serves as a good basis for the development of general gesture-based HCI.
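The device's flow described above is capture → recognize → speak. A minimal Python sketch of that pipeline is below; `classify_sign` and `speak` are hypothetical stand-ins (in the actual system, the trained DenseNet model and the text-to-speech module fill these roles), and the label set is illustrative:

```python
# Illustrative gesture-to-speech pipeline with stub components.
SIGN_LABELS = {0: "hello", 1: "thank you", 2: "yes", 3: "no"}

def classify_sign(image):
    """Stub recognizer: the real device runs the image through the CNN.
    Here we just bucket the image's mean pixel value into a class id."""
    return int(sum(image) / len(image)) % len(SIGN_LABELS)

def speak(text):
    """Stub for the text-to-speech module."""
    print(f"[TTS] {text}")

def process_frame(image):
    """One pass of the pipeline: recognize the sign, then speak it."""
    label_id = classify_sign(image)
    text = SIGN_LABELS[label_id]
    speak(text)
    return text

print(process_frame([1, 2, 3]))  # mean 2 -> class 2 -> "yes"
```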
Since speech and hearing-impaired people communicate with hand signs, hearing people often struggle to understand their language. There is therefore a need for systems that recognize the different signs and convey their meaning to hearing people.
- The proposed system uses a deep learning based CNN architecture.
- The system improves its accuracy through continued learning.
- Accuracy is improved in terms of both specificity and sensitivity.
- Accuracy is further improved by tuning the network.
- Training on a large dataset.
- Cloud-based web framework with a continuous learning approach.
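The sensitivity and specificity mentioned above come straight from a confusion matrix. A small self-contained sketch, assuming binary labels (1 = target sign, 0 = not) purely for illustration:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (recall) and specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0  # true negative rate
    return sensitivity, specificity

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.75)
```

For the multi-class sign recognition task, the same quantities are computed per class in a one-vs-rest fashion and averaged.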
For more Image Processing projects, click here
For Deep Learning projects, click here