Language is everywhere, and communication is an integral part of advancement in any field. For a long time, people with hearing and speaking impairments have had difficulty interacting with society. Sign languages are the means by which they communicate a thought or an idea. Their main setback is that most people do not learn these languages, so communicating with a stranger becomes a challenge unless an interpreter is present.
This problem can be addressed using Artificial Intelligence, which can help sign language users communicate through gesture recognition. Using computer vision techniques, gestures are translated into both text and speech. Conversely, sentences spoken by a person are converted to text using STT (Speech to Text) for sign language users to read.
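As a concrete illustration of the STT step, the sketch below captures a single utterance from a microphone and returns its transcription. It relies on the third-party speech_recognition package and Google's free Web Speech API; both are assumptions chosen for brevity, since the text does not name a specific STT engine.

```python
# A minimal STT sketch using the third-party `speech_recognition` package
# (pip install SpeechRecognition). The engine choice is an assumption; any
# speech-to-text backend would serve the same purpose.
import speech_recognition as sr

def transcribe_from_microphone() -> str:
    """Capture one utterance from the default microphone and return its text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
        audio = recognizer.listen(source)            # record until a pause is detected
    try:
        return recognizer.recognize_google(audio)    # free Google Web Speech API
    except sr.UnknownValueError:
        return ""                                    # speech was unintelligible

if __name__ == "__main__":
    print(transcribe_from_microphone())
```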
Sign language translation comes with its own challenges. To translate accurately, one must account for facial expressions, head tilting, shoulder raising, mouthing, and other signals beyond hand signs, all of which contribute to meaning.
A complete sign language translator would therefore need three sub-domains of computer vision (a combined sketch follows this list):
∙ Detecting body movement and position
∙ Analyzing facial expressions
∙ Detecting hand and finger shapes
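One way to cover all three sub-domains in a single pass is a holistic landmark detector. The sketch below uses MediaPipe Holistic, which returns body pose, face, and hand landmarks per frame; the choice of MediaPipe is an assumption made for illustration, not a method mandated by this section.

```python
# A hedged sketch showing how one library, MediaPipe Holistic, can cover all
# three sub-domains (body pose, facial expressions, hand/finger shapes) in a
# single pass. MediaPipe is an illustrative assumption, not a fixed choice.
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_landmarks(frame_bgr):
    """Return pose, face, and hand landmarks for one frame (None if not found)."""
    with mp_holistic.Holistic(static_image_mode=True) as holistic:
        results = holistic.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    return (
        results.pose_landmarks,        # body movement and position
        results.face_landmarks,        # facial expressions
        results.left_hand_landmarks,   # hand and finger shapes (left)
        results.right_hand_landmarks,  # hand and finger shapes (right)
    )

if __name__ == "__main__":
    frame = cv2.imread("signer.jpg")  # hypothetical input image of a signer
    pose, face, lhand, rhand = extract_landmarks(frame)
    print("pose detected:", pose is not None)
```

For live video, the Holistic object would typically be created once with static_image_mode=False and reused across frames, so the detector can track landmarks instead of re-detecting them each time.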