
I.                   INTRODUCTION

 

A sign language (also signed language or simply signing) is a language which, instead of acoustically conveyed sound patterns, uses manual communication and body language to convey meaning. This can involve simultaneously combining hand shapes, orientation and movement of the hands, arms or body, and facial expressions to fluidly express a speaker’s thoughts. Sign languages share many similarities with spoken languages (sometimes called “oral languages”, which depend primarily on sound), which is why linguists consider both to be natural languages, but there are also some significant differences between signed and spoken languages.

The written history of sign language began in the 17th century in Spain. In 1620, Juan Pablo Bonet published Reducción de las letras y arte para enseñar a hablar a los mudos (‘Reduction of letters and art for teaching mute people to speak’) in Madrid. It is considered the first modern treatise on phonetics and speech therapy, setting out a method of oral education for deaf children by means of manual signs, in the form of a manual alphabet, to improve communication among and with deaf people. Wherever communities of deaf people exist, sign languages develop. Signing is also done by persons who can hear but cannot physically speak. While they utilize space for grammar in a way that spoken languages do not, sign languages exhibit the same linguistic properties and use the same language faculty as spoken languages. Hundreds of sign languages are in use around the world and are at the core of local deaf cultures. Some sign languages have obtained some form of legal recognition.

American Sign Language (ASL) (National Institute on Deafness & Other Communication Disorders, 2005) is a complete language that employs signs made with the hands together with facial expressions and postures of the body. ASL is the fourth most used language in the United States, behind only English, Spanish and Italian. ASL is a visual language, meaning it is not expressed through sound but rather through combining hand shapes, movement of the hands and arms, and facial expressions. Facial expressions are extremely important in signing. The sign language translator described here is designed and implemented for good efficiency as an improved version of earlier devices; it requires sensor technology, real-time operation, instant responses and accelerometers.


 

II.                OBJECTIVE

 

The goal of this project is to design a useful and fully functional real-world product that efficiently translates finger movements, that is, sign language, into readable text. To do so, sensor technology is used: sensors are placed on the fingers, and their movements are translated into text and displayed.

 

III.             BACKGROUND

 

American Sign Language (ASL) is a visual language based on hand gestures. It has been well developed by the deaf community over the past centuries and is the fourth most used language in the United States today. In recent years, research has progressed steadily on the use of computers to recognize and render sign language. There have been significant projects in the field, beginning with finger-spelling hands such as “Ralph” (robotics), Cyber-Gloves (virtual-reality sensors that capture isolated and continuous signs), camera-based projects such as the Copycat interactive American Sign Language game (computer vision), and sign recognition software (hidden Markov modeling and neural network systems). There are also avatars such as “Tessa” (Text and Sign Support Assistant; three-dimensional imaging) and spoken-language-to-sign-language translation systems such as Poland’s project “THETOS” (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing). The application of this research to education is also being explored. The “ICICLE” (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and generates tailored lessons and recommendations.

 

IV.             DESCRIPTION OF ASL

 

ASL also has its own grammar, which is different from that of other languages such as English and Swedish, and it consists of approximately 6000 gestures for common words and proper nouns. Finger spelling is used to communicate unclear words or proper nouns; it uses one hand and 26 gestures to communicate the 26 letters of the alphabet. Using image processing and a neural network, a system can interpret the ASL alphabet.

Gesture recognition systems fall into two categories. The first category relies on electromechanical devices that measure gesture parameters such as the hand’s position, angle, and the location of the fingertips. Systems that use such devices are called glove-based systems. A major problem with such systems is that they force the signer to wear cumbersome and inconvenient devices, so the way the user interacts with the system is complicated and less natural. The second category uses machine vision and image processing techniques to create vision-based hand gesture recognition systems. Vision-based gesture recognition systems are further divided into two categories. The first relies on specially designed gloves with visual markers, called “visual-based gesture with glove-markers (VBGwGM)”, that help in determining hand postures; however, gloves and markers do not provide the naturalness required in human-computer interaction systems, and if colored gloves are used, the processing complexity increases. The second, an alternative to the glove-marker approach, can be called “pure visual-based gesture (PVBG)”, meaning visual-based gesture recognition without glove markers.

 

V.                DESIGN & IMPLEMENTATION

 

The sign language translator works in real time. All 26 letters can be successfully recognized by detecting the fingers’ positions. Hierarchical detection is fast and stable for most of the letters, but confusion exists among the following groups of letters:

‘U’ and ‘E’

‘V’ and ‘W’

‘G’, ‘Q’, and ‘L’

The ambiguity exists mainly for two reasons. The flex sensors are too sensitive, so they are sometimes activated when other parts of the hand are moving. Also, some flex sensors are not sewn onto the glove in their ideal positions and thus cannot be activated for certain letters. By signing carefully and lengthening the debounce time, the program is able to recognize these ambiguous letters with at most one or two misinterpretations. The fastest rate we can achieve is about 1 to 2 letters per second.

The solder board on the Detection Unit is isolated from the user’s arm by the second solder board, leaving no chance of the user being shocked by that circuit. The Detection Unit does transmit on an unregulated band, but it sends a sync and address byte before transmitting any data. The address byte was randomly generated, and as long as it is not the same as anybody else’s address, and others also filter based on the address, there should not be any interference with other designs.

Anybody with the ability to see can pick up the glove and learn how to use it. It does take some learning, about fifteen minutes, to manipulate the glove so that the easily confused letters are translated correctly. Once this initial learning period has passed, however, the results are fairly accurate.
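
To make this concrete, the following is a minimal C sketch of how hierarchical, debounced letter detection of this kind could be structured. The bend threshold, the debounce interval and the two example letter patterns are illustrative assumptions, not the project’s actual calibration values.

/* Minimal sketch of hierarchical letter detection with debouncing.
 * The BENT threshold, the debounce interval and the example letter
 * patterns are hypothetical placeholders, not calibrated values. */
#include <stdint.h>

#define NUM_SENSORS    5
#define BENT_THRESHOLD 600   /* assumed 10-bit ADC count for a bent finger */
#define DEBOUNCE_TICKS 30    /* assumed number of stable samples required  */

static char classify(const uint16_t flex[NUM_SENSORS])
{
    uint8_t bent = 0;
    for (uint8_t i = 0; i < NUM_SENSORS; i++)
        if (flex[i] > BENT_THRESHOLD)
            bent |= (uint8_t)(1u << i);     /* one bit per finger */

    /* Hierarchical lookup: coarse split on which fingers are bent,
     * then finer checks.  Only a few illustrative cases are shown. */
    switch (bent) {
    case 0x00: return 'B';                  /* all fingers straight (example) */
    case 0x1F: return 'A';                  /* all fingers bent (example)     */
    default:   return '?';                  /* ambiguous: 'U'/'E', 'V'/'W'... */
    }
}

char debounced_letter(const uint16_t flex[NUM_SENSORS])
{
    static char     last  = '?';
    static uint16_t count = 0;

    char now = classify(flex);
    if (now == last) {
        if (count < DEBOUNCE_TICKS) count++;
    } else {
        last  = now;
        count = 0;
    }
    /* Lengthening DEBOUNCE_TICKS trades speed (about 1-2 letters per
     * second) against misreads of the ambiguous letter groups. */
    return (count >= DEBOUNCE_TICKS) ? last : 0;
}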

Figure-1 shows the basic circuit diagram of the detection unit connected to the glove circuit. The input from each flex sensor is fed into the microcontroller after passing through an operational amplifier. The five flex sensors are connected to pins RA0 to RA4 of the microcontroller.
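
The sensor acquisition can be summarized by a short C sketch. Here adc_read() is a hypothetical stand-in for the target microcontroller’s ADC routine, and the channel-to-pin mapping (RA0 to RA4, channels 0 to 4) is assumed from the description above.

/* Sketch of sampling the five flex sensors.  adc_read() stands in for
 * the target microcontroller's ADC routine; it is a placeholder, not a
 * library call from the original project. */
#include <stdint.h>

#define NUM_SENSORS 5

extern uint16_t adc_read(uint8_t channel);   /* hypothetical ADC helper */

void read_flex_sensors(uint16_t flex[NUM_SENSORS])
{
    /* Each flex sensor is buffered by an op-amp and fed to one analog
     * pin (assumed RA0..RA4, i.e. ADC channels 0..4). */
    for (uint8_t ch = 0; ch < NUM_SENSORS; ch++)
        flex[ch] = adc_read(ch);
}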

The LCD display is connected to the microcontroller, and the RF transceiver is soldered to the microcontroller chip. As input arrives from the sensors, the microcontroller processes the data and, as programmed, sends the output to the LCD display and to the RF transceiver.
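
A compact sketch of this main loop is given below. lcd_putc() and rf_send_byte() are hypothetical drivers, and the sync and address byte values are placeholders standing in for the randomly generated address mentioned earlier.

/* Sketch of the Detection Unit main loop: sample the sensors, classify
 * the letter, show it on the LCD and send it over the RF link.  The
 * driver functions and byte values are assumptions for illustration. */
#include <stdint.h>

#define SYNC_BYTE 0xAA   /* assumed framing byte                     */
#define ADDR_BYTE 0x5C   /* assumed randomly generated address byte  */

extern void read_flex_sensors(uint16_t flex[5]);
extern char debounced_letter(const uint16_t flex[5]);
extern void lcd_putc(char c);           /* hypothetical LCD driver */
extern void rf_send_byte(uint8_t b);    /* hypothetical RF driver  */

void detection_unit_loop(void)
{
    uint16_t flex[5];

    for (;;) {
        read_flex_sensors(flex);
        char letter = debounced_letter(flex);
        if (letter) {
            lcd_putc(letter);               /* local display               */
            rf_send_byte(SYNC_BYTE);        /* frame: sync byte first...   */
            rf_send_byte(ADDR_BYTE);        /* ...then address, so other   */
            rf_send_byte((uint8_t)letter);  /* receivers can filter it out */
        }
    }
}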

A sign language recognition apparatus and method is provided for translating hand gestures into speech or written text. The apparatus includes a number of sensors on the hand to measure dynamic and static gestures. The sensors are connected to a microcontroller that searches a library of gestures and generates output signals, which can then be used to produce a synthesized voice or written text. The sensors include flex sensors on the fingers and thumb; they transmit their data to the microcontroller, which determines the shape, position and orientation of the hand relative to the body of the user.
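
The gesture-library search described above could be sketched in C as a nearest-template lookup over stored sensor patterns. The template format, the example entries and the squared-distance metric are assumptions made for illustration only.

/* Sketch of a gesture-library lookup: the measured sensor data is
 * matched against stored templates and mapped to text.  Entries and
 * the distance metric are illustrative assumptions. */
#include <stdint.h>
#include <stddef.h>

#define NUM_SENSORS 5

struct gesture {
    uint16_t    template_vals[NUM_SENSORS];  /* stored sensor pattern */
    const char *text;                        /* word or letter        */
};

static const struct gesture library[] = {
    { { 100, 110,  95, 105, 100 }, "A" },    /* example entries only  */
    { { 800, 790, 810, 805, 795 }, "B" },
};

const char *lookup_gesture(const uint16_t sample[NUM_SENSORS])
{
    const char *best      = NULL;
    uint32_t    best_dist = UINT32_MAX;

    for (size_t g = 0; g < sizeof library / sizeof library[0]; g++) {
        uint32_t dist = 0;
        for (int i = 0; i < NUM_SENSORS; i++) {
            int32_t d = (int32_t)sample[i] - library[g].template_vals[i];
            dist += (uint32_t)(d * d);       /* squared distance      */
        }
        if (dist < best_dist) {
            best_dist = dist;
            best = library[g].text;          /* closest stored gesture */
        }
    }
    return best;   /* text to display or pass to a speech synthesizer */
}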

Two modes of the ASL Translator

 

1.      Practice Mode:

 

After all the letters and words are programmed into the microcontroller, it goes into PRACTICE mode.

Fig-2: Block Diagram of Practice Mode

 

At this point, the microcontroller can be removed from the computer, and the unit can be taken anywhere. The user can then start practicing positions and checking the output on the LCD display. As the user adjusts his or her fingers to form the sign for a letter, that letter is displayed on the LCD screen without any delay. Figure-2 shows the block diagram of the practice mode, where the output from the glove circuit is processed by the MCU and displayed on the LCD.
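
A minimal sketch of the PRACTICE-mode loop, assuming the same hypothetical sensor and LCD helpers used above, is shown below: the glove is read continuously and the recognized letter is echoed to the LCD immediately.

/* Sketch of PRACTICE mode as in Fig-2: the MCU runs standalone, reads
 * the glove and echoes the recognized letter to the LCD at once.
 * lcd_clear() and lcd_putc() are hypothetical display helpers. */
#include <stdint.h>

extern void read_flex_sensors(uint16_t flex[5]);
extern char debounced_letter(const uint16_t flex[5]);
extern void lcd_clear(void);            /* hypothetical LCD driver */
extern void lcd_putc(char c);

void practice_mode(void)
{
    uint16_t flex[5];
    char shown = 0;

    for (;;) {
        read_flex_sensors(flex);
        char letter = debounced_letter(flex);
        if (letter && letter != shown) {
            lcd_clear();
            lcd_putc(letter);           /* immediate feedback while practicing */
            shown = letter;
        }
    }
}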