Please use this identifier to cite or link to this item:
http://dspace.unimap.edu.my:80/xmlui/handle/123456789/31948
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Rajkumar, Palaniappan | - |
dc.date.accessioned | 2014-02-15T02:45:27Z | - |
dc.date.available | 2014-02-15T02:45:27Z | - |
dc.date.issued | 2012 | - |
dc.identifier.uri | http://dspace.unimap.edu.my:80/dspace/handle/123456789/31948 | - |
dc.description.abstract | Sign language recognition is one of the most promising sub-fields in gesture recognition research. Sign languages are commonly developed for hearing impaired communities, which can include interpreters, friends and families of hearing impaired people, as well as people who are hard of hearing themselves. This thesis discusses the development of a phoneme-based sign language recognition system for the hearing impaired. Previous research on sign language recognition systems has concentrated on fingerspelling recognition or isolated word recognition. This research focuses on developing a sign language recognition system for recognizing 44 English phonemes. To represent the 44 English phonemes, as a first step, 11 different gestures were developed. By selecting suitable combinations of these 11 gestures for the right and left hands, 44 different gesture combinations were formulated. The signed data were collected from seven subjects using an ordinary web camera at a resolution of 640×480 pixels. The data are preprocessed, and features are extracted from the segmented regions of the signed data. A newly proposed interleaving preprocessing algorithm used in developing the sign language recognition system is discussed in this thesis. The Artificial Neural Network (ANN) provides an alternative form of computing that attempts to mimic the functionality of the brain. The feature set is then fed to the neural network model to classify the phoneme sign. An audio system is installed to play the particular word, enabling communication between hearing people and the hearing impaired community. Experimental results show that the proposed interleaving method yields better classification accuracy than the conventional method. The vertical interleaving method, using combined blur and affine moment invariant features with an Elman network, yields the maximum classification accuracy of 95.50%. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Universiti Malaysia Perlis (UniMAP) | en_US |
dc.subject | Sign language recognition | en_US |
dc.subject | Hand gesture | en_US |
dc.subject | Gesture recognition | en_US |
dc.subject | Artificial Neural Network (ANN) | en_US |
dc.subject | Sign languages | en_US |
dc.subject | Hearing impaired | en_US |
dc.title | Design and development of phoneme based sign language recognition system for the hearing impaired | en_US |
dc.type | Thesis | en_US |
dc.publisher.department | School of Mechatronic Engineering | en_US |
Appears in Collections: | School of Mechatronic Engineering (Theses) |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Page 1-24.pdf | This item is protected by original copyright. | 96.54 kB | Adobe PDF | View/Open |
Full text.pdf | Access is limited to UniMAP community. | 1.68 MB | Adobe PDF | View/Open |
Items in UniMAP Library Digital Repository are protected by copyright, with all rights reserved, unless otherwise indicated.