
dc.contributor.author: Rajkumar, Palaniappan
dc.date.accessioned: 2014-02-15T02:45:27Z
dc.date.available: 2014-02-15T02:45:27Z
dc.date.issued: 2012
dc.identifier.uri: http://dspace.unimap.edu.my:80/dspace/handle/123456789/31948
dc.description.abstract: Sign language recognition is one of the most promising sub-fields of gesture recognition research. Sign languages are developed for hearing-impaired communities, whose users include hard-of-hearing people themselves as well as interpreters, friends, and families of hearing-impaired people. This thesis discusses the development of a phoneme-based sign language recognition system for the hearing impaired. Previous research on sign language recognition systems has concentrated on fingerspelling recognition or isolated word recognition; this research focuses on developing a system that recognizes 44 English phonemes. As a first step, 11 distinct gestures were developed to represent the 44 phonemes: by selecting suitable combinations of these 11 gestures for the right and left hands, 44 different gesture combinations were formulated. The signed data were collected from seven subjects using an ordinary web camera at a resolution of 640×480 pixels. The data were preprocessed and features were extracted from the segmented regions of the signed data. A newly proposed interleaving preprocessing algorithm used in developing the system is discussed in this thesis. Artificial Neural Networks (ANNs) provide an alternative form of computing that attempts to mimic the functionality of the brain; the extracted feature set is fed to a neural network model to classify the signed phoneme. An audio system plays back the recognized word, enabling communication between hearing people and the hearing-impaired community. Experimental results show that the proposed interleaving method yields better classification accuracy than the conventional method. The vertical interleaving method, using combined blur and affine moment invariant features with an Elman network, yields a maximum classification accuracy of 95.50%.
dc.language.iso: en
dc.publisher: Universiti Malaysia Perlis (UniMAP)
dc.subject: Sign language recognition
dc.subject: Hand gesture
dc.subject: Gesture recognition
dc.subject: Artificial Neural Network (ANN)
dc.subject: Sign languages
dc.subject: Hearing impaired
dc.title: Design and development of phoneme based sign language recognition system for the hearing impaired
dc.type: Thesis
dc.publisher.department: School of Mechatronic Engineering
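
The abstract above names an Elman network as the classifier for per-frame moment-invariant features. Purely as a non-authoritative illustration of how such a simple recurrent network consumes a sequence of feature vectors and outputs one of the 44 phoneme classes, here is a minimal NumPy sketch. Only the class count (44) comes from the record; the layer sizes, feature dimensionality, sequence length, and the ElmanNetwork name are assumptions for illustration, not the thesis implementation.

    import numpy as np

    class ElmanNetwork:
        """Minimal Elman (simple recurrent) network: the hidden state at
        time t-1 is fed back as "context" input alongside the features at
        time t. All sizes are illustrative, not taken from the thesis."""

        def __init__(self, n_features, n_hidden, n_classes, seed=0):
            rng = np.random.default_rng(seed)
            # Input-to-hidden, context-to-hidden, and hidden-to-output weights.
            self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_features))
            self.W_ctx = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
            self.W_out = rng.normal(0.0, 0.1, (n_classes, n_hidden))
            self.b_h = np.zeros(n_hidden)
            self.b_o = np.zeros(n_classes)

        def forward(self, sequence):
            """sequence: (T, n_features) array of per-frame feature vectors,
            e.g. moment-invariant features from segmented hand regions."""
            h = np.zeros(self.b_h.shape[0])  # context units start at zero
            for x in sequence:
                # Context loop: previous hidden state re-enters with the input.
                h = np.tanh(self.W_in @ x + self.W_ctx @ h + self.b_h)
            logits = self.W_out @ h + self.b_o
            e = np.exp(logits - logits.max())
            return e / e.sum()  # softmax over the 44 phoneme classes

    # Hypothetical usage: 7 features per frame over 30 frames; only the
    # 44 phoneme classes are taken from the abstract.
    net = ElmanNetwork(n_features=7, n_hidden=20, n_classes=44)
    frames = np.random.default_rng(1).normal(size=(30, 7))
    probs = net.forward(frames)
    print(probs.argmax(), probs.max())

The defining Elman trait sketched here is the context loop: the previous hidden state is fed back alongside the current input, which is what would let such a classifier exploit temporal structure across the frames of a signed gesture rather than a single frame.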

