Please use this identifier to cite or link to this item: http://dspace.unimap.edu.my:80/xmlui/handle/123456789/31948
Title: Design and development of phoneme based sign language recognition system for the hearing impaired
Authors: Rajkumar, Palaniappan
Keywords: Sign language recognition
Hand gesture
Gesture recognition
Artificial Neural Network (ANN)
Sign languages
Hearing impaired
Issue Date: 2012
Publisher: Universiti Malaysia Perlis (UniMAP)
Abstract: Sign language recognition is one of the most promising sub-fields of gesture recognition research. Sign languages are commonly developed for hearing-impaired communities, whose users can include interpreters, friends and families of hearing-impaired people, as well as people who are hard of hearing themselves. This thesis discusses the development of a phoneme-based sign language recognition system for the hearing impaired. Previous research on sign language recognition systems has concentrated on fingerspelling recognition or isolated word recognition. This research focuses on developing a sign language recognition system for recognizing 44 English phonemes. To represent the 44 English phonemes, as a first step, 11 different gestures were developed. By selecting suitable combinations of these 11 gestures for the right and left hands, 44 different gesture combinations were formulated. The signed data were collected from seven subjects using an ordinary web camera at a resolution of 640×480 pixels. The data are preprocessed and features are extracted from the segmented regions of the signed data. A newly proposed interleaving preprocessing algorithm used in developing the sign language recognition system is discussed in this thesis. An Artificial Neural Network (ANN) provides an alternative form of computing that attempts to mimic the functionality of the brain. The feature set is then fed to the neural network model to classify the phoneme sign. An audio system is installed to play the corresponding word, enabling communication between hearing people and the hearing-impaired community. Experimental results show that the proposed interleaving method yields better classification accuracy than the conventional method. The vertical interleaving method, using combined blur and affine moment invariant features with an Elman network, yields a maximum classification accuracy of 95.50%.
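The abstract names the pipeline steps (segmentation, vertical interleaving, moment-invariant features, ANN classification) without reproducing the algorithms. As a minimal sketch only: the snippet below assumes "vertical interleaving" means retaining alternate columns of the segmented binary hand mask, and it computes the first two Hu moment invariants as a simple stand-in for the blur and affine moment invariants used in the thesis. The function names and the interpretation of the preprocessing step are assumptions, not the author's actual implementation.

```python
import numpy as np

def vertical_interleave(img, keep="even"):
    # Hypothetical reading of the thesis's "vertical interleaving":
    # down-sample a 2-D mask by keeping every other column.
    start = 0 if keep == "even" else 1
    return img[:, start::2]

def hu_moments(img):
    # First two Hu moment invariants of a binary image -- a stand-in
    # for the blur/affine moment invariants described in the abstract.
    img = img.astype(float)
    m00 = img.sum()
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    cx = (xs * img).sum() / m00          # centroid x
    cy = (ys * img).sum() / m00          # centroid y
    def mu(p, q):                        # central moment
        return (((xs - cx) ** p) * ((ys - cy) ** q) * img).sum()
    def eta(p, q):                       # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

# Dummy 6x8 binary mask standing in for a segmented hand region.
mask = np.zeros((6, 8), dtype=np.uint8)
mask[1:5, 2:6] = 1
reduced = vertical_interleave(mask)   # half the columns remain
features = hu_moments(reduced)        # 2-element feature vector
```

In the thesis's pipeline, a feature vector like `features` (computed per frame from the preprocessed mask) would then be fed to the Elman network for phoneme classification.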
URI: http://dspace.unimap.edu.my:80/dspace/handle/123456789/31948
Appears in Collections:School of Mechatronic Engineering (Theses)

Files in This Item:
File           Description                                      Size      Format
Page 1-24.pdf  This item is protected by original copyright.    96.54 kB  Adobe PDF
Full text.pdf  Access is limited to UniMAP community.           1.68 MB   Adobe PDF


Items in UniMAP Library Digital Repository are protected by copyright, with all rights reserved, unless otherwise indicated.