Gesture recognition of everyday activities
Abstract
Multimodal recognition systems are becoming common interaction tools
in the fields of ubiquitous and wearable computing. Recent technological
developments in multimodal human-computer interaction have encouraged
the study and analysis of multimodality in human daily life activities.
This thesis explores the concept of multimodality, i.e. combined speech and
gesture recognition, in everyday life activities. It proposes an approach to
recognizing the goal of an activity by detecting and analyzing the sequences of
gestures, speech, objects, actions and locations manipulated by the users.
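As a purely illustrative sketch, not taken from the thesis, the goal-recognition
idea can be pictured as matching a stream of detected multimodal events against
simple goal templates; all names and templates below are hypothetical, and the
thesis itself relies on an ontology rather than hand-written rules:

    # Illustrative sketch only: infer an activity goal from detected
    # multimodal events (speech, gesture, object, location). The goal
    # templates and event values are hypothetical examples.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Event:
        modality: str   # "speech", "gesture", "object", or "location"
        value: str      # e.g. "knife", "chop", "stove"

    # Hypothetical templates: objects and actions that tend to occur
    # while pursuing each goal.
    GOAL_TEMPLATES = {
        "make_tea": {"kettle", "cup", "pour", "teabag"},
        "make_salad": {"knife", "bowl", "chop", "lettuce"},
    }

    def infer_goal(events):
        """Return the goal whose template best overlaps the observed events."""
        observed = {e.value for e in events}
        return max(GOAL_TEMPLATES, key=lambda g: len(GOAL_TEMPLATES[g] & observed))

    events = [
        Event("object", "knife"),   # e.g. from an RFID-tagged knife
        Event("gesture", "chop"),   # e.g. from video analysis
        Event("speech", "lettuce"),
    ]
    print(infer_goal(events))  # -> "make_salad"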
Domains such as cooking, which involve many similar and repeated
objects and actions, are a valuable and interesting area for studying
multimodality in everyday activities. In an experiment, gesture and speech
during a cooking activity were analysed in terms of object manipulation
and action sequences, using video analysis and RFID-tagged objects. The
results were then compared with multimodality in computerized
interaction.
This study demonstrates that multimodality is also used during cooking
activity. The combination of speech and gesture yields a sequence of actions
that can be used to determine the goal of the activity through a multimodal
ontology. It also demonstrates that sequences of actions, objects and
locations point towards new multimodalities in real-life activities.