iRepository at Perpustakaan UniMAP
Title: Combining spatial filtering and wavelet transform for classifying human emotions using EEG signals
Authors: Nagarajan, Ramachandran, Prof. Dr.; Sazali, Yaacob, Prof. Dr.
Keywords: Electroencephalogram (EEG); Emotion assessment; Surface Laplacian filtering; Wavelet transform; K-nearest neighbor (KNN); Linear discriminant analysis (LDA)
Issue Date: 2011
Publisher: Biomedical Engineering Society of the R.O.C.
Citation: Journal of Medical and Biological Engineering, vol. 31, no. 1, 2011, pp. 45-52
Abstract: In this paper, we present human emotion assessment using electroencephalogram (EEG) signals. A combination of surface Laplacian (SL) filtering, time-frequency analysis with the wavelet transform (WT), and linear classifiers is used to classify discrete emotions (happy, surprise, fear, disgust, and neutral). EEG signals were collected from 20 subjects through 62 active electrodes placed over the entire scalp according to the International 10-10 system. An audio-visual (video clip) induction protocol was designed to evoke the discrete emotions. The raw EEG signals were preprocessed with surface Laplacian filtering and decomposed into five EEG frequency bands (delta, theta, alpha, beta, and gamma) using the WT. Three wavelet functions, "db8", "sym8", and "coif5", were used to extract statistical features from the EEG signals for classifying the emotions. To evaluate the efficacy of emotion classification under different sets of EEG channels, we compared the classification accuracy of the original set of 62 channels with that of a reduced set of 24 channels. The statistical features were validated using 5-fold cross validation. K-nearest neighbor (KNN) outperformed linear discriminant analysis (LDA), offering maximum average classification rates of 83.04% on 62 channels and 79.17% on 24 channels. Finally, we present the average and individual classification accuracies of the two classifiers to justify the performance of our emotion recognition system.
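The pipeline summarized in the abstract (wavelet decomposition of an EEG signal, per-band statistical features, KNN with 5-fold cross validation) can be sketched as follows. This is a minimal illustration assuming PyWavelets and scikit-learn; the synthetic random data, the sampling rate, and the three statistical features per band are assumptions for the sketch, not the paper's exact protocol or feature set.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
N = 512  # assumed two-second window at 256 Hz (illustrative)

def band_features(signal, wavelet="db8", levels=5):
    """Decompose a 1-D EEG signal into 5 levels with the 'db8' wavelet and
    return simple statistical features (illustrative choice) per sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    feats = []
    for c in coeffs:  # one approximation + five detail coefficient arrays
        feats.extend([np.mean(np.abs(c)), np.std(c), np.mean(c**2)])
    return feats

# Synthetic stand-in for real EEG trials: 100 windows, 5 emotion labels.
X = np.array([band_features(rng.standard_normal(N)) for _ in range(100)])
y = rng.integers(0, 5, size=100)

# KNN evaluated with 5-fold cross validation, as in the abstract.
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f}")
```

With random features and labels the reported accuracy is only chance level; the point of the sketch is the structure: decompose, extract per-band features, then cross-validate the classifier.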
Description: Link to publisher's homepage at http://www.bmes.org.tw/
Appears in Collections: School of Mechatronic Engineering (Articles)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.