Please use this identifier to cite or link to this item: http://dspace.unimap.edu.my:80/xmlui/handle/123456789/34193
Full metadata record
DC Field	Value	Language
dc.contributor.author	Kamarulzaman, Kamarudin	-
dc.contributor.author	Syed Muhammad Mamduh, Syed Zakaria	-
dc.contributor.author	Ali Yeon, Md Shakaff, Prof. Dr.	-
dc.contributor.author	Shaharil, Mad Saad	-
dc.contributor.author	Ammar, Zakaria	-
dc.contributor.author	Latifah Munirah, Kamarudin, Dr.	-
dc.contributor.author	Abu Hassan, Abdullah, Dr.	-
dc.date.accessioned	2014-04-30T06:47:49Z	-
dc.date.available	2014-04-30T06:47:49Z	-
dc.date.issued	2013-03	-
dc.identifier.citation	p. 247-251	en_US
dc.identifier.isbn	978-146735609-1	-
dc.identifier.uri	http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6530050	-
dc.identifier.uri	http://dspace.unimap.edu.my:80/dspace/handle/123456789/34193	-
dc.description	Proceeding of The 9th International Colloquium on Signal Processing and its Applications 2013 (CSPA 2013), held in Kuala Lumpur, Malaysia, from 8 to 10 March 2013	en_US
dc.description.abstract	Mobile robotics is strongly linked to localization and mapping, especially for navigation purposes. A robot needs a sensor to see objects around it, avoid them, and map the surrounding area. The use of 1D and 2D proximity sensors such as ultrasonic sensors, sonar and laser range finders for area mapping is believed to be less effective, since they do not provide information in the Y or Z (horizontal and vertical) directions. The robot may therefore miss an object because of its shape and position, increasing the risk of collision and producing an inaccurate map. In this paper, a 3D visual device, namely the Microsoft Kinect, was used to perform area mapping. The 3D depth data from the device's depth sensor was retrieved and converted into a 2D map using the presented method. A Graphical User Interface (GUI) was also implemented on the base station to display the map in real time. The method successfully mapped objects that could be missed by a 1D or 2D sensor. The convincing results suggest that the Kinect is suitable for indoor SLAM applications, provided that the device's limitations are addressed.	en_US
dc.language.isoenen_US
dc.publisherInstitute of Electrical and Electronics Engineers (IEEE)en_US
dc.relation.ispartofseriesProceeding of The 9th International Colloquium on Signal Processing and its Applications 2013 (CSPA 2013);-
dc.subjectImage processingen_US
dc.subjectIndoor SLAMen_US
dc.subjectMicrosoft Kinecten_US
dc.subjectNavigationen_US
dc.subjectRoboticsen_US
dc.titleMethod to convert Kinect's 3D depth data to a 2D map for indoor SLAMen_US
dc.typeWorking Paperen_US
dc.contributor.urlarul.unimap@gmail.comen_US
dc.contributor.urlaliyeon@unimap.edu.myen_US
dc.contributor.urllatifahmunirah@unimap.edu.myen_US
dc.contributor.urlabuhassan@unimap.edu.my-
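
The abstract above describes retrieving the Kinect's 3D depth frames and collapsing them into a 2D map for indoor SLAM, but this record does not reproduce the paper's method. What follows is only a minimal sketch, in Python with NumPy, of one common way such a conversion can be done: back-project each depth pixel with assumed camera intrinsics, keep points within a height band around the sensor, and mark the corresponding cells of a 2D occupancy grid. The intrinsics (FX, FY, CX, CY), the grid resolution, the height band and the function depth_to_2d_map are illustrative assumptions, not values or code taken from the paper.

# Hypothetical sketch of a depth-image-to-2D-map conversion; not the paper's method.
import numpy as np

# Assumed Kinect depth-camera intrinsics (pixels); real values come from calibration.
FX, FY = 585.0, 585.0
CX, CY = 319.5, 239.5

GRID_SIZE = 200               # 200 x 200 cells
CELL_M = 0.05                 # 5 cm per cell -> 10 m x 10 m map
HEIGHT_BAND_M = (-0.3, 0.5)   # keep points 0.3 m below to 0.5 m above the sensor

def depth_to_2d_map(depth_mm: np.ndarray) -> np.ndarray:
    """Project an (H, W) depth image in millimetres onto a 2D occupancy grid.

    Cells are 1 where at least one depth point within the height band falls
    inside them, else 0. The sensor sits at the centre of the grid's bottom edge.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0           # forward distance (m)
    valid = z > 0                                       # Kinect reports 0 for no reading

    # Pinhole back-projection: x is lateral offset, y is height above the sensor.
    x = (u - CX) * z / FX
    y = (CY - v) * z / FY

    # Keep only points whose height lies within the band of interest.
    keep = valid & (y >= HEIGHT_BAND_M[0]) & (y <= HEIGHT_BAND_M[1])

    # Map (x, z) onto grid indices; sensor at column centre, bottom row.
    col = np.round(x[keep] / CELL_M).astype(int) + GRID_SIZE // 2
    row = GRID_SIZE - 1 - np.round(z[keep] / CELL_M).astype(int)
    inside = (col >= 0) & (col < GRID_SIZE) & (row >= 0) & (row < GRID_SIZE)

    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
    grid[row[inside], col[inside]] = 1                  # mark occupied cells
    return grid

if __name__ == "__main__":
    fake_frame = np.full((480, 640), 2000, dtype=np.uint16)  # flat wall 2 m ahead
    occupancy = depth_to_2d_map(fake_frame)
    print("occupied cells:", int(occupancy.sum()))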
Appears in Collections:
Latifah Munirah Kamarudin, Associate Professor Dr.
Conference Papers
Ali Yeon Md Shakaff, Dato' Prof. Dr.
Abu Hassan Abdullah, Associate Prof. Ir. Ts. Dr.

Files in This Item:
File	Description	Size	Format
Method to convert Kinect's 3D depth data to a 2D map for indoor SLAM.pdf	Access is limited to UniMAP community	556.05 kB	Adobe PDF


Items in UniMAP Library Digital Repository are protected by copyright, with all rights reserved, unless otherwise indicated.