dc.contributor.author | Kamarulzaman, Kamarudin | |
dc.contributor.author | Syed Muhammad Mamduh, Syed Zakaria | |
dc.contributor.author | Ali Yeon, Md Shakaff, Prof. Dr. | |
dc.contributor.author | Shaharil, Mad Saad | |
dc.contributor.author | Ammar, Zakaria | |
dc.contributor.author | Latifah Munirah, Kamarudin, Dr. | |
dc.contributor.author | Abu Hassan, Abdullah, Dr. | |
dc.date.accessioned | 2014-04-30T06:47:49Z | |
dc.date.available | 2014-04-30T06:47:49Z | |
dc.date.issued | 2013-03 | |
dc.identifier.citation | pp. 247-251 | en_US |
dc.identifier.isbn | 978-146735609-1 | |
dc.identifier.uri | http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6530050 | |
dc.identifier.uri | http://dspace.unimap.edu.my:80/dspace/handle/123456789/34193 | |
dc.description | Proceedings of the 9th International Colloquium on Signal Processing and its Applications 2013 (CSPA 2013), held in Kuala Lumpur, Malaysia, 8-10 March 2013 | en_US |
dc.description.abstract | Mobile robotics is strongly linked to localization and mapping, especially for navigation purposes. A robot needs a sensor to perceive objects around it, avoid them, and map the surrounding area. The use of 1D and 2D proximity sensors such as ultrasonic sensors, sonar, and laser range finders for area mapping is believed to be less effective, since they provide no information in the Y or Z (horizontal and vertical) directions. The robot may therefore miss an object because of its shape or position, increasing the risk of collision and producing an inaccurate map. In this paper, a 3D visual device, the Microsoft Kinect, was used to perform area mapping. The 3D depth data from the device's depth sensor was retrieved and converted into a 2D map using the presented method. A Graphical User Interface (GUI) was also implemented on the base station to display the map in real time. The method successfully mapped objects that would potentially be missed by a 1D or 2D sensor. The convincing results presented in this paper suggest that the Kinect is suitable for indoor SLAM applications, provided that the device's limitations are addressed. | en_US |
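The abstract does not spell out the conversion procedure, so the following Python sketch is only an illustration of one straightforward way to turn a depth frame into a 2D map, not necessarily the method presented in the paper. It assumes a Kinect v1 depth image supplied as a NumPy array, commonly quoted approximate intrinsics (FX, FY, CX, CY), and a hypothetical helper name depth_to_2d_map; it back-projects depth pixels to 3D points with a pinhole model, keeps points within a height band, and flattens them onto a horizontal occupancy-style grid.

```python
# Illustrative sketch only (assumptions noted above); not the authors' method.
import numpy as np

FX, FY = 594.21, 591.04   # approximate Kinect v1 focal lengths (pixels)
CX, CY = 339.5, 242.7     # approximate principal point (pixels)
GRID_RES = 0.05           # 5 cm per grid cell
GRID_SIZE = 200           # 10 m x 10 m map

def depth_to_2d_map(depth_m, y_min=-0.2, y_max=1.0):
    """depth_m: (480, 640) array of depths in metres (0 = no reading)."""
    v, u = np.indices(depth_m.shape)
    z = depth_m
    valid = z > 0
    # Back-project each valid pixel to camera-frame 3D coordinates.
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    x, y, z = x[valid], y[valid], z[valid]
    # Keep only points within a height band, then drop the vertical axis.
    band = (y > y_min) & (y < y_max)
    x, z = x[band], z[band]
    # Rasterise the remaining (x, z) points into a fixed-resolution 2D grid.
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
    gx = np.clip((x / GRID_RES + GRID_SIZE / 2).astype(int), 0, GRID_SIZE - 1)
    gz = np.clip((z / GRID_RES).astype(int), 0, GRID_SIZE - 1)
    grid[gz, gx] = 1
    return grid

if __name__ == "__main__":
    # Synthetic example: a flat wall 2 m in front of the sensor.
    depth = np.full((480, 640), 2.0)
    print(depth_to_2d_map(depth).sum(), "occupied cells")
```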
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.relation.ispartofseries | Proceedings of the 9th International Colloquium on Signal Processing and its Applications 2013 (CSPA 2013); | |
dc.subject | Image processing | en_US |
dc.subject | Indoor SLAM | en_US |
dc.subject | Microsoft Kinect | en_US |
dc.subject | Navigation | en_US |
dc.subject | Robotics | en_US |
dc.title | Method to convert Kinect's 3D depth data to a 2D map for indoor SLAM | en_US |
dc.type | Conference Paper | en_US |
dc.contributor.url | arul.unimap@gmail.com | en_US |
dc.contributor.url | aliyeon@unimap.edu.my | en_US |
dc.contributor.url | latifahmunirah@unimap.edu.my | en_US |
dc.contributor.url | abuhassan@unimap.edu.my | |