AI - Master Project (2009)
27 Dec 2012
In my eight-month master's project I studied human-robot interaction. Humans can easily distinguish one object from another, but this is difficult for robots. I developed a method with which a human teacher can easily designate and segment an object with a laser pointer, so that a robot can ‘learn’ this object using active vision. The thesis shows that the robot can recognize these objects under real-world conditions. The recognition results described in the thesis were improved upon significantly after the thesis was finalized. The work was presented at the RO-MAN conference in Italy.
Overview of human-robot interaction
In this thesis, methods are presented that allow a mobile robot equipped with a stereo camera to automatically learn an accurate SURF-keypoint based representation of an arbitrary object. In the approach, a person designates the object to be learned with a laser. By using active vision to filter keypoints, the resulting object representations are robust and recognition time is considerably reduced.
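As a rough sketch of the recognition idea (not the thesis code: the descriptors below are random stand-ins for SURF descriptors, and the ratio-test threshold is an assumption), recognizing an object from a keypoint-based representation amounts to nearest-neighbour descriptor matching:

```python
import numpy as np

def match_descriptors(obj_desc, scene_desc, ratio=0.8):
    """Match object descriptors to scene descriptors with a ratio test.

    obj_desc: (N, D) array of the learned object's keypoint descriptors.
    scene_desc: (M, D) array of descriptors extracted from the scene.
    Returns a list of (object_index, scene_index) accepted matches.
    """
    matches = []
    for i, d in enumerate(obj_desc):
        # Euclidean distance from this object descriptor to every scene descriptor.
        dist = np.linalg.norm(scene_desc - d, axis=1)
        j, k = np.argsort(dist)[:2]
        # Accept only if the best match is clearly better than the runner-up.
        if dist[j] < ratio * dist[k]:
            matches.append((i, j))
    return matches

# Toy example with random 64-D descriptors (SURF descriptors are 64-dimensional).
rng = np.random.default_rng(0)
scene = rng.normal(size=(50, 64))
obj = scene[10:15] + rng.normal(scale=0.01, size=(5, 64))  # object "present" in scene
matches = match_descriptors(obj, scene)
print(len(matches))
```

A smaller object representation (fewer rows in `obj_desc`) directly shortens this matching loop, which is why filtering keypoints reduces recognition time.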
The segmentation method was tested on a set of 7 objects, while the creation of object representations and the recognition thereof were tested on a set of 21 objects. The objects vary greatly in size, shape, color and texture. This dataset is considerably larger than those used in similar research.
It is shown that by filtering the keypoints using the human segmentation and active vision, the number of keypoints can be greatly reduced without decreasing recognition accuracy. The recognition was further tested on scenes representing typical office scenarios. It is shown that object recognition works reasonably well in highly complex real-world environments, with lighting changes and object occlusions, even when only a single view of the scene is used.
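A minimal sketch of the keypoint-filtering step (illustrative only: the thesis derives the segmentation mask from the laser-pointer designation and active vision, whereas here it is a hard-coded rectangle, and the keypoint locations are random):

```python
import numpy as np

def filter_keypoints(keypoints, mask):
    """Keep only keypoints whose (x, y) location lies inside the segmentation mask.

    keypoints: (N, 2) integer array of (x, y) image coordinates.
    mask: 2-D boolean array, True where the designated object is.
    """
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    inside = mask[ys, xs]  # image arrays are indexed [row, col] = [y, x]
    return keypoints[inside]

# Illustrative 100x100 image with the "object" occupying a 30x30 region.
mask = np.zeros((100, 100), dtype=bool)
mask[40:70, 40:70] = True

rng = np.random.default_rng(1)
kps = rng.integers(0, 100, size=(200, 2))  # random candidate keypoints
kept = filter_keypoints(kps, mask)
print(len(kps), "->", len(kept))
```

Only keypoints on the designated object survive, so the stored representation stays small and free of background clutter.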
My advisors for the project were Dr. Gert Kootstra, Prof. Dr. Schomaker and Dr. Paul Rybski.
Working with the robot