In robotics and computer vision, real-time recognition of the 6-DOF pose of an object is an essential capability for many applications such as robotic manipulation and automated visual inspection. Feature-based methods, such as SIFT and SURF, work well on textured objects but cannot handle textureless objects because of their reliance on discriminative features. In this work, we present a template-based method for real-time estimation of the viewpoint, scale, and translation of an object in front of the camera. Specifically, we use a 3D model to render example poses of a textureless object and find the nearest match to the input image using a GPU implementation. We evaluate our method on existing datasets and validate that it is robust (capable of handling textureless objects and varying illumination), general (applicable to both RGB and RGB-D input), fast (achieving state-of-the-art speed with multiple objects), and scalable (runtime grows sub-linearly with the number of objects). This invention has been applied on real robots to recognize object poses and perform manipulation. We plan to integrate object tracking and pose verification to further stabilize the pose estimation performance.

Scott McEvoy
smcevoy@andrew.cmu.edu
412-268-6053
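The abstract gives only a high-level description of the template-based approach, so the following is a minimal sketch of the core idea under stated assumptions: templates are rendered offline from the 3D model at sampled viewpoints and scales, and the nearest match to an input crop is found by normalized cross-correlation scored against the whole template bank at once. The function names (build_template_bank, nearest_pose, render), the use of NumPy, and the choice of NCC as the similarity measure are illustrative assumptions, not the authors' actual GPU implementation.

```python
import numpy as np

def normalize(patch):
    """Zero-mean, unit-norm flattening so a dot product equals NCC."""
    v = patch.astype(np.float32).ravel()
    v -= v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def build_template_bank(rendered_views):
    """rendered_views: list of (image, pose_label) pairs rendered from the
    3D model at sampled viewpoints and scales (assumed same resolution)."""
    images, poses = zip(*rendered_views)
    bank = np.stack([normalize(img) for img in images])  # (num_templates, H*W)
    return bank, list(poses)

def nearest_pose(query, bank, poses):
    """Score the query crop against every template in one matrix-vector
    product (the step a GPU implementation would accelerate) and return
    the pose label of the best-scoring template."""
    scores = bank @ normalize(query)  # NCC score per template
    best = int(np.argmax(scores))
    return poses[best], float(scores[best])

# Hypothetical usage: render() and sampled_poses are placeholders for an
# offline rendering pipeline; camera_crop is the detected object region.
# rendered_views = [(render(model, pose), pose) for pose in sampled_poses]
# bank, poses = build_template_bank(rendered_views)
# pose, score = nearest_pose(camera_crop, bank, poses)
```

Because every template is scored with a single matrix product, adding more objects only grows the template bank, which is consistent with the sub-linear runtime scaling claimed above when the matching is batched on a GPU.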
Smart, interactive desk
Get ready to take your space management game to the next level with the University of Glasgow’s innovative project! By combining the