Systems interacting with objects in unstructured environments require both haptic and visual sensors to acquire sufficient scene knowledge. Typically, separate sensors and processing systems are used for the two modalities. We propose to acquire haptic and visual measurements simultaneously, with the same standard camera that already observes the scene. The compression of a passive, deformable material mounted on the actuator is measured visually, ensuring coherence between the visual and haptic modalities. The approach is implemented and validated on a small robotic platform that haptically explores its surroundings and interacts with objects by pushing them. Furthermore, footprints of obstacles are estimated, yielding a haptic map of static and movable objects.
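The deformation-to-force relation described above can be sketched as follows, assuming a linear (Hookean) material model; the stiffness constant and pixel scale are hypothetical placeholders, not values from the paper:

```python
# Hedged sketch: estimate the contact force from the visually measured
# compression of the deformable tip, assuming a linear spring model.
# Stiffness and mm-per-pixel scale are hypothetical calibration values.

def contact_force(rest_length_px, compressed_length_px, mm_per_px, stiffness_n_per_mm):
    """Convert a pixel-space compression measurement into a force estimate [N]."""
    compression_mm = (rest_length_px - compressed_length_px) * mm_per_px
    compression_mm = max(compression_mm, 0.0)  # the tip cannot extend past rest length
    return stiffness_n_per_mm * compression_mm

# Example: 12 px of compression at 0.5 mm/px with k = 0.8 N/mm -> 4.8 N
force = contact_force(rest_length_px=100, compressed_length_px=88,
                      mm_per_px=0.5, stiffness_n_per_mm=0.8)
```

In practice the stiffness would be calibrated against a reference force sensor, since real foams and elastomers are only approximately linear over small compressions.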
A system is presented for the haptic exploration of objects in a household environment. Haptic properties are acquired using a passive, deformable sensor tip mounted on a mobile platform; the sensor deformation is measured visually, allowing for a flexible and low-cost design. Such active manipulation improves the navigation of simple robots, such as vacuum cleaners. For exploration, the robot pushes the sensor tip against the object. A camera tracks points on the robot and in the scene to measure the applied forces and the reaction of the object. The pose of the sensor tip is obtained by tracking arbitrary templates.
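Combining the measured push force with the tracked object reaction yields a simple movability test; a minimal sketch, assuming hypothetical force and motion thresholds:

```python
# Hedged sketch: classify an obstacle from a single push, using the
# visually measured applied force and the tracked object displacement.
# Threshold values are illustrative assumptions, not from the paper.

def classify_obstacle(applied_force_n, object_displacement_mm,
                      min_force_n=0.5, min_motion_mm=2.0):
    """Label an obstacle as movable if it yields under a firm push."""
    if applied_force_n < min_force_n:
        return "no-contact"   # push was too weak to draw a conclusion
    if object_displacement_mm >= min_motion_mm:
        return "movable"
    return "static"

label = classify_obstacle(applied_force_n=2.0, object_displacement_mm=5.0)
# label == "movable"
```

Per-object labels of this kind are what allows the estimated obstacle footprints to be aggregated into a haptic map of static and movable objects.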
Our camera-based tactile sensors enable low-cost, replaceable, or even under-actuated fingers that remain sensitive. No electronic components are required in the fingers. The camera measurements include the grasping force, the force distribution along the finger, and the gripper opening, as well as object properties such as shape and deformation.
Our sensor uses 6D visual tracking with an external camera to substitute dedicated sensor modules on the end-effector. This type of sensor is used in many robotics applications, such as force control of a robot tool, teaching of robotic arms, or six-axis human input devices.
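Given the tracked 6D pose offset of the deformable element relative to its rest pose, a wrench estimate can be sketched as a linear mapping through a stiffness matrix; the diagonal stiffness values below are hypothetical and would require calibration:

```python
import numpy as np

# Hedged sketch: infer a 6D wrench [Fx, Fy, Fz, Tx, Ty, Tz] from the pose
# offset delivered by 6D visual tracking, assuming a diagonal stiffness
# matrix with hypothetical per-axis values.

def wrench_from_pose_offset(delta_pose, stiffness_diag):
    """delta_pose: [dx, dy, dz, rx, ry, rz] in m and rad (rest pose -> current).
    stiffness_diag: per-axis stiffness [N/m x3, Nm/rad x3]."""
    K = np.diag(stiffness_diag)
    return K @ np.asarray(delta_pose, dtype=float)

w = wrench_from_pose_offset(
    delta_pose=[0.002, 0.0, -0.001, 0.0, 0.01, 0.0],
    stiffness_diag=[500.0, 500.0, 800.0, 2.0, 2.0, 2.0])
# w -> [1.0, 0.0, -0.8, 0.0, 0.02, 0.0]
```

A diagonal stiffness matrix assumes the translational and rotational compliances are decoupled; a real deformable element generally has cross-coupling terms, which a full 6x6 calibrated matrix would capture.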
We use a passive "bumper" that can cover the entire circumference of a mobile platform, such as a vacuum-cleaner robot. It enables scene understanding for tasks like environment mapping, including haptic object properties, the detection of transparent, soft, and movable objects, and the navigation and controlled pushing of objects.