Demos/Videos
A Camera-based Tactile Sensor for Sorting Tasks
This video shows the capabilities of our visuo-haptic sensor in a sorting task. A linear robot with a two-finger gripper sorts plastic bottles based on their compliance. During the grasp procedure, the visuo-haptic sensor measures the contact forces, the pressure distribution along the gripper finger, the finger position, and the object deformation, and it estimates the object's compliance as well as object properties such as shape and size. The novelty lies in the fact that all of this data is extracted from camera images, i.e., the robot is “feeling by seeing”.
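As a rough illustration of the “feeling by seeing” principle, the sketch below infers a contact force and a coarse pressure profile from marker displacements that a camera observes on a deformable gripper finger. It is a simplified stand-in for the sensor's actual processing; the linear-elasticity model and the calibration constant K_N_PER_PX are assumptions.

```python
import numpy as np

K_N_PER_PX = 0.05  # hypothetical calibration: Newtons per pixel of deflection

def estimate_contact(markers_rest, markers_now):
    """Estimate total force and a coarse pressure profile from tracked
    marker positions, each of shape (N, 2) in pixel coordinates."""
    disp = np.linalg.norm(markers_now - markers_rest, axis=1)  # per-marker deflection
    force = K_N_PER_PX * disp.sum()   # total contact force under a linear model
    profile = K_N_PER_PX * disp       # "pressure" sample at each marker position
    return force, profile

# Synthetic example: five markers along the finger, the middle ones pushed in
rest = np.array([[10.0, y] for y in range(0, 50, 10)])
now = rest + np.array([[0, 0], [2, 0], [4, 0], [2, 0], [0, 0]], dtype=float)
force, profile = estimate_contact(rest, now)
print(f"estimated force: {force:.2f} N, profile: {profile}")
```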
Reproduction of Textures based on Electrovibration
This demonstration presents a novel approach to displaying textures via electrovibration. We collect acceleration data from real textures using a sensorized tool tip during controlled scans and capture images of the textures. We then display these acceleration signals using a common electrovibration setup.
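A common simplification in electrovibration rendering is that the lateral friction force grows with the square of the drive voltage. Under that assumption (the demonstration's actual signal pipeline is not detailed here), a recorded acceleration trace could be mapped to a drive envelope as sketched below; V_MAX and the synthetic trace are hypothetical.

```python
import numpy as np

V_MAX = 100.0  # hypothetical peak drive voltage of the electrovibration display

def acceleration_to_voltage(accel):
    """Map a recorded texture acceleration trace to a voltage envelope.
    Since friction force ~ V^2, the amplitude is shaped with a square root."""
    env = np.abs(accel) / (np.max(np.abs(accel)) + 1e-12)  # normalize to [0, 1]
    return V_MAX * np.sqrt(env)

# Synthetic 1 kHz "texture" acceleration: a decaying 40 Hz vibration burst
t = np.arange(0.0, 1.0, 1e-3)
accel = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t)
voltage = acceleration_to_voltage(accel)
print(voltage[:5])
```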
Handheld Surface Material Classification Systems
This video presents a content-based surface material retrieval (CBSMR) system for tool-mediated freehand exploration of surface materials. We demonstrate a custom 3D-printed sensorized tool and a common smartphone running an Android-based surface classification app, both of which can classify a set of surface materials without relying on explicit scan-force and scan-velocity measurements.
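The core difficulty is that raw vibration signals change with scan force and velocity. As a simplified stand-in for the system's actual features and classifier, the sketch below uses an energy-normalized magnitude spectrum, which at least cancels overall signal intensity, together with nearest-template matching.

```python
import numpy as np

def spectral_feature(accel, n_bins=256):
    """Energy-normalized magnitude spectrum: overall intensity cancels out."""
    spec = np.abs(np.fft.rfft(accel))[:n_bins]
    return spec / (np.linalg.norm(spec) + 1e-12)

def classify(query, templates):
    """Return the material whose template spectrum is most similar to the query."""
    feat = spectral_feature(query)
    return max(templates, key=lambda m: float(feat @ spectral_feature(templates[m])))

# Synthetic example: two "materials" with different dominant vibration bands
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-3)
templates = {
    "wood": np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size),
    "steel": np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size),
}
print(classify(3.0 * templates["wood"], templates))  # scaled query still -> "wood"
```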
Tactile Mouse for Interactive Surface Material Exploration
This video presents the prototype of a novel tactile input/output device for exploring object surface materials on a computer. The proposed tactile computer mouse is equipped with a broad range of actuators to present various tactile cues to the user while preserving the input capabilities of a common computer mouse.
Ultra-Low Delay Video Transmission
Low-delay video transmission is becoming increasingly important. Delay-critical, video-enabled applications range from teleoperation scenarios, such as controlling drones or telesurgery, to autonomous control through computer vision algorithms applied to the received video stream.
Compensating the Effect of Communication Delay in Client-Server-based Shared Haptic Virtual Environments
Shared Haptic Virtual Environments can be realized using a client-server architecture. In this architecture, each client maintains a local copy of the virtual environment (VE). A centralized physics simulation running on a server calculates the object states based on haptic device position information received from the clients. The object states are sent back to the clients to update the local copies of the VE, which are used to render interaction forces displayed to the user through a haptic device. Communication delay leads to delayed object state updates and increased force feedback rendered at the clients. In this work, we analyze the effect of communication delay on the transparency of the architecture and propose an adaptive force rendering scheme to compensate for this effect.
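The sketch below illustrates the client-side rendering loop under delay; it is not the paper's actual compensation scheme. The client renders a penalty-based contact force from the most recent (delayed) object state, and an adaptive gain scales the rendered stiffness down as the delay grows. All constants are hypothetical.

```python
STIFFNESS = 300.0  # N/m, penalty stiffness used for local force rendering
ALPHA = 2.0        # hypothetical sensitivity of the adaptive gain to delay (1/s)

def adaptive_gain(delay_s):
    """Scale the rendered stiffness down as the round-trip delay grows."""
    return 1.0 / (1.0 + ALPHA * delay_s)

def render_force(device_pos, wall_pos, delay_s):
    """Penalty-based contact force from the latest (delayed) object state."""
    penetration = max(0.0, wall_pos - device_pos)
    return adaptive_gain(delay_s) * STIFFNESS * penetration

# The same 1 cm penetration is rendered softer as the delay increases
for delay_ms in (0, 50, 200):
    f = render_force(device_pos=-0.01, wall_pos=0.0, delay_s=delay_ms / 1000.0)
    print(f"delay {delay_ms:3d} ms -> rendered force {f:.2f} N")
```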
Remote Texture Exploration
The "Remote Texture Exploration" Android App is designed to run on the TPad Phone, a novel smartphone with a variable friction display.
Computational Haptics Lab 2014
This video is a compilation of the final lab projects implemented by the students during the Computational Haptics Lab in the summer semester 2014.
LMT Texture Database
When a rigid tool is stroked over an object surface, vibrations are induced on the tool that represent the interaction between the tool and the surface texture; these vibrations can be measured with an accelerometer. Such acceleration signals can be used to recognize or classify object surface textures. The temporal and spectral properties of the acquired signals, however, depend heavily on parameters such as the force applied to the surface and the lateral velocity during exploration. Robust features that are invariant to such scan-time parameters are currently lacking, but they would enable texture classification and recognition using uncontrolled human exploratory movements. We introduce a haptic texture database that allows for a systematic analysis of feature candidates. The database includes accelerations recorded during controlled and well-defined texture scans, as well as uncontrolled human freehand texture explorations, for 69 different textures.
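The sketch below shows one kind of analysis the database is meant to support: a controlled recording is resampled to emulate a faster scan, and a candidate feature is checked for stability across the two versions. The feature and the velocity warping are illustrative assumptions, not part of the database itself.

```python
import numpy as np

def candidate_feature(accel):
    """A naive feature candidate: the normalized magnitude spectrum."""
    spec = np.abs(np.fft.rfft(accel, n=1024))
    return spec / (np.linalg.norm(spec) + 1e-12)

def emulate_scan_velocity(accel, factor):
    """Resample the trace as if the tool had moved 'factor' times faster."""
    idx = np.arange(0, accel.size, factor)
    return np.interp(idx, np.arange(accel.size), accel)

# A synthetic 1 kHz "texture" trace: a 30 Hz tone over broadband noise
rng = np.random.default_rng(1)
t = np.arange(0.0, 4.096, 1e-3)
trace = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.standard_normal(t.size)

f_slow = candidate_feature(trace)
f_fast = candidate_feature(emulate_scan_velocity(trace, 1.5))
# A similarity near 1 would indicate invariance to scan velocity; this naive
# candidate scores lower, which is exactly what the paired controlled and
# freehand recordings allow one to quantify.
print("similarity across scan velocities:", round(float(f_slow @ f_fast), 3))
```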
A Visual-Haptic Multiplexing Scheme for Teleoperation over Constant-Bitrate Communication Links
This video shows a demo of the visual-haptic multiplexing scheme with our teleoperation setup.
Computational Haptics Lab 2013
This video is a compilation of the final lab projects implemented by the students during the Computational Haptics Lab in the summer semester 2013.
Visual Indoor Navigation
This video shows a live demonstration of the visual indoor localization and navigation system developed for the NAVVIS project at the Institute for Media Technology at the Technical University of Munich. The system analyzes the camera images for distinctive visual features to recognize the position and orientation of the smartphone, which are then displayed on the map.
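A minimal sketch of the retrieval idea behind such a system: each reference image in the map is stored with binary feature descriptors and the pose at which it was taken, and a query image's descriptors vote for the reference with the most close matches. The descriptors here are random stand-ins; a real system would extract them from the images (e.g., with ORB), and the actual NAVVIS pipeline is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming(a, b):
    """Pairwise Hamming distances between two sets of packed binary descriptors."""
    return np.unpackbits(a[:, None, :] ^ b[None, :, :], axis=2).sum(axis=2)

def locate(query_desc, database, max_dist=40):
    """Return the mapped pose whose reference image has the most close matches."""
    scores = {}
    for pose, ref_desc in database:
        d = hamming(query_desc, ref_desc)
        scores[pose] = int((d.min(axis=1) < max_dist).sum())
    return max(scores, key=scores.get)

# Synthetic map: two reference views, 100 descriptors of 32 bytes (256 bits) each
ref_a = rng.integers(0, 256, (100, 32), dtype=np.uint8)
ref_b = rng.integers(0, 256, (100, 32), dtype=np.uint8)
database = [("corridor, x=3 m, yaw=90 deg", ref_a), ("lobby, x=10 m, yaw=0 deg", ref_b)]

query = ref_a.copy()
query[:, 0] ^= 0xFF  # corrupt 8 of 256 bits per descriptor: a perturbed view
print("estimated pose:", locate(query, database))
```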
CoopMedia
Within the scope of a joint research project with Docomo Eurolabs, we developed and optimized methods for synchronizing videos in a multi-perspective user-generated content (UGC) scenario. The video presents the problem and explains our solution. It also shows screencasts from the CoopMedia portal, the demonstration platform of the CoopMedia project.
TUM Indoor Viewer
This video shows the TUM Indoor Viewer, a web-based panorama image and point cloud viewer developed for the NAVVIS project at the Institute for Media Technology at TUM. The viewer provides interactive browsing of more than 40,000 images from the TUMindoor dataset, recorded at the main site of Technische Universität München.
Surprise Detection in Cognitive Mobile Robots
This video illustrates the acquisition of an image sequence by a mobile robot. The first part of the sequence is used to train the prior distributions along the robot's trajectory. In the second part, a human places objects on the table at certain time intervals. The robot computes surprise maps from the captured images and the prior distributions of its internal environment representation, and it updates the environment model accordingly.
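A minimal sketch in the spirit of Bayesian surprise, assuming a per-pixel Gaussian model (the robot's actual environment representation is richer): each pixel keeps a prior over its intensity learned from the training part of the sequence, and surprise is scored as the KL divergence between the posterior after a new frame and that prior.

```python
import numpy as np

def kl_gauss(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), elementwise."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def surprise_map(frame, prior_mu, prior_var, obs_var=25.0):
    """Bayesian update of a per-pixel Gaussian mean under known sensor noise;
    surprise measures how far the posterior moved away from the prior."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + frame / obs_var)
    return kl_gauss(post_mu, post_var, prior_mu, prior_var)

rng = np.random.default_rng(0)
prior_mu = np.full((4, 4), 100.0)   # scene appearance learned during training
prior_var = np.full((4, 4), 25.0)
frame = prior_mu + rng.normal(0.0, 5.0, (4, 4))
frame[1, 2] = 200.0                 # a new object appears at this pixel
print(np.round(surprise_map(frame, prior_mu, prior_var), 2))  # peak at (1, 2)
```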
Content Based Image Retrieval: CD cover search
This video demonstrates the capabilities of content-based image retrieval (CBIR) algorithms when applied to mobile multimedia search. In this scenario, we identify various CDs in real time from low-resolution video against a database of more than 390,000 CD covers.
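Searching a database of this size in real time rules out matching the query against every cover. A standard way to achieve this (sketched below with random stand-in descriptors; the demo's actual features and index are not specified here) is to quantize local descriptors to visual words and keep an inverted file, so a query only touches covers that share words with it.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
vocab = rng.standard_normal((200, 16))  # 200 visual words for 16-D descriptors

def quantize(desc):
    """Assign each local descriptor to its nearest visual word."""
    return np.argmin(((desc[:, None, :] - vocab[None]) ** 2).sum(-1), axis=1)

# Inverted file: visual word -> ids of the covers containing it
covers = {cid: rng.standard_normal((30, 16)) for cid in range(500)}
index = defaultdict(set)
for cid, desc in covers.items():
    for w in quantize(desc):
        index[w].add(cid)

def search(query_desc):
    """Vote for the covers that share visual words with the query."""
    votes = defaultdict(int)
    for w in quantize(query_desc):
        for cid in index[w]:
            votes[cid] += 1
    return max(votes, key=votes.get)

query = covers[42] + 0.05 * rng.standard_normal((30, 16))  # noisy view of cover 42
print("best match:", search(query))                        # -> 42
```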
Mobile Visual Localization
This video demonstrates the visual location recognition system developed at the Institute for Media Technology. A video recorded with a mobile device serves as a query to determine the location by retrieving visually similar Google Street View panoramas within an area of a few square kilometers. This is a challenging task due to seasonal changes, dynamic objects and clutter, occlusions, overlaps, and more.
Web-based Human Machine Interface
We work on a flexible in-vehicle Human-Machine Interface (HMI) based on a service-oriented architecture (SOA) enriched with graphical information. Our approach takes advantage of standard web technologies and targets a solution that can be flexibly deployed in various application scenarios, e.g., in-vehicle driver-assistance systems, in-flight entertainment systems, or, generally speaking, systems that allow multiple persons to interact with an HMI. Furthermore, we consider the integration of consumer electronics devices. The proposed system benefits from its SOA-driven, generic approach, which brings advantages such as platform independence, scalability, faster time to market, and lower cost compared to application-specific approaches.
Projektpraktikum Multimedia - Cooperative Haptic Games - Results Winter Semester 2009/2010
In the winter semester 2009/2010, the "Projektpraktikum Multimedia" focused on cooperative haptic gaming. In teams of two or three, the students designed and developed multimedia games involving rich 3D graphics and haptic force feedback. An important aspect of the developed games is the need for cooperative interaction among the participating players in order to achieve a common goal. This video is a compilation of the games developed during the course.
Establishing Multimodal Telepresence Sessions with SIP and Advanced Haptic Codecs
The Session Initiation Protocol (SIP) is widely used to handle multimedia teleconferencing sessions with audio, video, or text, and it provides many services advantageous for establishing flexible, dynamic connections between heterogeneous haptic interfaces and telerobotic systems. We apply this paradigm to the creation and negotiation of telepresence sessions and propose extensions of the framework to the haptic modality. To demonstrate the proposed session framework, a proof-of-concept system was implemented. The video shows successfully established haptic, audio, and video media streams, with haptic data transmitted at close to 1 kHz.
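To make the negotiation concrete, here is a sketch of what an SDP offer for such a session could look like. The audio and video lines follow standard SDP usage; the haptic media line and its codec name are hypothetical illustrations of extending the negotiation to the haptic modality, not a standardized media type or the exact format used in the video.

```python
# Hypothetical SDP offer for a multimodal telepresence session
SDP_OFFER = "\n".join([
    "v=0",
    "o=operator 2890844526 2890844526 IN IP4 op.example.com",
    "s=Multimodal telepresence session",
    "c=IN IP4 op.example.com",
    "t=0 0",
    "m=audio 49170 RTP/AVP 0",         # standard G.711 audio stream
    "m=video 51372 RTP/AVP 96",
    "a=rtpmap:96 H264/90000",          # standard H.264 video stream
    "m=application 52000 RTP/AVP 98",  # hypothetical haptic media stream
    "a=rtpmap:98 haptic/1000",         # hypothetical codec, ~1000 packets/s
])
print(SDP_OFFER)
```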
IT_Motive 2020 Demonstrators
While technical innovation in the automotive industry was, some decades ago, generally based on mechanical improvements, today's car innovations are mainly (about 90%) driven by electronics and software. Since software and electronics have much faster innovation cycles than a car's typical lifecycle, there is a growing period in a car's life during which the built-in electronic and software components no longer meet the owner's requirements. Additionally, it takes car manufacturers a significant amount of time to test innovations that are new to the market and to integrate them into their products. To make things worse, today's car IT architectures can hardly meet the demand for expandability, since they are built upon many different custom products (e.g., up to seven different buses and about 50-70 different electronic control units (ECUs)). In our work, we introduce a novel car IT architecture based on general-purpose hardware and an Ethernet-based communication network.
Image-based Environment Modeling at AUTOMATICA 2008
At AUTOMATICA 2008, the international trade fair for automation and mechatronics, we demonstrated the acquisition of image-based environment representations using a mobile robot platform. A Pioneer 3-DX equipped with two cameras captured a dense set of images of a typical household scene containing two glasses. Due to light refraction and specular highlights, it is usually tedious to acquire acceptable virtual representations of glasses. This video shows that our representation, which stores the densely captured images together with a depth map and the camera pose for each image, allows realistic virtual images to be computed in a continuous viewpoint space. At the end of the video, the application of virtual view interpolation to surprise detection is shown.
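A brief sketch of why storing a depth map and camera pose with each image enables a continuous viewpoint space: any pixel can be back-projected using its depth and re-projected into a nearby virtual camera. The intrinsics and poses below are illustrative values; a full renderer would warp and blend entire images rather than single pixels.

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],   # illustrative pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def warp_pixel(u, v, depth, R_src, t_src, R_dst, t_dst):
    """Re-project pixel (u, v) with known depth from a source view into a
    target view. Poses map world points into camera frames: x_cam = R x + t."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    x_src = depth * ray                   # 3D point in the source camera frame
    x_world = R_src.T @ (x_src - t_src)   # back into world coordinates
    x_dst = R_dst @ x_world + t_dst       # into the target camera frame
    uvw = K @ x_dst
    return uvw[:2] / uvw[2]

I = np.eye(3)
# Virtual camera shifted 10 cm to the right of the source camera
print(warp_pixel(320.0, 240.0, depth=2.0, R_src=I, t_src=np.zeros(3),
                 R_dst=I, t_dst=np.array([-0.1, 0.0, 0.0])))
```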