Prof. Dr. Michael Rohs

Appelstr. 9A
30167 Hannover
Germany
Room 906
+49 (511) 762-2435


Biography

1994 - 2000
Studied computer science at Technische Universität Darmstadt and the University of Colorado at Boulder
2000 - 2005
Doctoral student and research assistant at ETH Zürich
2005
Doctorate (Dr. sc. ETH Zürich)
2005 - 2010
Senior Research Scientist at Deutsche Telekom Laboratories, an affiliated institute of Technische Universität Berlin; teaching appointments at TU Berlin for various lectures and seminars
2007 - 2008
Substitute professor for User Interface Engineering at the Bonn-Aachen International Center for Information Technology (B-IT), University of Bonn, and Fraunhofer IAIS
2010 - 2012
Assistant professor (Juniorprofessor) of Media Informatics at Ludwig-Maximilians-Universität München
since July 1, 2012
Professor of Human-Computer Interaction at Leibniz Universität Hannover

Curriculum Vitae

Publications

Journal Articles

WorldCupinion: Experiences with an Android App for Real-Time Opinion Sharing During Soccer World Cup Games Robert Schleicher, Alireza Sahami Shirazi, Michael Rohs, Sven Kratz, Albrecht Schmidt Int. J. Mob. Hum. Comput. Interact.
     
Mobile devices are increasingly used in social networking applications and research. So far, there is little work on real-time emotion or opinion sharing in large loosely coupled user communities. One potential area of application is the assessment of widely broadcast television (TV) shows. The idea of connecting non-collocated TV viewers via telecommunication technologies is referred to as Social TV. Such systems typically include set-top boxes for supporting the collaboration. In this work the authors investigated whether mobile phones can be used as an additional channel for sharing opinions, emotional responses, and TV-related experiences in real-time. To gain insight into this area, an Android app was developed for giving real-time feedback during soccer games and for creating ad hoc fan groups. This paper presents results on rating activity during games and discusses experiences with deploying this app over four weeks during the soccer World Cup. In doing so, the challenges and opportunities faced are highlighted and an outlook on future work in this area is given.
Bridging the gap between the Kodak and the Flickr generations: A novel interaction technique for collocated photo sharing Christian Kray, Michael Rohs, Jonathan Hook, Sven Kratz Int. J. Hum.-Comput. Stud.
     
Passing around stacks of paper photographs while sitting around a table is one of the key social practices defining what is commonly referred to as the ‘Kodak Generation’. Due to the way digital photographs are stored and handled, this practice does not translate well to the ‘Flickr Generation’, where collocated photo sharing often involves the (wireless) transmission of a photo from one mobile device to another. In order to facilitate ‘cross-generation’ sharing without enforcing either practice, it is desirable to bridge this gap in a way that incorporates familiar aspects of both. In this paper, we discuss a novel interaction technique that addresses some of the constraints introduced by current communication technology, and that enables photo sharing in a way that resembles the passing of stacks of paper photographs. This technique is based on dynamically generated spatial regions around mobile devices and has been evaluated through two user studies. The results we obtained indicate that our technique is easy to learn and as fast as, or faster than, current technology such as transmitting photos between devices using Bluetooth. In addition, we found evidence of different sharing techniques influencing social practice around photo sharing. The use of our technique resulted in a more inclusive and group-oriented behavior, in contrast to Bluetooth photo sharing, which resulted in a more fractured setting composed of sub-groups.
Impact of item density on the utility of visual context in magic lens interactions Michael Rohs, Robert Schleicher, Johannes Schöning, Georg Essl, Anja Naumann, Antonio Krüger Personal Ubiquitous Comput.
     
This article reports on two user studies investigating the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore items on a map and look for a specific attribute. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. Hand motion patterns and eye movements were recorded. We found that visual context is most effective for sparsely distributed items and gets less helpful with increasing item density. User performance in the magic lens case is generally better than in the dynamic peephole case, but approaches the performance of the latter the more densely the items are spaced. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces, involving spatially tracked personal displays or combined personal and public displays, by suggesting when to use visual context.
Interactivity for Mobile Music-Making Georg Essl, Michael Rohs Organised Sound
     
Mobile phones offer an attractive platform for interactive music performance. We provide a theoretical analysis of the sensor capabilities via a design space and show concrete examples of how different sensors can facilitate interactive performance on these devices. These sensors include cameras, microphones, accelerometers, magnetometers and multitouch screens. The interactivity through sensors in turn informs aspects of live performance as well as composition through persistence, scoring, and mapping to musical notes or abstract sounds.
Group Coordination and Negotiation through Spatial Proximity Regions around Mobile Devices on Augmented Tabletops Christian Kray, Michael Rohs, Jonathan Hook, Sven Kratz 3rd IEEE International Workshop on Horizontal Interactive Human Computer Systems (TABLETOP 2008)
     
Negotiation and coordination of activities involving a number of people can be a difficult and time-consuming process, even when all participants are collocated. We propose the use of spatial proximity regions around mobile devices on a table to significantly reduce the effort of proposing and exploring content within a group of collocated people. In order to determine the location of devices on ordinary tables, we developed a tracking mechanism for a camera-projector system that uses dynamic visual markers displayed on the screen of a device. We evaluated our spatial proximity region based approach using a photo-sharing application for people seated around a table. The tabletop provides a frame of reference in which the spatial arrangement of devices signals the coordination state to the users. The results from the study indicate that the proposed approach facilitates coordination in several ways, for example, by allowing for simultaneous user activity and by reducing the effort required to achieve a common goal. Our approach reduced the task completion time by 43% and was rated as superior in comparison to other established techniques.

Full Papers

Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation Max Pfeiffer, Tim Duente, Stefan Schneegass, Florian Alt, Michael Rohs Proc. of CHI 2015
     
Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16 degrees per meter on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.
Let Me Grab This: A Comparison of EMS and Vibration for Haptic Feedback in Free-Hand Interaction Max Pfeiffer, Stefan Schneegass, Florian Alt, Michael Rohs Augmented Human
        
Free-hand interaction with large displays is getting more common, for example in public settings and exertion games. Adding haptic feedback offers the potential for more realistic and immersive experiences. While vibrotactile feedback is well known, electrical muscle stimulation (EMS) has not yet been explored in free-hand interaction with large displays. EMS offers a wide range of different strengths and qualities of haptic feedback. In this paper we first systematically investigate the design space for haptic feedback. Second, we experimentally explore differences between strengths of EMS and vibrotactile feedback. Third, based on the results, we evaluate EMS and vibrotactile feedback with regard to different virtual objects (soft, hard) and interaction with different gestures (touch, grasp, punch) in front of a large display. The results provide a basis for the design of haptic feedback that is appropriate for the given type of interaction and the material.
Around-Device Devices: My Coffee Mug is a Volume Dial Henning Pohl, Michael Rohs Proceedings of the 16th international conference on Human-computer interaction with mobile devices and services companion - MobileHCI '14
        
For many people their phones have become their main everyday tool. While phones can fulfill many different roles, they also require users to (1) make do with affordances not specialized for the specific task, and (2) closely engage with the device itself. We propose utilizing the space and objects around the phone to offer better task affordances and to create an opportunity for casual interactions. Such around-device devices are a class of interactors that do not require users to bring special tangibles, but repurpose items already found in the user's surroundings. In a survey study, we determine which places and objects are available to around-device devices. Furthermore, in an elicitation study, we observe which objects users would use for ten interactions.
Tickle: A surface-independent interaction technique for grasp interfaces Katrin Wolf, Robert Schleicher, Sven Kratz, Michael Rohs Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction
        
We present a wearable interface that consists of motion sensors. As the interface can be worn on the user's finger (as a ring) or fixed to it (with nail polish), the device controlled by finger gestures can be any generic object, provided it has an interface for receiving the sensor's signal. We implemented four gestures: tap, release, swipe, and pitch, all of which can be executed with a finger of the hand holding the device. In a user study we tested gesture appropriateness for the index finger at the back of a handheld tablet that offered three different form factors on its rear: flat, convex, and concave (undercut). For all three shapes, the gesture performance was equally good, although pitch performed better than swipe on all surfaces. The proposed interface is a step towards the idea of ubiquitous computing and the vision of seamless interaction with grasped objects. As an initial application scenario we implemented a camera control that allows the brightness to be configured using our tested gestures on a common SLR device.
Combining acceleration and gyroscope data for motion gesture recognition using classifiers with dimensionality constraints Sven Kratz, Michael Rohs, Georg Essl Proceedings of the 2013 international conference on Intelligent user interfaces
        
Motivated by the addition of gyroscopes to a large number of new smart phones, we study the effects of combining accelerometer and gyroscope data on the recognition rate of motion gesture recognizers with dimensionality constraints. Using a large data set of motion gestures we analyze results for the following algorithms: Protractor3D, Dynamic Time Warping (DTW) and Regularized Logistic Regression (LR). We chose to study these algorithms because they are relatively easy to implement and thus well suited for rapid prototyping or early deployment during prototyping stages. For use in our analysis, we contribute a method to extend Protractor3D to work with the 6D data obtained by combining accelerometer and gyroscope data. Our results show that combining accelerometer and gyroscope data is also beneficial for algorithms with dimensionality constraints and improves the gesture recognition rate on our data set by up to 4%.
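As a rough illustration of the fusion step described above, the sketch below shows one way separately sampled accelerometer and gyroscope streams could be combined into a single 6D trace for a nearest-neighbor recognizer. The resampling length, normalization, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def resample(trace: np.ndarray, n: int = 32) -> np.ndarray:
    # Linearly resample a (T, 3) sensor trace to n points.
    t_old = np.linspace(0.0, 1.0, len(trace))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, trace[:, i])
                            for i in range(trace.shape[1])])

def fuse_6d(accel: np.ndarray, gyro: np.ndarray, n: int = 32) -> np.ndarray:
    # Combine accelerometer and gyroscope streams into an (n, 6) trace,
    # normalizing each channel so neither sensor dominates the distance.
    fused = np.hstack([resample(accel, n), resample(gyro, n)])
    fused -= fused.mean(axis=0)
    return fused / (fused.std(axis=0) + 1e-9)
```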
PalmSpace: Continuous Around-device Gestures vs. Multitouch for 3D Rotation Tasks on Mobile Devices Sven Kratz, Michael Rohs, Dennis Guse, Jörg Müller, Gilles Bailly, Michael Nischt Proceedings of the International Working Conference on Advanced Visual Interfaces
        
Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface, which we refer to as the PalmSpace, for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. Pilot tests show that PalmSpace hand gestures are feasible. We conducted a user study to compare 3D rotation tasks using the two most promising designs for the hand location during interaction - behind and beside the device - with the virtual trackball, which is the current state-of-the-art technique for orientation manipulation on touchscreens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball.
ShoeSense: A New Perspective on Gestural Interaction and Wearable Applications Gilles Bailly, Jörg Müller, Michael Rohs, Daniel Wigdor, Sven Kratz Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
        
When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept, wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate of our recognizers.
A Taxonomy of Microinteractions: Defining Microgestures Based on Ergonomic and Scenario-dependent Requirements Katrin Wolf, Anja Naumann, Michael Rohs, Jörg Müller Proceedings of the 13th IFIP TC 13 International Conference on Human-computer Interaction - Volume Part I
        
This paper explores how microinteractions such as hand gestures allow executing a secondary task, e.g. controlling mobile applications and devices, without interrupting manual primary tasks, for instance driving a car. To iteratively design such microgestures, we interviewed sports and physiotherapy experts and asked them to use props during the interviews. The resulting gestures should be easily performable without interrupting the primary task, without requiring high cognitive effort, and without the risk of being mixed up with natural movements. Based on the expert interviews, we developed a taxonomy for classifying these gestures according to their use cases and assessed their ergonomic and cognitive attributes, focusing on their primary-task compatibility. We defined 21 hand gestures that allow microinteractions within manual dual-task scenarios. In expert interviews we evaluated their level of required motor and cognitive resources under the constraint of stable primary-task performance. Our taxonomy provides a basis for designing microinteraction techniques.
User-defined gestures for connecting mobile phones, public displays, and tabletops Christian Kray, Daniel Nesbitt, John Dawson, Michael Rohs Proceedings of the 12th international conference on Human computer interaction with mobile devices and services
        
Gestures can offer an intuitive way to interact with a computer. In this paper, we investigate whether gesturing with a mobile phone can help to perform complex tasks involving two devices. We present results from a user study, where we asked participants to spontaneously produce gestures with their phone to trigger a set of different activities. We investigated three conditions (device configurations): phone-to-phone, phone-to-tabletop, and phone-to-public-display. We report on the kinds of gestures we observed as well as on feedback from the participants, and provide an initial assessment of which sensors might facilitate gesture recognition in a phone. The results suggest that phone gestures have the potential to be easily understood by end users and that certain device configurations and activities may be well suited for gesture control.
Semi-automatic zooming for mobile map navigation Sven Kratz, Ivo Brodien, Michael Rohs Proceedings of the 12th international conference on Human computer interaction with mobile devices and services
        
In this paper we present a novel interface for mobile map navigation based on Semi-Automatic Zooming (SAZ). SAZ gives the user the ability to manually control the zoom level of a Speed-Dependent Automatic Zooming (SDAZ) interface, while retaining the automatic zooming characteristics of that interface at times when the user is not explicitly controlling the zoom level. In a user study conducted using a realistic mobile map with a wide scale space, we compare SAZ with existing map interface techniques, multi-touch and SDAZ. We extend a dynamic state-space model for SDAZ to accept 2D tilt input for scroll rate and zoom level control and implement a dynamically zoomable map view with access to high-resolution map material for use in our study. The study reveals that SAZ performs significantly better than SDAZ and that SAZ is comparable in performance and usability to a standard multi-touch map interface. Furthermore, the study shows that SAZ could serve as an alternative to multi-touch as an input technique for mobile map interfaces.
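A minimal sketch of the SAZ control logic described above: an explicit manual zoom level takes precedence, and SDAZ-style speed-dependent zooming takes over when the user releases control. The gain and clamping values are hypothetical; the paper uses a full dynamic state-space model rather than this simple mapping.

```python
def saz_zoom(scroll_speed: float, manual_zoom: float | None,
             min_zoom: float = 1.0, max_zoom: float = 16.0,
             gain: float = 0.5) -> float:
    # Semi-automatic zooming: honor an explicit manual zoom level if
    # one is given; otherwise zoom out proportionally to scroll speed.
    if manual_zoom is not None:
        return max(min_zoom, min(max_zoom, manual_zoom))
    return min(min_zoom + gain * abs(scroll_speed), max_zoom)
```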
Characteristics of pressure-based input for mobile devices Craig Stewart, Michael Rohs, Sven Kratz, Georg Essl Proceedings of the 28th international conference on Human factors in computing systems
        
We conducted a series of user studies to understand and clarify the fundamental characteristics of pressure in user interfaces for mobile devices. We seek to provide insight to clarify a longstanding discussion on mapping functions for pressure input. Previous literature is conflicted about the correct transfer function to optimize user performance. Our study results suggest that the discrepancy can be explained by different signal conditioning circuitry and that with improved signal conditioning the performance-precision relationship is linear. We also explore the effects of hand pose when applying pressure to a mobile device from the front, the back, or simultaneously from both sides in a pinching movement. Our results indicate that grasping-type input outperforms single-sided input and is competitive with pressure input against solid surfaces. Finally, we provide an initial exploration of non-visual multimodal feedback, motivated by the desire for eyes-free use of mobile devices. The findings suggest that non-visual pressure input can be executed without degradation in selection time but suffers from accuracy problems.
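To make the debated mapping-function question concrete, here is a hedged sketch of three candidate transfer functions from a normalized pressure reading to a discrete control level; function names and level counts are illustrative. The study's point is that, with good signal conditioning, the linear variant suffices.

```python
import math

def pressure_to_level(raw: float, kind: str = "linear", levels: int = 10) -> int:
    # Map a normalized pressure reading raw in [0, 1] to one of
    # `levels` discrete control levels under a chosen transfer function.
    if kind == "linear":
        v = raw
    elif kind == "quadratic":   # expands resolution at low pressures
        v = raw * raw
    elif kind == "log":         # compresses resolution at high pressures
        v = math.log1p(9.0 * raw) / math.log(10.0)
    else:
        raise ValueError(f"unknown transfer function: {kind}")
    return min(levels - 1, int(v * levels))
```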
PhotoMap: Using Spontaneously Taken Images of Public Maps for Pedestrian Navigation Tasks on Mobile Devices Johannes Schöning, Antonio Krüger, Keith Cheverst, Michael Rohs, Markus Löchtefeld, Faisal Taher Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
        
In many mid- to large-sized cities public maps are ubiquitous. One can also find a great number of maps in parks or near hiking trails. Public maps help to facilitate orientation and provide specialized information not only to tourists but also to locals who just want to look up an unfamiliar place while on the go. These maps offer many advantages compared to mobile maps from services like Google Maps Mobile or Nokia Maps. They often show local landmarks and sights that are not shown on standard digital maps. Often these 'You are here' (YAH) maps are adapted to a special use case, e.g. a zoo map or a hiking map of a certain area. Being designed for a particular purpose, these maps are often aesthetically well designed, and their usage is therefore more pleasant. In this paper we present a novel technique and application called PhotoMap that uses images of 'You are here' maps taken with a GPS-enhanced mobile camera phone as background maps for on-the-fly navigation tasks. We discuss different implementations of the main challenge, namely helping the user to properly georeference the taken image with sufficient accuracy to support pedestrian navigation tasks. We present a study that discusses the suitability of various public maps for this task and we evaluate whether these georeferenced photos can be used for navigation on GPS-enabled devices.
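One minimal way to georeference a photographed map, assuming the user provides two pixel-to-GPS correspondences and the mapped area is small enough to treat as planar. This is an illustrative sketch, not one of the implementations compared in the paper; all coordinates below are made up.

```python
def georeference(p1: complex, p2: complex, g1: complex, g2: complex):
    # Build a pixel -> geo mapping from two correspondences, using
    # complex numbers to encode a similarity transform (rotation,
    # uniform scale, translation). pixel = x + y*1j, geo = lon + lat*1j.
    scale_rot = (g2 - g1) / (p2 - p1)
    return lambda p: g1 + (p - p1) * scale_rot

# Hypothetical usage: two landmarks tapped on the photo, two GPS fixes.
to_geo = georeference(120 + 340j, 880 + 310j,
                      7.6100 + 51.9600j, 7.6400 + 51.9610j)
print(to_geo(500 + 325j))  # approximate lon + lat*1j of a map pixel
```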
HoverFlow: Expanding the Design Space of Around-Device Interaction Sven Kratz, Michael Rohs Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
        
In this paper we explore the design space of around-device interaction (ADI). This approach seeks to expand the interaction possibilities of mobile and wearable devices beyond the confines of the physical device itself to include the space around it. This enables rich 3D input, comprising coarse movement-based gestures, as well as static position-based gestures. ADI can help to solve occlusion problems and scales down to very small devices. We present a novel around-device interaction interface that allows mobile devices to track coarse hand gestures performed above the device's screen. Our prototype uses infrared proximity sensors to track hand and finger positions in the device's proximity. We present an algorithm for detecting hand gestures and provide a rough overview of the design space of ADI-based interfaces.
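As a toy illustration of gesture detection from proximity data, the sketch below classifies a coarse swipe by comparing when two IR sensors see their peak reflection. The sensor layout, threshold, and lag values are assumptions for illustration, not the HoverFlow algorithm itself.

```python
import numpy as np

def detect_swipe(left: np.ndarray, right: np.ndarray,
                 threshold: float = 0.5, min_lag: int = 2):
    # left/right: time series of normalized reflection intensity from
    # two IR proximity sensors at opposite edges of the device.
    if left.max() < threshold and right.max() < threshold:
        return None                        # no hand above the device
    lag = int(np.argmax(right)) - int(np.argmax(left))
    if lag >= min_lag:
        return "left-to-right"             # left sensor peaked first
    if lag <= -min_lag:
        return "right-to-left"
    return "hover"                         # near-simultaneous peaks
```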
Improving the Communication of Spatial Information in Crisis Response by Combining Paper Maps and Mobile Devices Johannes Schöning, Michael Rohs, Antonio Krüger, Christoph Stasch Mobile Response
        
Efficient and effective communication between mobile units and the central emergency operation center is a key factor to respond successfully to the challenges of emergency management. Nowadays, the only ubiquitously available modality is a voice channel through mobile phones or radio transceivers. This makes it often very difficult to convey exact geographic locations and can lead to misconceptions with severe consequences, such as a fire brigade heading to the right street address in the wrong city. In this paper we describe a handheld augmented reality approach to support the communication of spatial information in a crisis response scenario. The approach combines mobile camera devices with paper maps to ensure a quick and reliable exchange of spatial information.
Designing Low-Dimensional Interaction for Mobile Navigation in 3D Audio Spaces Till Schäfers, Michael Rohs, Sascha Spors, Alexander Raake, Jens Ahrens 34th International Conference of the Audio Engineering Society (AES 2008), Jeju Island, Korea, August 28-30, 2008
     
In this paper we explore spatial audio as a new design space for applications like teleconferencing and audio stream management on mobile devices. Especially in conjunction with input techniques using motion-tracking, the interaction has to be thoroughly designed in order to allow low-dimensional input devices like gyroscopic sensors to be used for controlling the rather complex spatial setting of the virtual audio space. We propose a new interaction scheme that allows the mapping of low-dimensional input data to navigation of a listener within the spatial setting.
Sensing-Based Interaction for Information Navigation on Handheld Displays Michael Rohs, Georg Essl Advances in Human-Computer Interaction Volume 2008 (2008)
        
Information navigation on handheld displays is characterized by the small display dimensions and limited input capabilities of today’s mobile devices. Special strategies are required to help users navigate to off-screen content and develop awareness of spatial layouts despite the small display. Yet, handheld devices offer interaction possibilities that desktop computers do not. Handheld devices can easily be moved in space and used as a movable window into a large virtual workspace. We investigate different information navigation methods for small-scale handheld displays using a range of sensor technologies for spatial tracking. We compare user performance in an abstract map navigation task and discuss the tradeoffs of the different sensor and visualization techniques.
Target Acquisition with Camera Phones when used as Magic Lenses Michael Rohs, Antti Oulasvirta Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
        
When camera phones are used as magic lenses in handheld augmented reality applications involving wall maps or posters, pointing can be divided into two phases: (1) an initial coarse physical pointing phase, in which the target can be directly observed on the background surface, and (2) a fine-control virtual pointing phase, in which the target can only be observed through the device display. In two studies, we show that performance cannot be adequately modeled with standard Fitts' law, but can be adequately modeled with a two-component modification. We chart the performance space and analyze users' target acquisition strategies in varying conditions. Moreover, we show that the standard Fitts' law model does hold for dynamic peephole pointing where there is no guiding background surface and hence the physical pointing component of the extended model is not needed. Finally, implications for the design of magic lens interfaces are considered.
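One plausible reading of the two-component model, with D the initial distance to the target, S the extent of the lens view, and W the target width; the exact parameterization is given in the paper.

```latex
% Physical phase: move the lens to within its own extent S of the target.
% Virtual phase: refine the selection on-screen down to target width W.
MT = a + b \log_2\!\left(\frac{D}{S} + 1\right)
       + c \log_2\!\left(\frac{S}{W} + 1\right)
```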
Improving Interaction with Virtual Globes Through Spatial Thinking: Helping Users Ask "Why?" Johannes Schöning, Brent Hecht, Martin Raubal, Antonio Krüger, Meredith Marsh, Michael Rohs Proceedings of the 13th International Conference on Intelligent User Interfaces
        
Virtual globes have progressed from little-known technology to broadly popular software in a mere few years. We investigated this phenomenon through a survey and discovered that, while virtual globes are en vogue, their use is restricted to a small set of tasks so simple that they do not involve any spatial thinking. Spatial thinking requires that users ask "what is where" and "why"; the most common virtual globe tasks only include the "what". Based on the results of this survey, we have developed a multi-touch virtual globe derived from an adapted virtual globe paradigm designed to widen the potential uses of the technology by helping its users to inquire about both the "what is where" and "why" of spatial distribution. We do not seek to provide users with full GIS (geographic information system) functionality, but rather we aim to facilitate the asking and answering of simple "why" questions about general topics that appeal to a wide virtual globe user base.

Book Chapters

The Design Space of Mobile Phone Input Techniques for Ubiquitous Computing Rafael Ballagas, Michael Rohs, Jennifer Sheridan, Jan Borchers In: Joanna Lumsden (Ed.): Handbook of Research on User Interface Design and Evaluation for Mobile Technologies. IGI Global, Hershey, PA, USA, 2008. ISBN: 978-1-59904-871-0
  
The mobile phone is the first truly pervasive computer. In addition to its core communications functionality, it is increasingly used for interaction with the physical world. This chapter examines the design space of input techniques using established desktop taxonomies and design spaces to provide an in-depth discussion of existing interaction techniques. A new five-part spatial classification is proposed for the ubiquitous mobile phone interaction tasks discussed in our survey. It includes supported subtasks (position, orient, and selection), dimensionality, relative vs. absolute movement, interaction style (direct vs. indirect), and feedback from the environment (continuous vs. discrete). Key design considerations are identified for deploying these interaction techniques in real-world applications. Our analysis aims to inspire and inform the design of future smart phone interaction techniques.

Short Papers

One-Button Recognizer: Exploiting Button Pressing Behavior for User Differentiation Henning Pohl, Markus Krause, Michael Rohs Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '15
        
We present a novel way to recognize users by the way they press a button. Our approach allows low-effort and fast interaction without the need for augmenting the user or controlling the environment. It eschews privacy concerns of methods such as fingerprint scanning. Button pressing behavior is sufficiently discriminative to allow distinguishing users within small groups. This approach combines recognition and action in a single step, e.g., getting and tallying a coffee can be done with one button press. We deployed our system for 5 users over a period of 4 weeks and achieved recognition rates of 95% in the last week. We also ran a larger scale but short-term evaluation to investigate effects of group size and found that our method degrades gracefully for larger groups.
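A deliberately simplified sketch of the recognize-by-pressing idea: enroll each user's typical press duration and attribute a new press to the nearest profile. The real system plausibly uses richer features of the press signal; the feature choice, names, and data here are illustrative assumptions.

```python
import numpy as np

def enroll(presses_by_user: dict) -> dict:
    # Store each user's mean button-press duration in seconds.
    return {user: float(np.mean(durations))
            for user, durations in presses_by_user.items()}

def recognize(duration: float, profiles: dict) -> str:
    # Attribute a press to the user whose typical duration is closest.
    return min(profiles, key=lambda user: abs(profiles[user] - duration))

profiles = enroll({"ada": [0.11, 0.13, 0.12], "max": [0.28, 0.25, 0.30]})
print(recognize(0.27, profiles))  # -> "max"
```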
Sketch-a-TUI: Low Cost Prototyping of Tangible Interactions Using Cardboard and Conductive Ink Alexander Wiethoff, Hanna Schneider, Michael Rohs, Andreas Butz, Saul Greenberg Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction
        
Graspable tangibles are now being explored on the current generation of capacitive touch surfaces, such as the iPad and Android tablets. Because the size and form factor are relatively new, early and low-fidelity prototyping of these TUIs is crucial in getting the right design. The problem is that it is difficult for the average interaction designer to develop such physical prototypes. They require a substantial amount of time and effort to physically model the tangibles, and expertise in electronics to instrument them. Thus, prototyping is sometimes handed off to specialists, or is limited to only a few design iterations and alternative designs. Our solution contributes a low-fidelity prototyping approach that is time- and cost-effective, and that requires no electronics knowledge. First, we supply non-specialists with cardboard forms to create tangibles. Second, we have them draw lines on them via conductive ink, which makes their objects recognizable by the capacitive touch screen. They can then apply routine programming to recognize these tangibles and thus iterate over various designs.
Interaction with Magic Lenses: Real-world Validation of a Fitts' Law Model Michael Rohs, Antti Oulasvirta, Tiia Suomalainen Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
        
Rohs and Oulasvirta (2008) proposed a two-component Fitts' law model for target acquisition with magic lenses in mobile augmented reality (AR) with 1) a physical pointing phase, in which the target can be directly observed on the background surface, and 2) a virtual pointing phase, in which the target can only be observed through the device display. The model provides a good fit (R² = 0.88) with laboratory data, but it is not known if it generalizes to real-world AR tasks. In the present outdoor study, subjects (N=12) did building-selection tasks in an urban area. The differences in task characteristics to the laboratory study are drastic: targets are three-dimensional and they vary in shape, size, z-distance, and visual context. Nevertheless, the model yielded an R² of 0.80, and when using effective target width an R² of 0.88 was achieved.
Real-time Nonverbal Opinion Sharing Through Mobile Phones During Sports Events Alireza Sahami Shirazi, Michael Rohs, Robert Schleicher, Sven Kratz, Alexander Müller, Albrecht Schmidt Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
        
Even with the rise of the World Wide Web, TV has remained the most pervasive entertainment medium and is nowadays often used together with other media, which allow for active participation. The idea of connecting non-collocated TV viewers via telecommunication technologies, referred to as Social TV, has recently received considerable attention. Such systems typically include set-top boxes for supporting collaboration. In this research we investigate whether real-time opinion sharing about TV shows through a nonverbal (non-textual) iconic UI on mobile phones is reasonable. For this purpose we developed a mobile app, made it available to a large number of users through the Android Market, and conducted an uncontrolled user study in the wild during the 2010 soccer World Cup. The results of the study indicate that TV viewers who used the app had more fun and felt more connected to other viewers. We also show that by monitoring this channel it is possible to collect sentiments relevant to the broadcast content in real-time. The collected data exemplify that the aggregated sentiments correspond to important moments, and hence can be used to generate a summary of the event.
Protractor3D: A Closed-form Solution to Rotation-invariant 3D Gestures Sven Kratz, Michael Rohs Proceedings of the 16th International Conference on Intelligent User Interfaces
        
Protractor3D is a gesture recognizer that extends the 2D touch screen gesture recognizer Protractor to 3D gestures. It inherits many of Protractor's desirable properties, such as a high recognition rate, low computational and memory requirements, ease of implementation, ease of customization, and a low number of required training samples. Protractor3D is based on a closed-form solution, involving quaternions, to finding the optimal rotation angle between two gesture traces. It uses a nearest neighbor approach to classify input gestures. It is thus well-suited for application in resource-constrained mobile devices. We present the design of the algorithm and a study that evaluated its performance.
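The closed-form core can be sketched with Horn's quaternion method for absolute orientation: the optimal rotation between two centered traces corresponds to the largest eigenvalue of a 4x4 matrix, which also yields the post-rotation residual usable as a nearest-neighbor distance. Protractor3D's exact normalization and scoring may differ; this is a sketch of the underlying idea only.

```python
import numpy as np

def match_score(P: np.ndarray, Q: np.ndarray) -> float:
    # Rotation-invariant dissimilarity between two (n, 3) gesture
    # traces, assumed resampled to the same length n.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    M = P.T @ Q   # cross-covariance of the two point sets
    N = np.array([
        [M[0,0]+M[1,1]+M[2,2], M[1,2]-M[2,1], M[2,0]-M[0,2], M[0,1]-M[1,0]],
        [M[1,2]-M[2,1], M[0,0]-M[1,1]-M[2,2], M[0,1]+M[1,0], M[2,0]+M[0,2]],
        [M[2,0]-M[0,2], M[0,1]+M[1,0], M[1,1]-M[0,0]-M[2,2], M[1,2]+M[2,1]],
        [M[0,1]-M[1,0], M[2,0]+M[0,2], M[1,2]+M[2,1], M[2,2]-M[0,0]-M[1,1]],
    ])
    lam = np.linalg.eigvalsh(N)[-1]   # largest eigenvalue of Horn's matrix
    # Residual alignment error after applying the optimal rotation.
    return float((P**2).sum() + (Q**2).sum() - 2.0 * lam)

# Classification: nearest neighbor over stored templates, e.g.
# label = min(templates, key=lambda t: match_score(trace, templates[t]))
```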
Use the Force (or something) - Pressure and Pressure-Like Input for Mobile Music Performance Georg Essl, Michael Rohs, Sven Kratz Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2010)
     
Impact force is an important dimension for percussive musical instruments such as the piano. We explore three possible mechanisms for capturing impact forces on mobile multi-touch devices: using built-in accelerometers, the pressure sensing capability of Android phones, and external force sensing resistors. We find that accelerometers are difficult to control for this purpose. Android's pressure sensing shows some promise, especially when combined with augmented playing technique. Force sensing resistors can offer good dynamic resolution, but this technology is not currently offered in commodity devices and proper coupling of the sensor with the applied impact is difficult.
Extending the Virtual Trackball Metaphor to Rear Touch Input Sven Kratz, Michael Rohs Proceedings of the 2010 IEEE Symposium on 3D User Interfaces (3DUI 2010)
        
Interaction with 3D objects and scenes is becoming increasingly important on mobile devices. We explore 3D object rotation as a fundamental interaction task. We propose an extension of the virtual trackball metaphor, which is typically restricted to a half sphere and single-sided interaction, to actually use a full sphere. The extension is enabled by a hardware setup called the "iPhone Sandwich," which allows for simultaneous front-and-back touch input. This setup makes the rear part of the virtual trackball accessible for direct interaction and thus realizes the virtual trackball metaphor to its full extent. We conducted a user study that shows that a back-of-device virtual trackball is as effective as a front-of-device virtual trackball and that both outperform an implementation of tilt-based input.
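For reference, the classic trackball mapping extended to the rear hemisphere might look like the sketch below: a touch point is projected onto a sphere, rear touches land on the far side, and a drag between two sphere points yields an axis-angle rotation. This is a generic trackball sketch under stated assumptions, not the paper's implementation.

```python
import numpy as np

def touch_to_sphere(x: float, y: float, back: bool = False) -> np.ndarray:
    # Project normalized touch coordinates (x, y in [-1, 1]) onto a
    # unit trackball. Touches outside the ball are clamped to its rim.
    d2 = x * x + y * y
    if d2 > 1.0:
        x, y, z = x / np.sqrt(d2), y / np.sqrt(d2), 0.0
    else:
        z = np.sqrt(1.0 - d2)
    if back:
        z = -z            # rear touch hits the back hemisphere
    return np.array([x, y, z])

def drag_rotation(p: np.ndarray, q: np.ndarray):
    # Axis and angle that rotate unit sphere point p onto q.
    axis = np.cross(p, q)
    angle = float(np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)))
    return axis, angle
```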
A $3 gesture recognizer: simple gesture recognition for devices equipped with 3D acceleration sensors Sven Kratz, Michael Rohs Proceeding of the 14th international conference on Intelligent user interfaces
        
We present the $3 Gesture Recognizer, a simple but robust gesture recognition system for input devices featuring 3D acceleration sensors. The algorithm is designed to be implemented quickly in prototyping environments, is intended to be device-independent and does not require any special toolkits or frameworks. It relies solely on simple trigonometric and geometric calculations. A user evaluation of our system resulted in a correct gesture recognition rate of 80%, when using a set of 10 unique gestures for classification. Our method requires significantly less training data than other gesture recognizers and is thus suited to be deployed and to deliver results rapidly.
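A condensed sketch of the $3-style pipeline under simplifying assumptions: resample the acceleration-derived trace, normalize translation and scale, then take the best average point distance over a small set of candidate rotations. The published recognizer searches rotations about more than one axis and uses its own scoring heuristic; this keeps only the resample-normalize-search skeleton.

```python
import numpy as np

def preprocess(trace: np.ndarray, n: int = 32) -> np.ndarray:
    # Resample a (T, 3) gesture trace to n points, center it at the
    # origin, and scale it into a unit cube.
    t_old = np.linspace(0.0, 1.0, len(trace))
    t_new = np.linspace(0.0, 1.0, n)
    pts = np.column_stack([np.interp(t_new, t_old, trace[:, i]) for i in range(3)])
    pts -= pts.mean(axis=0)
    return pts / (np.ptp(pts, axis=0).max() + 1e-9)

def score(candidate: np.ndarray, template: np.ndarray) -> float:
    # Best average point distance over rotations about the z axis,
    # a stand-in for $3's rotation search.
    best = np.inf
    for a in np.radians(np.arange(-45, 50, 5)):
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        best = min(best, float(np.linalg.norm(candidate @ R.T - template, axis=1).mean()))
    return best

# Classification: the template label with the lowest score wins.
```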
Impact of Item Density on Magic Lens Interactions Michael Rohs, Georg Essl, Johannes Schöning, Anja Naumann, Robert Schleicher, Antonio Krüger Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
        
We conducted a user study to investigate the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore objects on a map and look for a specific attribute shown on the display. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. We found that visual context is most effective for sparse item distributions and the performance benefit decreases with increasing density. User performance in the magic lens case approaches the performance of the dynamic peephole case the more densely spaced the items are. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces by suggesting when external visual context is most beneficial.
Using Hands and Feet to Navigate and Manipulate Spatial Data Johannes Schöning, Florian Daiber, Antonio Krüger, Michael Rohs Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
        
We demonstrate how multi-touch hand gestures in combination with foot gestures can be used to perform navigation tasks in interactive systems. The geospatial domain is an interesting example to show the advantages of the combination of both modalities, because the complex user interfaces of common Geographic Information Systems (GIS) require a high degree of expertise from their users. Recent developments in interactive surfaces that enable the construction of low-cost multi-touch displays, and relatively cheap sensor technology to detect foot gestures, allow the deep exploration of these input modalities for GIS users with medium or low expertise. In this paper, we provide a categorization of multi-touch hand and foot gestures for the interaction with spatial data on a large-scale interactive wall. In addition, we show with an initial evaluation how these gestures can improve the overall interaction with spatial information.
Map Torchlight: A Mobile Augmented Reality Camera Projector Unit Johannes Schöning, Michael Rohs, Sven Kratz, Markus Löchtefeld, Antonio Krüger Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
        
The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.

Workshop Papers

Let your body move: electrical muscle stimuli as haptics Pedro Lopes, Max Pfeiffer, Michael Rohs, Patrick Baudisch Let your body move - a tutorial on electrical muscle stimuli as haptics 2015
     
A Design Space for Electrical Muscle Stimulation Feedback for Free-Hand Interaction Max Pfeiffer, Stefan Schneegass, Florian Alt, Michael Rohs Workshop on Assistive Augmentation at CHI 2014
     
Free-hand interaction is becoming a common technique for interacting with large displays. At the same time, providing haptic feedback for free-hand interaction is still a challenge, particularly feedback with different characteristics (i.e., strengths, patterns) to convey particular information. We see electrical muscle stimulation (EMS) as a well-suited technology for providing haptic feedback in this domain. The characteristics of EMS can be used to assist users in learning, manipulating, and perceiving virtual objects. One of the core challenges is to understand these characteristics and how they can be applied. As a step in this direction, this paper presents a design space that identifies different aspects of using EMS for haptic feedback. The design space is meant as a basis for future research investigating how particular characteristics can be exploited to provide specific haptic feedback.
Casual Interaction: Scaling Fidelity for Low-Engagement Interactions Henning Pohl, Michael Rohs, Roderick Murray-Smith Workshop on Peripheral Interaction: Shaping the Research and Design Space at CHI 2014
     
When interacting casually, users relinquish some control over their interaction to gain the freedom to devote their engagement elsewhere. This allows them to still interact even when they are encumbered, distracted, or engaging with others. With their focus on something else, casual interaction will often take place in the periphery: either spatially (e.g., interacting to the side) or with respect to attention (interacting in the background).
Attjector: an Attention-Following Wearable Projector Sven Kratz, Michael Rohs, Felix Reitberger, Jörg Moldenhauer Kinect Workshop at Pervasive 2012
     
Mobile handheld projectors in small form factors, e.g., integrated into mobile phones, are getting more common. However, managing the projection puts a burden on the user, as it requires holding the hand steady over an extended period of time and draws attention away from the actual task to solve. To address this problem, we propose a body-worn projector that follows the user's locus of attention. The idea is to take the user's hand and dominant fingers as an indication of the current locus of attention and focus the projection on that area. Technically, a wearable and steerable camera-projector system positioned above the shoulder tracks the fingers and follows their movement. In this paper, we justify our approach and explore further ideas on how to apply steerable projection for wearable interfaces. Additionally, we describe a Kinect-based prototype of the wearable and steerable projector system we developed.
WorldCupinion: Experiences with an Android App for Real-Time Opinion Sharing during World Cup Soccer Games Michael Rohs, Sven Kratz, Robert Schleicher, Alireza Sahami, Albrecht Schmidt Research in the Large: Using App Stores, Markets and other wide distribution channels in UbiComp research. Workshop at Ubicomp 2010
     
Mobile devices are increasingly used in social networking applications. So far, there is little work on real-time emotion and opinion sharing in large loosely-coupled user communities. We present an Android app for giving real-time feedback during soccer games and for creating ad hoc fan groups. We discuss our experiences with deploying this app over four weeks during the 2010 soccer World Cup. We highlight the challenges and opportunities we faced and give recommendations for future work in this area.
A Tabletop System for supporting Paper Prototyping of Mobile Interfaces Benjamin Bähr, Michael Rohs, Sven Kratz PaperComp 2010: 1st International Workshop on Paper Computing. Workshop at Ubicomp 2010
     
We present a tabletop-based system that supports rapid paper-based prototyping for mobile applications. Our system combines the possibility of manually sketching interface screens on paper with the ability to define dynamic interface behavior through actions on the tabletop. This not only allows designers to digitize interface sketches for paper prototypes, but also enables the generation of prototype applications able to run on target devices. By making physical and virtual interface sketches interchangeable, our system greatly enhances and speeds up the development of mobile applications early in the interface design process.
Natural User Interfaces in Mobile Phone Interaction Sven Kratz, Fabian Hemmert, Michael Rohs Workshop on Natural User Interfaces at CHI 2010
     
User interfaces for mobile devices move away from mainly button- and menu-based interaction styles and towards more direct techniques, involving rich sensory input and output. The recently proposed concept of Natural User Interfaces (NUIs) provides a way to structure the discussion about these developments. We examine how two-sided and around-device interaction, gestural input, and shape- and weight-based output can be used to create NUIs for mobile devices. We discuss the applicability of NUI properties in the context of mobile interaction.
LittleProjectedPlanet: An Augmented Reality Game for Camera Projector Phones Markus Löchtefeld, Johannes Schöning, Michael Rohs, Antonio Krüger Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2009), Bonn, Germany, September 15, 2009
     
With the miniaturization of projection technology, the integration of tiny projection units, normally referred to as pico projectors, into mobile devices is no longer fiction. Such integrated projectors in mobile devices could make mobile projection ubiquitous within the next few years. These phones will soon have the ability to project large-scale information onto any surface in the real world. In this way, the interaction space of the mobile device can be expanded to physical objects in the environment, and this can support interaction concepts that are not even possible on modern desktop computers today. In this paper, we explore the possibilities of camera projector phones with a mobile adaptation of the PlayStation 3 game LittleBigPlanet. The camera projector unit is used to augment the hand drawings of a user with an overlay displaying the physical interaction of virtual objects with the real world. Players can sketch a 2D world on a sheet of paper or use an existing physical configuration of objects and let the physics engine simulate physical processes in this world to achieve game goals.
Unobtrusive Tabletops: Linking Personal Devices with Regular Tables Sven Kratz, Michael Rohs Workshop Multitouch and Surface Computing at CHI'09
     
In this paper we argue that for wide deployment, interactive surfaces should be embedded in real environments as unobtrusively as possible. Rather than deploying dedicated interactive furniture, in environments such as pubs, cafés, or homes it is often more acceptable to augment existing tables with interactive functionality. One example is the use of robust camera-projector systems in real-world settings in combination with spatially tracked touch-enabled personal devices. This retains the normal usage of tabletop surfaces, solves privacy issues, and allows for storage of media items on the personal devices. Moreover, user input can easily be tracked with high precision and low latency and can be attributed to individual users.
Spatial Authentication on Large Interactive Multi-Touch Surfaces Johannes Schöning, Michael Rohs, Antonio Krüger Adjunct Proceedings of the 3rd IEEE Workshop on Tabletops and Interactive Surfaces (IEEE Tabletop 2008), Amsterdam, the Netherlands, October 1-3, 2008
     
The exploitation of finger and hand tracking technology based on infrared light, such as FTIR, Diffused Illumination (DI) or Diffused Surface Illumination (DSI), has enabled the construction of large-scale, low-cost, interactive multi-touch surfaces. In this context, access and security problems arise if larger teams operate these surfaces with different access rights. The team members might have several levels of authority or specific roles, which determine what functions and objects they are allowed to access via the multi-touch surface. In this paper we present first concepts and strategies to authenticate and interact with subregions of a large-scale multi-touch wall.
A GPS Tracking Application with a Tilt- and Motion-Sensing Interface Michael Mock, Michael Rohs Workshop on Mobile and Embedded Interactive Systems (MEIS at Informatik 2008), Munich, Germany, September 11, 2008
     
Combining GPS tracks with semantic annotations is the basis for large data analysis tasks that give insight into the movement behavior of populations. In this paper, we present a first prototype implementation of a GPS tracking application that aims at subsuming GPS tracking and manual annotation on a standard mobile phone. The main purpose of this prototype is to investigate its usability, which is achieved by a tilt- and motion-sensing interface. We provide a GPS diary function that visualizes GPS trajectories on a map, allows annotating the trajectory, and navigating through the trajectory by moving and tilting the mobile phone. We present the design of our application and report on the very first user experiences.
Navigating Dynamically-Generated High Quality Maps on Tilt-Sensing Mobile Devices Sven Kratz, Michael Rohs Workshop on Mobile and Embedded Interactive Systems (MEIS at Informatik 2008), Munich, Germany, September 11, 2008
     
On mobile devices, navigating in high-resolution and high-density 2D information spaces, such as geographic maps, is a common and important task. In order to support this task, we expand on work done in the areas of tilt-based browsing on mobile devices and speed-dependent automatic zooming in the traditional desktop environment to create an efficient interface for browsing high-volume map data at a wide range of scales. We also discuss infrastructure aspects, such as streaming 2D content to the device and efficiently rendering it on the display, using standards such as Scalable Vector Graphics (SVG).
Mobile Interaction with the "Real World" Johannes Schöning, Michael Rohs, Antonio Krüger Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2008), Amsterdam, The Netherlands, September 2, 2008
     
Real-world objects (and the world) are usually not flat. It is unfortunate, then, that mobile augmented reality (AR) applications often concentrate on the interaction with 2D objects. Typically, 2D markers are required to track mobile devices relative to the real-world objects to be augmented, and the interaction with these objects is normally limited to the fixed plane in which these markers are located. Using platonic solids, we show how to easily extend the interaction space to tangible 3D models. In particular, we present a proof-of-concept example in which users interact with a 3D paper globe using a mobile device that augments the globe with additional information. (In other words, mobile interaction with the "real world".) We believe that this particular 3D interaction with a paper globe can be very helpful in educational settings, as it allows pupils to explore our planet in an easy and intuitive way. An important aspect is that using the real shape of the world can help to correct many common geographic misconceptions that result from the projection of the earth's surface onto a 2D plane.
Photomap: Snap, Grab and Walk away with a "YOU ARE HERE" Map Keith Cheverst, Johannes Schöning, Antonio Krüger, Michael Rohs Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2008), Amsterdam, The Netherlands, September 2, 2008
     
One compelling scenario for the use of GPS-enabled phones is support for navigation, e.g. enabling a user to glance down at the screen of her mobile phone in order to be reassured that she is indeed located where she thinks she is. While service-based approaches to support such navigation tasks are becoming increasingly available, whereby a user downloads (for a fee) a relevant map of her current area onto her GPS-enabled phone, the approach is often far from ideal. Typically, the user is unsure about the cost of downloading the map (especially when she is in a foreign country), and such maps are highly generalized and may not match the user's current activity and needs. For example, rather than requiring a standard map of the area on a mobile device, the user may simply require a map of a university campus with all departments or a map showing footpaths around the area in which she is currently trekking. Indeed, one will often see such specialized maps on public signs situated where they may be required (in a just-in-time sense), and it is interesting to consider how one might enable users to walk up to such situated signs and use their mobile phone to 'take away' the map presented in order to use it to assist their ongoing navigation activity. In this paper, we are interested in a subset of this problem space in which the user 'grabs' a map shown on a public display by taking a photograph of it and using it as a digital map on her mobile phone. We present two different scenarios for our new application called PhotoMap: in the first one we have full control over the map design process (e.g. we are able to place markers); in the second scenario we use the map as it is and appropriate it for further navigation use.
Using Mobile Phones to Spontaneously Authenticate and Interact with Multi-Touch Surfaces Johannes Schöning, Michael Rohs, Antonio Krüger Proceedings of the Workshop on Designing Multi-Touch Interaction Techniques for Coupled Public and Private Displays (PPD at AVI 2008), Naples, Italy, May 31, 2008
     
The development of FTIR (Frustrated Total Internal Reflection) technology has enabled the construction of large-scale, low-cost, multi-touch displays. These displays—capable of sensing fingers, hands, and whole arms—have great potential for exploring complex data in a natural manner and easily scale in size and the number of simultaneous users. In this context, access and security problems arise if a larger team operates the surface with different access rights. The team members might have different levels of authority or specific roles, which determines what functions they are allowed to access via the multi-touch surface. In this paper we present first concepts and strategies to use a mobile phone to spontaneously authenticate and interact with sub-regions of a large-scale multi-touch wall.
Facilitating Opportunistic Interaction with Ambient Displays Christian Kray, Areti Galani, Michael Rohs Workshop on Designing and Evaluating Mobile Phone-Based Interaction with Public Displays at CHI 2008, Florence, Italy, April 5, 2008
     
Some public display systems provide information that is vital for people in their vicinity (such as departure times at airports and train stations) whereas other screens are more ambient (such as displays providing background information on exhibits in a museum). The question we are discussing in this paper is how to design interaction mechanisms for the latter, in particular how mobile phones can be used to enable opportunistic and leisurely interaction. We present results from an investigation into the use and perception of a public display in a café, and we derive some requirements for phone-based interaction with (ambient) public displays. Based on these requirements, we briefly evaluate three different interaction techniques.

Posters

Hands-on introduction to interactive electric muscle stimulation Pedro Lopes, Max Pfeiffer, Michael Rohs, Patrick Baudisch CHI '16 Extended Abstracts on Human Factors in Computing Systems - CHI EA '16
     
In this course, participants create their own prototypes using electrical muscle stimulation. We provide a ready-to-use device and toolkit consisting of electrodes, a microcontroller, and an off-the-shelf muscle stimulator that allows for programmatically actuating the user's muscles directly from mobile devices.
Follow the Force: Steering the Index Finger towards Targets using EMS Oliver Kaul, Max Pfeiffer, Michael Rohs CHI '16 Extended Abstracts on Human Factors in Computing Systems - CHI EA '16
     
In mobile contexts, guidance towards objects is usually provided through the visual channel. Sometimes this channel is overloaded or not appropriate, and providing a practicable form of haptic feedback is challenging. Electrical muscle stimulation (EMS) can generate mobile force feedback but has a number of drawbacks: for complex movements, several muscles need to be actuated in concert, and a feedback loop is necessary to control the movements. We present an approach that only requires the actuation of six muscles with four pairs of electrodes to guide the index finger to a 2D point and to let the user perform mid-air disambiguation gestures. In our user study, participants found invisible, static target positions on top of a physical box with a mean 2D deviation of 1.44 cm from the intended target.
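A minimal sketch of one iteration of such a closed-loop controller is shown below, assuming a tracking system that reports the fingertip position and an EMS driver with four directional electrode pairs. The EmsDriver stub, gain, and intensity limit are hypothetical, not the authors' actual hardware interface.

```python
class EmsDriver:
    """Stand-in for the real EMS hardware; prints instead of stimulating."""
    def set_intensity(self, channel: str, percent: float) -> None:
        print(f"{channel}: {percent:.0f}%")

def guidance_step(finger, target, ems, gain=40.0, max_intensity=70.0):
    """One proportional-control step pulling the finger toward the target.
    Positions are (x, y) in cm; larger error -> stronger stimulation."""
    dx, dy = target[0] - finger[0], target[1] - finger[1]
    for error, pos_channel, neg_channel in ((dx, "right", "left"),
                                            (dy, "up", "down")):
        channel = pos_channel if error > 0 else neg_channel
        ems.set_intensity(channel, min(abs(error) * gain, max_intensity))

guidance_step(finger=(2.0, 1.0), target=(3.5, 0.5), ems=EmsDriver())
```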
CapCouch: Home Control With a Posture-Sensing Couch Henning Pohl, Markus Hettig, Oliver Karras, Hatice Öztürk, Michael Rohs Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication - UbiComp '15 Adjunct
        
In relaxed living room settings, using a phone to control the room can be inappropriate or cumbersome. Instead of such explicit interactions, we enable implicit control via a posture-sensing couch. Users can then, e.g., automatically turn on the reading lights when sitting down.
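A minimal sketch of this implicit-control idea follows, assuming a classifier that turns the couch's capacitive readings into posture labels; the labels, device names, and command interface are illustrative, not the actual CapCouch implementation.

```python
POSTURE_ACTIONS = {                       # hypothetical posture -> actions map
    "sitting_upright": [("reading_light", "on")],
    "lying_down":      [("reading_light", "off"), ("tv", "on")],
    "empty":           [("reading_light", "off"), ("tv", "off")],
}

def on_posture_change(posture: str, send_command) -> None:
    """Trigger the home-control actions associated with a posture label."""
    for device, state in POSTURE_ACTIONS.get(posture, []):
        send_command(device, state)

# e.g. the classifier just reported that someone sat down upright:
on_posture_change("sitting_upright", lambda dev, st: print(dev, "->", st))
```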
Ergonomic Characteristics of Gestures for Front- and Back-of-tablets Interaction with Grasping Hands Katrin Wolf, Robert Schleicher, Michael Rohs Proceedings of the 16th International Conference on Human-computer Interaction with Mobile Devices and Services - MobileHCI '14
     
The thumb and the fingers differ in flexibility, and thus gestures performed on the back of a held tablet are expected to differ from those performed on the touchscreen with the thumb of the grasping hand. APIs for back-of-device gesture detection should take that difference into account. In a user study, we recorded gesture vectors for the four most common touch gestures. We found that drag, swipe, and press gestures differ significantly when executed on the back versus the front side of a held tablet. We provide corresponding values that may be used to define gesture detection thresholds for back-of-tablet interaction.
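To illustrate how such values could feed into an API, the sketch below classifies a touch trajectory using per-side thresholds. The numbers are placeholders, not the values reported in the paper.

```python
THRESHOLDS = {              # per side: (min path length in mm to count as a
    "front": (10.0, 80.0),  # movement gesture, min swipe speed in mm/s)
    "back":  (14.0, 60.0),  # back-of-device gestures tend to be larger/slower
}

def classify(side: str, path_len_mm: float, speed_mm_s: float,
             duration_s: float) -> str:
    """Classify one touch trajectory as tap, press, drag, or swipe."""
    min_len, min_swipe_speed = THRESHOLDS[side]
    if path_len_mm < min_len:                      # barely moved
        return "press" if duration_s > 0.5 else "tap"
    return "swipe" if speed_mm_s >= min_swipe_speed else "drag"

print(classify("back", path_len_mm=20.0, speed_mm_s=90.0, duration_s=0.3))  # swipe
```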
Dynamic ambient lighting for mobile devices Qian Qin, Michael Rohs, Sven Kratz Proceedings of the 24th annual ACM symposium adjunct on User interface software and technology
        
The information a small mobile device can show via its display has always been limited by the display's size. In large information spaces, relevant information, such as important locations on a map, can get clipped when a user starts zooming and panning. Dynamic ambient lighting allows mobile devices to visualize off-screen objects by illuminating the background without compromising valuable display space. The lighted spots can be used to show the direction and distance of such objects by varying the spot's position and intensity. Dynamic ambient lighting also provides a new way of displaying the state of a mobile device. Illumination is provided by a prototype rear-of-device shell that contains LEDs and requires the device to be placed on a surface, such as a table or desk.
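A minimal sketch of the underlying mapping follows, assuming a ring of LEDs around the device shell; the LED count and the distance-to-intensity falloff are illustrative assumptions, not the prototype's actual parameters.

```python
import math

NUM_LEDS = 16  # hypothetical ring of LEDs in the rear-of-device shell

def spot_for_object(obj_x: float, obj_y: float) -> tuple[int, float]:
    """Pick the LED pointing toward an off-screen object and an intensity
    that decreases with distance. Coordinates are relative to the screen
    center, in the same units (e.g. map pixels)."""
    angle = math.atan2(obj_y, obj_x)                       # object direction
    led = round(angle / (2 * math.pi) * NUM_LEDS) % NUM_LEDS
    distance = math.hypot(obj_x, obj_y)
    intensity = 1.0 / (1.0 + distance / 100.0)             # nearer -> brighter
    return led, intensity

print(spot_for_object(250.0, -120.0))
```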
CapWidgets: Tangible Widgets Versus Multi-touch Controls on Mobile Devices Sven Kratz, Tilo Westermann, Michael Rohs, Georg Essl CHI '11 Extended Abstracts on Human Factors in Computing Systems
        
We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We compare the performance of this tangible interaction with direct multi-touch interaction; our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and that careful design is necessary to validate their properties.
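One plausible way to recognize such a passive marker is to match the constellation of touch points it produces against a registry of known contact-point spacings. The sketch below assumes two contact points per widget; the registry and tolerance are illustrative, not the paper's actual detection method.

```python
import math

MARKERS = {12.0: "volume_knob", 18.0: "slider_cap"}  # mm between contact points

def identify_marker(touches, tolerance_mm=1.5):
    """Match a pair of simultaneous touch points against known widgets."""
    if len(touches) != 2:
        return None
    (x1, y1), (x2, y2) = touches
    spacing = math.hypot(x2 - x1, y2 - y1)
    for known_spacing, name in MARKERS.items():
        if abs(spacing - known_spacing) <= tolerance_mm:
            return name
    return None

print(identify_marker([(40.0, 40.0), (52.1, 40.0)]))  # volume_knob
```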
Microphone as Sensor in Mobile Phone Performance Ananya Misra, Georg Essl, Michael Rohs Proceedings of the 8th International Conference on New Interfaces for Musical Expression (NIME 2008), Genova, Italy, June 5-7, 2008
     
Many mobile devices, specifically mobile phones, come equipped with a microphone. Microphones are high-fidelity sensors that can pick up sounds relating to a range of physical phenomena. Using simple feature extraction methods, parameters can be found that sensibly map to synthesis algorithms to allow expressive and interactive performance. For example, blowing noise can be used as a wind instrument excitation source. Other types of interaction, such as striking, can also be detected via the microphone. Hence the microphone, in addition to allowing literal recording, serves as an additional source of input to the developing field of mobile phone performance.
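To make the feature-extraction idea concrete, the sketch below computes frame-wise RMS energy from microphone samples and maps it onto a breath-pressure parameter for a wind-instrument synthesizer. NumPy is used for brevity; the mapping and constants are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def rms_per_frame(samples: np.ndarray, frame_size: int = 512) -> np.ndarray:
    """Frame-wise RMS energy of a mono audio signal."""
    n = len(samples) // frame_size * frame_size
    frames = samples[:n].reshape(-1, frame_size)
    return np.sqrt((frames ** 2).mean(axis=1))

def breath_pressure(rms: np.ndarray, noise_floor=0.01, full_scale=0.3) -> np.ndarray:
    """Map RMS linearly onto a 0..1 wind-instrument excitation parameter."""
    return np.clip((rms - noise_floor) / (full_scale - noise_floor), 0.0, 1.0)

mic = np.random.uniform(-0.2, 0.2, 48000)   # stand-in for one second of audio
print(breath_pressure(rms_per_frame(mic))[:4])
```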

Demos

A Wearable Force Feedback Toolkit with Electrical Muscle Stimulation Max Pfeiffer, Tim Duente, Michael Rohs CHI '16 Extended Abstracts on Human Factors in Computing Systems - CHI EA '16
     
Electrical muscle stimulation (EMS) is a promising wearable haptic output technology as it can be miniaturized and delivers a wide range of tactile and force output. However, prototyping EMS applications is currently challenging and requires detailed knowledge about EMS. We present a toolkit that simplifies prototyping with EMS and serves as a starting point for experimentation and user studies. It consists of (1) a hardware control module that uses off-the-shelf EMS devices as safe signal generators, (2) a simple communication protocol, and (3) a set of control applications for prototyping. The interactivity allows hands-on experimentation with our sample control applications.
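In the spirit of such a simple communication protocol, the sketch below encodes one stimulation command as a compact text message. The message format and field names are hypothetical, not the toolkit's actual protocol.

```python
def make_command(channel: int, intensity: int, duration_ms: int) -> bytes:
    """Encode one stimulation command as a compact ASCII message,
    e.g. to be sent over a Bluetooth serial link to the control module."""
    assert 0 <= intensity <= 100, "intensity is a percentage"
    return f"C{channel};I{intensity};T{duration_ms}\n".encode("ascii")

print(make_command(channel=1, intensity=60, duration_ms=500))  # b'C1;I60;T500\n'
```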
Wrist Compression Feedback by Pneumatic Actuation Henning Pohl, Dennis Becke, Eugen Wagner, Maximilian Schrapel, Michael Rohs CHI '15 Extended Abstracts on Human Factors in Computing Systems - CHI EA '15
     
Most common forms of haptic feedback use vibration, which immediately captures the user's attention, yet is limited in the range of strengths it can achieve. Vibration feedback over extended periods also tends to be annoying. We present compression feedback, a form of haptic feedback that scales from very subtle to very strong and is able to provide sustained stimuli and pressure patterns. The demonstration may serve as an inspiration for further work in this area, applying compression feedback to generate subtle, intimate, as well as intense feedback.
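As an illustration of a sustained pressure pattern, the sketch below ramps a pneumatic actuator sinusoidally between a subtle and a strong level. The controller interface, pressure range, and timing are assumptions, not the demonstrated system's API.

```python
import math
import time

def breathing_pattern(set_pressure, min_kpa=2.0, max_kpa=20.0,
                      period_s=4.0, cycles=3, steps=40):
    """Ramp pressure sinusoidally between a subtle and a strong level."""
    for i in range(cycles * steps):
        phase = 2 * math.pi * i / steps
        level = min_kpa + (max_kpa - min_kpa) * (1 + math.sin(phase)) / 2
        set_pressure(level)
        time.sleep(period_s / steps)

# short demo run with a stand-in controller that just prints the setpoint:
breathing_pattern(lambda kpa: print(f"{kpa:.1f} kPa"), cycles=1, steps=8)
```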
Squeezing the Sandwich: A Mobile Pressure-Sensitive Two-Sided Multi-Touch Prototype Georg Essl, Michael Rohs, Sven Kratz Demonstration at the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST), Victoria, BC, Canada
     
Two-sided pressure input is common in everyday interactions such as grabbing, sliding, twisting, and turning an object held between thumb and index finger. We describe and demonstrate a research prototype which allows for two-sided multi-touch sensing with continuous pressure input at interactive rates, and we explore early ideas of interaction techniques that become possible with this setup. The advantage of a two-sided pressure interaction is that it enables high degree-of-freedom input locally. Hence rather complex, yet natural interactions can be designed using little finger motion and device space.
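As one example of the high degree-of-freedom input this enables, the sketch below derives a "twist" parameter from opposing motion of the front (thumb) and back (index finger) touch points. The input format and scaling are illustrative assumptions, not the prototype's actual interaction technique.

```python
def twist_amount(front_dx_mm: float, back_dx_mm: float, scale=0.05) -> float:
    """Thumb (front) and index finger (back) moving in opposite directions
    along x yields a twist; positive means clockwise, magnitude grows with
    the amount of opposing motion."""
    return scale * (front_dx_mm - back_dx_mm)

print(twist_amount(front_dx_mm=6.0, back_dx_mm=-5.0))  # 0.55: clockwise twist
```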