Publications

2017
HapticHead: A Spherical Vibrotactile Grid around the Head for 3D Guidance in Virtual and Augmented Reality Oliver Beren Kaul, Michael Rohs Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '17
Full Paper
        
Emotion Actuator: Embodied Emotional Feedback through Electroencephalography and Electrical Muscle Stimulation Mariam Hassib, Max Pfeiffer, Stefan Schneegass, Michael Rohs, Florian Alt Proc. of CHI 2017
Full Paper
        
The human body reveals emotional and bodily states through measurable signals, such as body language and electroencephalography. However, such manifestations are difficult to communicate to others remotely. We propose EmotionActuator, a proof-of-concept system to investigate the transmission of emotional states in which the recipient performs emotional gestures to understand and interpret the state of the sender. We call this kind of communication embodied emotional feedback, and present a prototype implementation. To realize our concept we chose four emotional states: amused, sad, angry, and neutral. We designed EmotionActuator through a series of studies to assess emotional classification via EEG and to create an EMS gesture set by comparing composed gestures from the literature to sign-language gestures. In a final study with the end-to-end prototype, interviews revealed that participants like implicit sharing of emotions and find the embodied output to be immersive, but want to have control over which emotions they share and with whom. This work contributes a proof-of-concept system and a set of design recommendations for designing embodied emotional feedback systems.
Squeezeback: Pneumatic Compression for Notifications Henning Pohl, Peter Brandes, Hung Ngo Quang, Michael Rohs Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '17
Full Paper
        
Current mobile devices commonly use vibration feedback to signal incoming notifications. However, vibration feedback exhibits strong attention capture, limiting its use to short periods and prominent notifications. Instead, we investigate the use of compression feedback for notifications, which scales from subtle stimuli to strong ones and can provide sustained stimuli over longer periods. Compression feedback utilizes inflatable straps around a user's limbs, a form factor allowing for easy integration into many common wearables. We explore technical aspects of compression feedback and investigate its psychophysical properties with several lab and in situ studies. Furthermore, we show how compression feedback enables reactive feedback. Here, deflation patterns are used to reveal further information on a user's query. We also compare compression and vibrotactile feedback and find that they have similar performance.
Zap++: A 20-Channel Electrical Muscle Stimulation System for Fine-Grained Wearable Force Feedback Tim Duente, Max Pfeiffer, Michael Rohs Proceedings of the 19th international conference on Human-computer interaction with mobile devices and services adjunct - MobileHCI '17 Adjunct
Full Paper
     
Electrical muscle stimulation (EMS) has been used successfully in HCI to generate force feedback and simple movements both in stationary and mobile settings. However, many natural limb movements require the coordinated actuation of multiple muscles. Off-the-shelf EMS devices are typically limited in their ability to generate fine-grained movements, because they only have a low number of channels and do not provide full control over the EMS parameters. More capable medical devices are not designed for mobile use or still have a lower number of channels and less control than is desirable for HCI research. In this paper we present the concept and a prototype of a 20-channel mobile EMS system that offers full control over the EMS parameters. We discuss the requirements of wearable multi-electrode EMS systems and present the design and technical evaluation of our prototype. We further outline several application scenarios and discuss safety and certification issues.
Inhibiting Freedom of Movement with Compression Feedback Henning Pohl, Franziska Hoheisel, Michael Rohs Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '17
Poster
        
Compression feedback uses inflatable straps to create uniform pressure sensations around limbs. Lower-pressure stimuli are well suited as a feedback channel for, e.g., notifications. However, operating compression feedback systems at higher pressure levels makes it possible to physically inhibit movement. Here, we describe this modality and present a pervasive jogging game that employs physical inhibition to push runners to reach checkpoints in time.
Beyond Just Text: Semantic Emoji Similarity Modeling to Support Expressive Communication đŸ‘« đŸ“Č 😃 Henning Pohl, Christian Domin, Michael Rohs ACM Transactions on Computer-Human Interaction
Other
        
Emoji, a set of pictographic Unicode characters, have seen strong uptake over the last couple of years. All common mobile platforms and many desktop systems now support emoji entry and users have embraced their use. Yet, we currently know very little about what makes for good emoji entry. While soft keyboards for text entry are well optimized, based on language and touch models, no such information exists to guide the design of emoji keyboards. In this article, we investigate the problem of emoji entry, starting with a study of the current state of the emoji keyboard implementation in Android. To enable moving forward to novel emoji keyboard designs, we then explore a model for emoji similarity that is able to inform such designs. This semantic model is based on data from 21 million collected tweets containing emoji. We compare this model against a solely description-based model of emoji in a crowdsourced study. Our model shows good performance in capturing detailed relationships between emoji.
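The article models similarity from emoji co-occurrence in tweets; as a rough, hypothetical sketch of that general idea (the miniature corpus, tokenization, and function names below are invented for illustration, not the paper's pipeline), similarity could be computed as the cosine between context vectors:

```python
from collections import Counter
from math import sqrt

# Hypothetical miniature corpus; the paper used 21 million tweets.
tweets = [
    "off to the beach ☀ 😎 đŸ–",
    "so happy today 😃 ☀",
    "rain again 😭 ☔",
    "crying at this movie 😭 💔",
]

def context_vector(target):
    """Count tokens co-occurring with `target` across all tweets."""
    vec = Counter()
    for tweet in tweets:
        tokens = tweet.split()
        if target in tokens:
            vec.update(t for t in tokens if t != target)
    return vec

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Emoji sharing contexts ("☀" and "😎") come out more similar
# than unrelated ones ("☀" and "😭", which share no context here).
print(cosine(context_vector("☀"), context_vector("😎")))
print(cosine(context_vector("☀"), context_vector("😭")))
```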
2016
EmojiZoom: Emoji Entry via Large Overview Maps 😄 🔍 Henning Pohl, Dennis Stanke, Michael Rohs Proceedings of the 18th international conference on Human-computer interaction with mobile devices and services - MobileHCI '16
Full Paper
        
Current soft keyboards for emoji entry all present emoji in the same way: in long lists, spread over several categories. While categories limit the number of emoji in each individual list, the overall number is still so large that emoji entry is a challenging task. The task takes particularly long if users pick the wrong category when searching for an emoji. Instead, we propose a new zooming keyboard for emoji entry. Here, users can see all emoji at once, aiding in building spatial memory of where related emoji are to be found. We compare our zooming emoji keyboard against the Google keyboard and find that our keyboard allows for 18% faster emoji entry, reducing the required time for one emoji from 15.6 s to 12.7 s. A preliminary longitudinal evaluation with three participants showed that emoji entry time improved by up to 60% over the duration of the study, to a final average of 7.5 s.
ScatterWatch: Subtle Notifications via Indirect Illumination Scattered in the Skin Henning Pohl, Justyna Medrek, Michael Rohs Proceedings of the 18th international conference on Human-computer interaction with mobile devices and services - MobileHCI '16
Full Paper
        
With the increasing popularity of smartwatches over the last years, there has been substantial interest in novel input methods for such small devices. However, feedback modalities for smartwatches have not seen the same level of interest. This is surprising, as one of the primary functions of smartwatches is their use for notifications. It is the interrupting nature of current notifications on smartwatches that has also drawn some of the more critical responses to them. Here, we present a subtle notification mechanism for smartwatches that uses light scattering in a wearer's skin as a feedback modality. This does not disrupt the wearer in the same way as vibration feedback and also connects more naturally with the user's body.
Let Your Body Move: A Prototyping Toolkit for Wearable Force Feedback with Electrical Muscle Stimulation Max Pfeiffer, Tim Duente, Michael Rohs Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services
Full Paper
        
Electrical muscle stimulation (EMS) is a promising wearable haptic output technology as it can be miniaturized considerably and delivers a wide range of haptic output. However, prototyping EMS applications is challenging. It requires detailed knowledge and skills about hardware, software, and physiological characteristics. To simplify prototyping with EMS in mobile and wearable situations we present the Let Your Body Move toolkit. It consists of (1) a hardware control module with Bluetooth communication that uses off-the-shelf EMS devices as signal generators, (2) a simple communications protocol to connect mobile devices, and (3) a set of control applications as starting points for EMS prototyping. We describe EMS-specific parameters, electrode placements on the skin, and user calibration. The toolkit was evaluated in a workshop with 10 researchers in haptics. The results show that the toolkit allows non-trivial prototypes to be generated quickly. The hardware schematics and software components are available as open source software.
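The toolkit's real protocol is documented in its open-source release; purely as a hypothetical illustration of what a simple command protocol for such a Bluetooth control module might look like (the message layout and field names below are assumptions, not the toolkit's actual format):

```python
import struct

# Hypothetical message layout (NOT the toolkit's real protocol):
# 1 byte channel id, 1 byte intensity (0-100 %), 2 bytes pulse width in ”s,
# packed big-endian for transmission over a Bluetooth serial link.
FMT = ">BBH"

def encode_command(channel: int, intensity: int, pulse_width_us: int) -> bytes:
    # EMS intensity should always stay within user-calibrated limits.
    assert 0 <= intensity <= 100
    return struct.pack(FMT, channel, intensity, pulse_width_us)

def decode_command(msg: bytes) -> dict:
    channel, intensity, pulse_width_us = struct.unpack(FMT, msg)
    return {"channel": channel, "intensity": intensity,
            "pulse_width_us": pulse_width_us}

msg = encode_command(channel=0, intensity=40, pulse_width_us=150)
print(msg.hex(), decode_command(msg))
```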
Casual Interaction: Moving Between Peripheral and High Engagement Interactions Henning Pohl Peripheral Interaction: Challenges and Opportunities for HCI in the Periphery of Attention
Book Chapter
     
In what we call the focused-casual continuum, users pick how much control they want to have when interacting. By offering several different ways to interact, such interfaces can then be more appropriate for, e.g., use in some social situations or use when exhausted. In a very basic example, an alarm clock could offer one interaction mode where an alarm can only be turned off, while in another, users can choose between different snooze responses. The first mode is more restrictive but could be controlled with one coarse gesture. Only when the user wishes to pick between several responses is more controlled, finer interaction needed. Low-control, more casual interactions can take place in the background or the periphery of the user, while focused interactions move into the foreground. Along the focused-casual continuum, a plethora of interaction techniques have their place. Currently, focused interaction techniques are often the default ones. In this chapter, we thus focus more closely on techniques for casual interaction, which offer ways to interact with lower levels of control. Presented use cases cover scenarios such as text entry, user recognition, tangibles, and steering tasks. Furthermore, in addition to potential benefits from applying casual interaction techniques during input, there is also a need for feedback which does not immediately grab our attention, but can scale from the periphery to the focus of our attention. Thus, we also cover several such feedback methods and show how the focused-casual continuum can encompass the whole interaction.
Wearable Head-mounted 3D Tactile Display Application Scenarios Oliver Beren Kaul, Michael Rohs Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct
Workshop Paper
        
On-skin Technologies for Muscle Sensing and Actuation Tim Duente, Max Pfeiffer, Michael Rohs Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct
Workshop Paper
        
Electromyography (EMG) and electrical muscle stimulation (EMS) are promising technologies for muscle sensing and actuation in wearable interfaces. The required electrodes can be manufactured to form a thin layer on the skin. We discuss requirements and approaches for EMG and EMS as on-skin technologies. In particular, we focus on fine-grained muscle sensing and actuation with an electrode grid on the lower arm. We discuss a prototype, scenarios, and open issues.
HapticHead: 3D Guidance and Target Acquisition Through a Vibrotactile Grid Oliver Beren Kaul, Michael Rohs Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems
Poster
        
Follow the Force: Steering the Index Finger towards Targets using EMS Oliver Beren Kaul, Max Pfeiffer, Michael Rohs Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems
Poster
        
In mobile contexts, guidance towards objects is usually done through the visual channel. Sometimes this channel is overloaded or not appropriate, and providing a practical form of haptic feedback is challenging. Electrical muscle stimulation (EMS) can generate mobile force feedback but has a number of drawbacks. For complex movements several muscles need to be actuated in concert and a feedback loop is necessary to control movements. We present an approach that only requires the actuation of six muscles with four pairs of electrodes to guide the index finger to a 2D point and let the user perform mid-air disambiguation gestures. In our user study participants found invisible, static target positions on top of a physical box with a mean 2D deviation of 1.44 cm from the intended target.
Hands-on introduction to interactive electric muscle stimulation Pedro Lopes, Max Pfeiffer, Michael Rohs, Patrick Baudisch CHI '16 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '16
Poster
     
In this course, participants create their own prototypes using electrical muscle stimulation. We provide a ready-to-use device and toolkit consisting of electrodes, a microcontroller, and an off-the-shelf muscle stimulator that allows for programmatically actuating the user's muscles directly from mobile devices.
Multi-Level Interaction with an LED-Matrix Edge Display Henning Pohl, Bastian Krefeld, Michael Rohs Proceedings of the 18th international conference on Human-computer interaction with mobile devices and services adjunct - MobileHCI '16 Adjunct
Poster
        
Interaction with mobile devices currently requires close engagement with them. For example, users need to pick them up and unlock them, just to check whether the last notification was for an urgent message. But such close engagement is not always desirable, e.g., when working on a project with the phone just lying around on the table. Instead, we explore around-device interactions to bring up and control notifications. As users get closer to the device, more information is revealed and additional input options become available. This allows users to control how much they want to engage with the device. For feedback, we use a custom LED-matrix display prototype on the edge of the device. This allows for coarse, but bright, notifications in the periphery of attention, but scales up to allow for slightly higher resolution feedback as well.
Improving Plagiarism Detection in Coding Assignments by Dynamic Removal of Common Ground Christian Domin, Henning Pohl, Markus Krause CHI '16 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '16
Poster
        
Plagiarism in online learning environments has a detrimental effect on the trust of online courses and their viability. Automatic plagiarism detection systems do exist, yet the specific situation in online courses restricts their use. To allow for easy automated grading, online assignments usually are less open and instead require students to fill in small gaps. Solutions therefore tend to be very similar, yet are then not necessarily plagiarized. In this paper we propose a new approach to detect code re-use that increases prediction accuracy by dynamically removing parts that appear in almost every assignment, the so-called common ground. Our approach shows significantly better F-measure and Cohen's kappa results than state-of-the-art algorithms such as Moss or JPlag. The proposed method is also language-agnostic, to the point that training and test data sets can be taken from different programming languages.
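As a hedged sketch of the common-ground idea (tokenization, n-gram size, and threshold are invented for illustration; the paper's actual algorithm differs in detail), one could strip n-grams shared by nearly all submissions before computing pairwise similarity:

```python
from collections import Counter

def ngrams(tokens, n=3):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity_without_common_ground(submissions, n=3, common_frac=0.9):
    """Pairwise Jaccard similarity after removing n-grams that occur in at
    least `common_frac` of all submissions (the 'common ground')."""
    gram_sets = [ngrams(s.split(), n) for s in submissions]
    counts = Counter(g for grams in gram_sets for g in grams)
    common = {g for g, c in counts.items() if c >= common_frac * len(submissions)}
    cleaned = [grams - common for grams in gram_sets]
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return [[jaccard(a, b) for b in cleaned] for a in cleaned]

# Toy gap-fill submissions: the shared scaffold is removed before comparison,
# so only the student-written parts drive the similarity scores.
subs = [
    "def grade ( x ) : return x + 1",
    "def grade ( x ) : return x + 1",
    "def grade ( x ) : return 2 * x",
]
print(similarity_without_common_ground(subs, n=2, common_frac=1.0))
```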
A Wearable Force Feedback Toolkit with Electrical Muscle Stimulation Max Pfeiffer, Tim Duente, Michael Rohs CHI '16 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '16
Demo
     
Electrical muscle stimulation (EMS) is a promising wearable haptic output technology as it can be miniaturized and delivers a wide range of tactile and force output. However, prototyping EMS applications is currently challenging and requires detailed knowledge about EMS. We present a toolkit that simplifies prototyping with EMS and serves as a starting point for experimentation and user studies. It consists of (1) a hardware control module that uses off-the-shelf EMS devices as safe signal generators, (2) a simple communication protocol, and (3) a set of control applications for prototyping. The interactivity allows hands-on experimentation with our sample control applications.
2015
Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation Max Pfeiffer, Tim Duente, Stefan Schneegass, Florian Alt, Michael Rohs Proc. of CHI 2015
Full Paper
        
Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is directly sent to the human motor system to influence walking direction. To achieve this goal we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted. The user therefore stays in control. We discuss the properties of actuated navigation and present a lab study on identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16 degrees per meter on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.
A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification Markus Krause, Marc Mogalle, Henning Pohl, Joseph Jay Williams Proceedings of the second ACM conference on Learning @ scale - L@S '15
Full Paper
        
Many MOOCs report high drop-off rates for their students. Factors reportedly contributing to this picture include lack of motivation, feelings of isolation, and lack of interactivity in MOOCs. This paper investigates the potential of gamification with social game elements for increasing retention and learning success. Students in our experiment showed a significant increase of 25% in retention period (videos watched) and 23% higher average scores when the course interface was gamified. Social game elements amplify this effect significantly: students in this condition showed an increase of 50% in retention period and 40% higher average test scores.
3D Virtual Hand Pointing with EMS and Vibration Feedback Max Pfeiffer, Wolfgang Stuerzlinger 3DUI'15
Short Paper
     
One-Button Recognizer: Exploiting Button Pressing Behavior for User Differentiation Henning Pohl, Markus Krause, Michael Rohs Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '15
Short Paper
        
We present a novel way to recognize users by the way they press a button. Our approach allows low-effort and fast interaction without the need for augmenting the user or controlling the environment. It eschews privacy concerns of methods such as fingerprint scanning. Button pressing behavior is sufficiently discriminative to allow distinguishing users within small groups. This approach combines recognition and action in a single step, e.g., getting and tallying a coffee can be done with one button press. We deployed our system for 5 users over a period of 4 weeks and achieved recognition rates of 95% in the last week. We also ran a larger scale but short-term evaluation to investigate effects of group size and found that our method degrades gracefully for larger groups.
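The paper details its actual features and classifier; the following is only a hypothetical sketch of the general recognition scheme (the two per-press features and the nearest-centroid rule are assumptions for illustration, not the paper's method):

```python
import numpy as np

# Hypothetical enrollment data: per press, hold duration (s) and a peak
# force proxy. A real system would use whatever the button sensor provides.
profiles = {
    "alice": np.array([[0.12, 0.80], [0.11, 0.75], [0.13, 0.82]]),
    "bob":   np.array([[0.25, 0.40], [0.27, 0.45], [0.24, 0.38]]),
}

# One centroid per user summarizes their pressing behavior.
centroids = {user: presses.mean(axis=0) for user, presses in profiles.items()}

def recognize(press: np.ndarray) -> str:
    """Nearest-centroid classification of a single button press."""
    return min(centroids, key=lambda u: np.linalg.norm(press - centroids[u]))

print(recognize(np.array([0.12, 0.78])))  # -> "alice"
```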
Let your body move: electrical muscle stimuli as haptics Pedro Lopes, Max Pfeiffer, Michael Rohs, Patrick Baudisch Let your body move - a tutorial on electrical muscle stimuli as haptics 2015
Workshop Paper
     
3D Virtual Hand Pointing with EMS and Vibration Feedback Max Pfeiffer, Wolfgang Stuerzlinger CHI'15
Poster
     
CapCouch: Home Control With a Posture-Sensing Couch Henning Pohl, Markus Hettig, Oliver Karras, Hatice ÖztĂŒrk, Michael Rohs Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication - UbiComp '15 Adjunct
Poster
        
In relaxed living room settings, using a phone to control the room can be inappropriate or cumbersome. Instead of such explicit interactions, we enable implicit control via a posture-sensing couch. Users can then, e.g., automatically turn on the reading lights when sitting down.
Wrist Compression Feedback by Pneumatic Actuation Henning Pohl, Dennis Becke, Eugen Wagner, Maximilian Schrapel, Michael Rohs CHI '15 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '15
Demo
     
Most common forms of haptic feedback use vibration, which immediately captures the user's attention, yet is limited in the range of strengths it can achieve. Vibration feedback over extended periods also tends to be annoying. We present compression feedback, a form of haptic feedback that scales from very subtle to very strong and is able to provide sustained stimuli and pressure patterns. The demonstration may serve as an inspiration for further work in this area, applying compression feedback to generate subtle, intimate, as well as intense feedback.
Casual Interaction: Scaling Interaction for Multiple Levels of Engagement Henning Pohl CHI '15 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '15
Other
     
In the focused-casual continuum, users are given a choice of how much they wish to engage with an interface. In situations where they are, e.g., physically encumbered, they may wish to trade some control for the convenience of interacting at all. Currently, most devices only offer focused interaction capabilities or restrict users to binary foreground/background interaction choices. In casual interactions, users consciously pick a way to interact that is suitable for their desired engagement level. Users will be expecting devices to offer several ways for control along the engagement scale.
2014
Let Me Grab This: A Comparison of EMS and Vibration for Haptic Feedback in Free-Hand Interaction Max Pfeiffer, Stefan Schneegass, Florian Alt, Michael Rohs Augmented Human
Full Paper
        
Free-hand interaction with large displays is getting more common, for example in public settings and exertion games. Adding haptic feedback offers the potential for more realistic and immersive experiences. While vibrotactile feedback is well known, electrical muscle stimulation (EMS) has not yet been explored in free-hand interaction with large displays. EMS offers a wide range of different strengths and qualities of haptic feedback. In this paper we first systematically investigate the design space for haptic feedback. Second, we experimentally explore differences between strengths of EMS and vibrotactile feedback. Third, based on the results, we evaluate EMS and vibrotactile feedback with regard to different virtual objects (soft, hard) and interaction with different gestures (touch, grasp, punch) in front of a large display. The results provide a basis for the design of haptic feedback that is appropriate for the given type of interaction and the material.
Around-Device Devices: My Coffee Mug is a Volume Dial Henning Pohl, Michael Rohs Proceedings of the 16th international conference on Human-computer interaction with mobile devices and services - MobileHCI '14
Full Paper
        
For many people their phones have become their main everyday tool. While phones can fulfill many different roles, they also require users to (1) make do with affordances not specialized for the specific task, and (2) closely engage with the device itself. We propose utilizing the space and objects around the phone to offer better task affordances and to create an opportunity for casual interactions. Such around-device devices are a class of interactors that do not require users to bring special tangibles, but repurpose items already found in the user's surroundings. In a survey study, we determine which places and objects are available to around-device devices. Furthermore, in an elicitation study, we observe which objects users would use for ten interactions.
Uncertain Text Entry on Mobile Devices Daryl Weir, Henning Pohl, Simon Rogers, Keith Vertanen, Per Ola Kristensson Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '14
Full Paper
        
Modern mobile devices typically rely on touchscreen keyboards for input. Unfortunately, users often struggle to enter text accurately on virtual keyboards. To address this, we present a novel decoder for touchscreen text entry that combines probabilistic touch models with a long-span language model. We investigate two touch models: one based on Gaussian Processes that implicitly models the inherent uncertainty of the touching process, and a second that allows users to explicitly control the uncertainty via touch pressure. Using the first model we show that character error rate can be reduced by up to 7% over a baseline, and by up to 1.3% over a leading commercial keyboard. With the second model, we demonstrate that providing users with control over input certainty results in improved text entry rates for phrases containing out-of-vocabulary words.
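A much-simplified, hedged sketch of the decoding idea (a per-key isotropic Gaussian stands in for the paper's Gaussian Process touch model; the key layout and two-word vocabulary are invented):

```python
import math

# Toy key centers on a unit grid (invented layout fragment).
KEYS = {"c": (2.0, 1.0), "a": (0.0, 0.0), "t": (4.0, 0.0), "r": (3.0, 0.0)}
VOCAB_LOGPROB = {"cat": math.log(0.7), "car": math.log(0.3)}  # toy unigram LM

def log_touch_likelihood(touches, word, sigma=0.5):
    """log p(touches | word) under an isotropic Gaussian around each key."""
    total = 0.0
    for (x, y), ch in zip(touches, word):
        kx, ky = KEYS[ch]
        total += -((x - kx) ** 2 + (y - ky) ** 2) / (2 * sigma ** 2)
    return total

def decode(touches):
    # Bayes: argmax_w  log p(touches | w) + log p(w)
    return max(VOCAB_LOGPROB,
               key=lambda w: log_touch_likelihood(touches, w) + VOCAB_LOGPROB[w])

# The last touch lands closer to 'r' than 't', yet the language model
# prior pulls the decode to the more probable word "cat".
print(decode([(2.1, 0.9), (0.2, 0.1), (3.4, 0.0)]))
```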
A Design Space for Electrical Muscle Stimulation Feedback for Free-Hand Interaction Max Pfeiffer, Stefan Schneegass, Florian Alt, Michael Rohs Workshop on Assistive Augmentation at CHI 2014
Workshop Paper
     
Free-hand interaction is becoming a common technique for interacting with large displays. At the same time, providing haptic feedback for free-hand interaction is still a challenge, particularly feedback with different characteristics (i.e., strengths, patterns) to convey particular information. We see electrical muscle stimulation (EMS) as a well-suited technology for providing haptic feedback in this domain. The characteristics of EMS can be used to assist users in learning, manipulating, and perceiving virtual objects. One of the core challenges is to understand these characteristics and how they can be applied. As a step in this direction, this paper presents a design space that identifies different aspects of using EMS for haptic feedback. The design space is meant as a basis for future research investigating how particular characteristics can be exploited to provide specific haptic feedback.
Casual Interaction: Scaling Fidelity for Low-Engagement Interactions Henning Pohl, Michael Rohs, Roderick Murray-Smith Workshop on Peripheral Interaction: Shaping the Research and Design Space at CHI 2014
Workshop Paper
     
When interacting casually, users relinquish some control over their interaction to gain the freedom to devote their engagement elsewhere. This allows them to still interact even when they are encumbered, distracted, or engaging with others. With their focus on something else, casual interaction will often take place in the periphery: either spatially, e.g., by interacting laterally, or with respect to attention, by interacting in the background.
Ergonomic Characteristics of Gestures for Front- and Back-of-tablets Interaction with Grasping Hands Katrin Wolf, Robert Schleicher, Michael Rohs Proceedings of the 16th International Conference on Human-computer Interaction with Mobile Devices - MobileHCI '14
Poster
     
The thumb and the fingers have different flexibility, and thus gestures performed on the back of a held tablet are expected to differ from ones performed on the touchscreen with the thumb of the grasping hands. APIs for back-of-device gesture detection should consider that difference. In a user study, we recorded vectors for the four most common touch gestures. We found that drag, swipe, and press gestures are significantly different when executed on the back versus the front side of a held tablet. Corresponding values are provided that may be used to define gesture detection thresholds for back-of-tablet interaction.
Imaginary Reality Basketball: A Ball Game Without a Ball Patrick Baudisch, Henning Pohl, Stefanie Reinicke, Emilia Wittmers, Patrick LĂŒhne, Marius Knaust, Sven Köhler, Patrick Schmidt, Christian Holz CHI '14 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '14
Demo
     
We present imaginary reality basketball, a ball game that mimics its real-world counterpart, basketball, except that there is no visible ball. The ball is virtual and players learn about its position only from watching each other act and from a small amount of occasional auditory feedback, e.g., when a person is receiving the ball. Imaginary reality games maintain many of the properties of physical sports, such as unencumbered play, physical exertion, and immediate social interaction between players. At the same time, they allow introducing game elements from video games, such as power-ups, non-realistic physics, and player balancing. Most importantly, they create a new game dynamic around the notion of the invisible ball.
Brave New Interactions: Performance-Enhancing Drugs for Human-Computer Interaction Henning Pohl CHI '14 Extended Abstracts on Human Factors in Computing Systems on - CHI EA '14
Other
        
In the area of sports, athletes often resort to performance enhancing drugs to gain an advantage. Similarly, people use pharmaceutical drugs to aid learning, dexterity, or concentration. We investigate how pharmaceutical drugs could be used to enhance interactions. We envision that in the future, people might take pills along with their vitamins in the morning to improve how they can interact over the day. In addition to performance improvements this, e.g., could also include improvements in enjoyment or fatigue.
2013
Tickle: A surface-independent interaction technique for grasp interfaces Katrin Wolf, Robert Schleicher, Sven Kratz, Michael Rohs Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction
Full Paper
        
We present a wearable interface that consists of motion sensors. As the interface can be worn on the user's fingers (as a ring) or fixed to them (with nail polish), the device controlled by finger gestures can be any generic object, provided it has an interface for receiving the sensor's signal. We implemented four gestures: tap, release, swipe, and pitch, all of which can be executed with a finger of the hand holding the device. In a user study we tested gesture appropriateness for the index finger at the back of a handheld tablet that offered three different form factors on its rear: flat, convex, and concave (undercut). For all three shapes, gesture performance was equally good; however, pitch performed better on all surfaces than swipe. The proposed interface is a step towards the idea of ubiquitous computing and the vision of seamless interaction with grasped objects. As an initial application scenario we implemented a camera control that allows the brightness to be configured using our tested gestures on a common SLR device.
Combining acceleration and gyroscope data for motion gesture recognition using classifiers with dimensionality constraints Sven Kratz, Michael Rohs, Georg Essl Proceedings of the 2013 international conference on Intelligent user interfaces
Full Paper
        
Motivated by the addition of gyroscopes to a large number of new smart phones, we study the effects of combining accelerometer and gyroscope data on the recognition rate of motion gesture recognizers with dimensionality constraints. Using a large data set of motion gestures we analyze results for the following algorithms: Protractor3D, Dynamic Time Warping (DTW) and Regularized Logistic Regression (LR). We chose to study these algorithms because they are relatively easy to implement, thus well suited for rapid prototyping or early deployment during prototyping stages. For use in our analysis, we contribute a method to extend Protractor3D to work with the 6D data obtained by combining accelerometer and gyroscope data. Our results show that combining accelerometer and gyroscope data is beneficial also for algorithms with dimensionality constraints and improves the gesture recognition rate on our data set by up to 4%.
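As a minimal, hypothetical sketch of the sensor-fusion step (linear resampling and a Euclidean nearest-neighbor stand in for the paper's recognizers such as DTW; all names here are illustrative):

```python
import numpy as np

def resample(trace, n=32):
    """Linearly resample a (len, k) sensor trace to n samples per axis."""
    trace = np.asarray(trace, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(trace))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, trace[:, i])
                            for i in range(trace.shape[1])])

def fuse_6d(accel, gyro, n=32):
    """Combine (len, 3) accelerometer and gyroscope traces into one
    time-aligned (n, 6) trace, the '6D data' the abstract refers to."""
    return np.hstack([resample(accel, n), resample(gyro, n)])

def nearest_template(sample, templates):
    """1-NN over fused 6D traces; `templates` maps label -> (n, 6) array."""
    return min(templates, key=lambda label: np.linalg.norm(sample - templates[label]))
```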
Imaginary Reality Gaming: Ball Games Without a Ball Patrick Baudisch, Henning Pohl, Stefanie Reinicke, Emilia Wittmers, Patrick LĂŒhne, Marius Knaust, Sven Köhler, Patrick Schmidt, Christian Holz Proceedings of the 26th annual ACM Symposium on User Interface Software and Technology - UIST '13
Full Paper
        
We present imaginary reality games, i.e., games that mimic the respective real world sport, such as basketball or soccer, except that there is no visible ball. The ball is virtual and players learn about its position only from watching each other act and a small amount of occasional auditory feedback, e.g., when a person is receiving the ball. Imaginary reality games maintain many of the properties of physical sports, such as unencumbered play, physical exertion, and immediate social interaction between players. At the same time, they allow introducing game elements from video games, such as power-ups, non-realistic physics, and player balancing. Most importantly, they create a new game dynamic around the notion of the invisible ball. To allow players to successfully interact with the invisible ball, we have created a physics engine that evaluates all plausible ball trajectories in parallel, allowing the game engine to select the trajectory that leads to the most enjoyable game play while still favoring skillful play.
Focused and Casual Interactions: Allowing Users to Vary Their Level of Engagement Henning Pohl, Roderick Murray-Smith Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '13
Full Paper
        
We describe the focused-casual continuum, a framework for describing interaction techniques according to the degree to which they allow users to adapt how much attention and effort they choose to invest in an interaction, conditioned on their current situation. Casual interactions are particularly appropriate in scenarios where full engagement with devices is frowned upon socially, unsafe, physically challenging, or too mentally taxing. Novel sensing approaches which go beyond direct touch enable wider use of casual interactions, which will often be ‘around device’ interactions. We consider the degree to which previous commercial products and research prototypes can be considered as fitting the focused-casual framework, and describe the properties using control-theoretic concepts. In an experimental study we observe that users naturally apply more precise and more highly engaged interaction techniques when faced with a more challenging task and use more relaxed gestures in easier tasks.
Designing Systems with Homo Ludens in the Loop Markus Krause Handbook of Human Computation
Book Chapter
  
Mobile Game User Research: The World as Your Lab? Jan Smeddinck, Markus Krause GUR'13 Proceedings of the CHI Game User Experience Research Workshop
Workshop Paper
  
A Digital Game to Support Voice Treatment for Parkinson's Disease Markus Krause, Jan Smeddinck, Ronald Meyer CHI '13 Extended Abstracts on Human Factors in Computing Systems
Poster
  
It is about Time: Time-Aware Quality Management for Interactive Systems with Humans in the Loop Markus Krause, Robert Porzel CHI '13 Extended Abstracts on Human Factors in Computing Systems
Poster
  
Supporting interaction in public space with electrical muscle stimulation Max Pfeiffer, Stefan Schneegass, Florian Alt Proceedings of the 2013 ACM conference on Pervasive and ubiquitous computing adjunct publication
Demo
     
2012
PalmSpace: Continuous Around-device Gestures vs. Multitouch for 3D Rotation Tasks on Mobile Devices Sven Kratz, Michael Rohs, Dennis Guse, Jörg MĂŒller, Gilles Bailly, Michael Nischt Proceedings of the International Working Conference on Advanced Visual Interfaces
Full Paper
        
Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface which we refer to as the PalmSpace for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. Pilot tests show that PalmSpace hand gestures are feasible. We conducted a user study to compare 3D rotation tasks using the most promising two designs for the hand location during interaction - behind and beside the device - with the virtual trackball, which is the current state-of-the-art technique for orientation manipulation on touchscreens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball.
ShoeSense: A New Perspective on Gestural Interaction and Wearable Applications Gilles Bailly, Jörg MĂŒller, Michael Rohs, Daniel Wigdor, Sven Kratz Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Full Paper
        
When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept, wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate of our recognizers.
Design and Evaluation of Parametrizable Multi-Genre Game Mechanics Daniel Apken, Hendrik Landwehr, Marc Herrlich, Markus Krause, Dennis Paul, Rainer Malaka ICEC'12 Proceedings of the 11th International Conference on Entertainment Computing
Full Paper
  
Human Computation – A new Aspect of Serious Games Markus Krause, Jan Smeddinck Handbook of Research on Serious Games as Educational, Business and Research Tools: Development and Design
Book Chapter
  
Sketch-a-TUI: Low Cost Prototyping of Tangible Interactions Using Cardboard and Conductive Ink Alexander Wiethoff, Hanna Schneider, Michael Rohs, Andreas Butz, Saul Greenberg Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction
Short Paper
        
Graspable tangibles are now being explored on the current generation of capacitive touch surfaces, such as the iPad and the Android tablet. Because the size and form factor are relatively new, early and low-fidelity prototyping of these TUIs is crucial in getting the right design. The problem is that it is difficult for the average interaction designer to develop such physical prototypes. They require a substantial amount of time and effort to physically model the tangibles, and expertise in electronics to instrument them. Thus, prototyping is sometimes handed off to specialists, or is limited to only a few design iterations and alternative designs. Our solution contributes a low-fidelity prototyping approach that is time- and cost-effective, and that requires no electronics knowledge. First, we supply non-specialists with cardboard forms to create tangibles. Second, we have them draw lines on them via conductive ink, which makes their objects recognizable by the capacitive touch screen. They can then apply routine programming to recognize these tangibles and thus iterate over various designs.
Attjector: an Attention-Following Wearable Projector Sven Kratz, Michael Rohs, Felix Reitberger, Jörg Moldenhauer Kinect Workshop at Pervasive 2012
Workshop Paper
     
Mobile handheld projectors in small form factors, e.g., integrated into mobile phones, are getting more common. However, managing the projection puts a burden on the user as it requires holding the hand steady over an extended period of time and draws attention away from the actual task to solve. To address this problem, we propose a body-worn projector that follows the user's locus of attention. The idea is to take the user's hand and dominant fingers as an indication of the current locus of attention and focus the projection on that area. Technically, a wearable and steerable camera-projector system positioned above the shoulder tracks the fingers and follows their movement. In this paper, we justify our approach and explore further ideas on how to apply steerable projection for wearable interfaces. Additionally, we describe a Kinect-based prototype of the wearable and steerable projector system we developed.
Quantum Games: Ball Games Without a Ball Henning Pohl, Christian Holz, Stefanie Reinicke, Emilia Wittmers, Marvin Killing, Konstantin Kaefer, Max Plauth, Tobias Mohr, Stephanie Platz, Philipp Tessenow, Patrick Baudisch Workshop on Kinect in Pervasive Computing at Pervasive 2012
Workshop Paper
     
We present Quantum games, physical games that resemble corresponding real-world sports, except that the ball exists only in the players' imagination. We demonstrate Quantum versions of team handball and air hockey. A computer system keeps score by tracking players using a Microsoft Kinect (air hockey) or a webcam (handball), simulates the physics of the ball, and reports ball interactions and scores back using auditory feedback. The key element that makes Quantum games playable is a novel type of physics engine that evaluates not just one trajectory, but samples the set of all plausible ball trajectories in parallel. Before choosing a trajectory to realize, the engine massively increases the probability of outcomes that lead to enjoyable gameplay, such as goal shots, but also successful passes and intercepts that lead to fluid gameflow. The same mechanism allows giving a boost to inexperienced players and implementing power-ups.
Predicting Crowd-based Translation Quality with Language-independent Feature Vectors Niklas Kilian, Markus Krause, Nina Runge, Jan Smeddinck HComp'12 Proceedings of the AAAI Workshop on Human Computation
Workshop Paper
  
Playful Surveys: Easing Challenges of Human Subject Research with Online Crowds Markus Krause, Jan Smeddinck, Aneta Takhtamysheva, Velislav Markov, Nina Runge HComp'12 Proceedings of the AAAI Workshop on Human Computation
Workshop Paper
  
Did They Really Like the Game? – Challenges in Evaluating Exergames with Older Adults Jan Smeddinck, Marc Herrlich, Markus Krause, Kathrin M. Gerling, Rainer Malaka GUR'12 Proceedings of the CHI Game User Experience Research Workshop
Workshop Paper
  
Exploring User Input Metaphors for Jump and Run Games on Mobile Devices Kolja Lubitz, Markus Krause ICEC'12 Proceedings of the 11th International Conference on Entertainment Computing
Poster
  
GCI 2012: Harnessing Collective Intelligence with Games – 1st International Workshop on Systems with Homo Ludens in the Loop Markus Krause, Roberta Cuel, Maja Vukovic ICEC'12 Proceedings of the 11th International Conference on Entertainment Computing
Other
  
2011
WorldCupinion: Experiences with an Android App for Real-Time Opinion Sharing During Soccer World Cup Games Robert Schleicher, Alireza Sahami Shirazi, Michael Rohs, Sven Kratz, Albrecht Schmidt Int. J. Mob. Hum. Comput. Interact.
Journal Article
     
Mobile devices are increasingly used in social networking applications and research. So far, there is little work on real-time emotion or opinion sharing in large loosely coupled user communities. One potential area of application is the assessment of widely broadcast TV shows. The idea of connecting non-collocated TV viewers via telecommunication technologies is referred to as Social TV. Such systems typically include set-top boxes for supporting the collaboration. In this work the authors investigated whether mobile phones can be used as an additional channel for sharing opinions, emotional responses, and TV-related experiences in real-time. To gain insight into this area, an Android app was developed for giving real-time feedback during soccer games and for creating ad hoc fan groups. This paper presents results on rating activity during games and discusses experiences with deploying this app over four weeks during the soccer World Cup. In doing so, challenges and opportunities are highlighted and an outlook on future work in this area is given.
Advancing Large Interactive Surfaces for Use in the Real World Jens Teichert, Marc Herrlich, Benjamin Walther-Franks, Lasse Schwarten, Sebastian Feige, Markus Krause, Rainer Malaka Formamente
Journal Article
  
A Taxonomy of Microinteractions: Defining Microgestures Based on Ergonomic and Scenario-dependent Requirements Katrin Wolf, Anja Naumann, Michael Rohs, Jörg MĂŒller Proceedings of the 13th IFIP TC 13 International Conference on Human-computer Interaction - Volume Part I
Full Paper
        
This paper explores how microinteractions such as hand gestures allow executing a secondary task, e.g. controlling mobile applications and devices, without interrupting manual primary tasks, for instance driving a car. To iteratively design such microgestures, we interviewed sports and physiotherapy experts, asking them to use props during the interviews. The required gestures should be easily performable without interrupting the primary task, without needing high cognitive effort, and without the risk of being mixed up with natural movements. Based on the expert interviews we developed a taxonomy that classifies these gestures according to their use cases and assesses their ergonomic and cognitive attributes, focusing on their primary task compatibility. We defined 21 hand gestures, which allow microinteractions within manual dual-task scenarios. In expert interviews we evaluated their level of required motor or cognitive resources under the constraint of stable primary task performance. Our taxonomy poses a basis for designing microinteraction techniques.
Towards real-time monitoring and controlling of enterprise architectures using business software control centers Tobias BrĂŒckmann, Volker Gruhn, Max Pfeiffer Proceedings of the 5th European conference on Software architecture
Full Paper
  
Gestural interaction on the steering wheel: reducing the visual demand Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, Albrecht Schmidt Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Full Paper
     
Touch Input on Curved Surfaces Anne Roudaut, Henning Pohl, Patrick Baudisch Proceedings of the 2011 annual conference on Human factors in computing systems - CHI '11
Full Paper
        
Advances in sensing technology are currently bringing touch input to non-planar surfaces, ranging from spherical touch screens to prototypes the size and shape of a ping-pong ball. To help interface designers create usable interfaces on such devices, we determine how touch surface curvature affects targeting. We present a user study in which participants acquired targets on surfaces of different curvature and at locations of different slope. We find that surface convexity increases pointing accuracy, and in particular reduces the offset between the input point perceived by users and the input point sensed by the device. Concave surfaces, in contrast, are subject to larger error offsets. This is likely caused by how concave surfaces hug the user's finger, thus resulting in a larger contact area. The effect of slope on targeting, in contrast, is unexpected at first sight. Some targets located downhill from the user's perspective are subject to error offsets in the opposite direction from all others. This appears to be caused by participants acquiring these targets using a different finger posture that lets them monitor the position of their fingers more effectively.
WuppDi! – Supporting Physiotherapy of Parkinson's Disease Patients via Motion-based Gaming Oliver Assad, Robert Hermann, Damian Lilla, Björn Mellies, Ronald Meyer, Liron Shevach, Sandra Siegel, Melanie Springer, Saranat Tiemkeo, Jens Voges, Jan Wieferich, Marc Herrlich, Markus Krause, Rainer Malaka Mensch & Computer
Full Paper
  
Serious Questionnaires in Playful Social Network Applications Aneta Takhtamysheva, Markus Krause, Jan Smeddinck ICEC'11 Proceedings of the 10th International Conference on Entertainment Computing
Full Paper
  
Motion-Based Games for Parkinson's Disease Patients Oliver Assad, Robert Hermann, Damian Lilla, Björn Mellies, Ronald Meyer, Liron Shevach, Sandra Siegel, Melanie Springer, Saranat Tiemkeo, Jens Voges, Jan Wieferich, Marc Herrlich, Markus Krause, Rainer Malaka ICEC'11 Proceedings of the 10th International Conference on Entertainment Computing
Full Paper
  
Interaction with Magic Lenses: Real-world Validation of a Fitts' Law Model Michael Rohs, Antti Oulasvirta, Tiia Suomalainen Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Short Paper
        
Rohs and Oulasvirta (2008) proposed a two-component Fitts' law model for target acquisition with magic lenses in mobile augmented reality (AR) with 1) a physical pointing phase, in which the target can be directly observed on the background surface, and 2) a virtual pointing phase, in which the target can only be observed through the device display. The model provides a good fit (R2=0.88) with laboratory data, but it is not known if it generalizes to real-world AR tasks. In the present outdoor study, subjects (N=12) did building-selection tasks in an urban area. The differences in task characteristics to the laboratory study are drastic: targets are three-dimensional and they vary in shape, size, z-distance, and visual context. Nevertheless, the model yielded an R2 of 0.80, and when using effective target width an R2 of 0.88 was achieved.
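A hedged sketch of the model's general shape (the notation below is assumed for illustration, not copied from the paper): movement time is the sum of a Fitts-type term for each phase,

```latex
% Two-component magic-lens model, general shape (symbols assumed):
% one index-of-difficulty term per phase, with fitted constants a, b, c.
MT = a + b \log_2\!\left(\frac{D_p}{W_p} + 1\right)
       + c \log_2\!\left(\frac{D_v}{W_v} + 1\right)
```

where D_p, W_p denote distance and effective target width in the physical pointing phase, D_v, W_v those of the virtual (through-display) phase, and the R2 values above report how well such a fit explains the observed movement times.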
Real-time Nonverbal Opinion Sharing Through Mobile Phones During Sports Events Alireza Sahami Shirazi, Michael Rohs, Robert Schleicher, Sven Kratz, Alexander MĂŒller, Albrecht Schmidt Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Short Paper
        
Even with the rise of the World Wide Web, TV has remained the most pervasive entertainment medium and is nowadays often used together with other media, which allow for active participation. The idea of connecting non-collocated TV viewers via telecommunication technologies, referred to as Social TV, has recently received considerable attention. Such systems typically include set-top boxes for supporting collaboration. In this research we investigate whether real-time opinion sharing about TV shows through a nonverbal (non-textual) iconic UI on mobile phones is reasonable. For this purpose we developed a mobile app, made it available to a large number of users through the Android Market, and conducted an uncontrolled user study in the wild during the soccer World Cup 2010. The results of the study indicate that TV viewers who used the app had more fun and felt more connected to other viewers. We also show that by monitoring this channel it is possible to collect sentiments relevant to the broadcast content in real-time. The collected data exemplify that the aggregated sentiments correspond to important moments, and hence can be used to generate a summary of the event.
Protractor3D: A Closed-form Solution to Rotation-invariant 3D Gestures Sven Kratz, Michael Rohs Proceedings of the 16th International Conference on Intelligent User Interfaces
Short Paper
        
Protractor3D is a gesture recognizer that extends the 2D touch screen gesture recognizer Protractor to 3D gestures. It inherits many of Protractor's desirable properties, such as high recognition rate, low computational and low memory requirements, ease of implementation, ease of customization, and low number of required training samples. Protractor3D is based on a closed-form solution to finding the optimal rotation angle between two gesture traces involving quaternions. It uses a nearest neighbor approach to classify input gestures. It is thus well-suited for application in resource-constrained mobile devices. We present the design of the algorithm and a study that evaluated its performance.
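The rotation-alignment step can be illustrated with Horn's classic closed-form quaternion solution, which the hedged numpy sketch below implements (the resampling, scaling, and template handling that Protractor3D additionally performs are omitted):

```python
import numpy as np

def optimal_quaternion(p, q):
    """Horn's closed-form solution: the unit quaternion (w, x, y, z) rotating
    point set p onto q (both (n, 3), centroid-centered) with least squared error."""
    S = p.T @ q  # 3x3 correlation matrix between the two traces
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    return eigvecs[:, -1]  # eigenvector of the largest eigenvalue

def rotate(points, quat):
    """Apply the rotation encoded by a unit quaternion to (n, 3) points."""
    w, x, y, z = quat
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return points @ R.T

def classify(sample, templates):
    """Nearest neighbor over the residual distance after optimal rotation."""
    def dist(label):
        quat = optimal_quaternion(sample, templates[label])
        return np.linalg.norm(rotate(sample, quat) - templates[label])
    return min(templates, key=dist)
```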
Human Computation Games: a Survey Markus Krause, Jan Smeddinck EUSIPCO'11 Proceedings of the 19th European Signal Processing Conference
Short Paper
  
Teaching Serious Games Marc Herrlich, Markus Krause, Rainer Malaka, Jan Smeddinck Mensch & Computer Workshop on „Game Development in der Hochschulinformatik“
Workshop Paper
  
Dynamic ambient lighting for mobile devices Qian Qin, Michael Rohs, Sven Kratz Adjunct proceedings of the 24th annual ACM symposium on User interface software and technology
Poster
        
The information a small mobile device can show via its display has always been limited by its size. In large information spaces, relevant information, such as important locations on a map, can get clipped when a user starts zooming and panning. Dynamic ambient lighting allows mobile devices to visualize off-screen objects by illuminating the background without compromising valuable display space. The lighted spots can be used to show the direction and distance of such objects by varying the spot's position and intensity. Dynamic ambient lighting also provides a new way of displaying the state of a mobile device. Illumination is provided by a prototype rear-of-device shell which contains LEDs and requires the device to be placed on a surface, such as a table or desk.
CapWidgets: Tangible Widgets Versus Multi-touch Controls on Mobile Devices Sven Kratz, Tilo Westermann, Michael Rohs, Georg Essl CHI '11 Extended Abstracts on Human Factors in Computing Systems
Poster
        
We present CapWidgets, passive tangible controls for capacitive touch screens. CapWidgets bring back physical controls to off-the-shelf multi-touch surfaces as found in mobile phones and tablet computers. While the user touches the widget, the surface detects the capacitive marker on the widget's underside. We compare the performance of this tangible interaction with direct multi-touch interaction. Our experimental results show that user performance and preferences are not automatically in favor of tangible widgets and that careful design is necessary to validate their properties.
Deploying an Experimental Study of the Emergence of Human Communication Systems as an Online Game Jan Smeddinck, Markus Krause IK'2011 Proceedings of the Interdisciplinary College
Poster
  
Motion-based Serious Games for Parkinson Patients Oliver Assad, Robert Hermann, Damian Lilla, Björn Mellies, Ronald Meyer, Liron Shevach, Sandra Siegel, Melanie Springer, Saranat Tiemkeo, Jens Voges, Jan Wieferich, Marc Herrlich, Markus Krause, Rainer Malaka IK'2011 Proceedings of the Interdisciplinary College
Poster
  
WuppDi! – Motion-Based Serious Games for Parkinson's Patients Oliver Assad, Robert Hermann, Damian Lilla, Björn Mellies, Ronald Meyer, Liron Shevach, Sandra Siegel, Melanie Springer, Saranat Tiemkeo, Jens Voges, Jan Wieferich, Marc Herrlich, Markus Krause, Rainer Malaka IK'2011 Proceedings of the Interdisciplinary College
Poster
  
2010
Advancing Large Interactive Surfaces for Use in the Real World Jens Teichert, Marc Herrlich, Benjamin Walther-Franks, Lasse Schwarten, Sebastian Feige, Markus Krause, Rainer Malaka Advances in Human-Computer Interaction
Journal Article
     
User-defined gestures for connecting mobile phones, public displays, and tabletops Christian Kray, Daniel Nesbitt, John Dawson, Michael Rohs Proceedings of the 12th international conference on Human computer interaction with mobile devices and services
Full Paper
        
Gestures can offer an intuitive way to interact with a computer. In this paper, we investigate whether gesturing with a mobile phone can help to perform complex tasks involving two devices. We present results from a user study in which we asked participants to spontaneously produce gestures with their phone to trigger a set of different activities. We investigated three conditions (device configurations): phone-to-phone, phone-to-tabletop, and phone-to-public-display. We report on the kinds of gestures we observed as well as on feedback from the participants, and provide an initial assessment of which sensors might facilitate gesture recognition in a phone. The results suggest that phone gestures have the potential to be easily understood by end users and that certain device configurations and activities may be well suited for gesture control.
Semi-automatic zooming for mobile map navigation Sven Kratz, Ivo Brodien, Michael Rohs Proceedings of the 12th international conference on Human computer interaction with mobile devices and services
Full Paper
        
In this paper we present a novel interface for mobile map navigation based on Semi-Automatic Zooming (SAZ). SAZ gives the user the ability to manually control the zoom level of a Speed-Dependent Automatic Zooming (SDAZ) interface, while retaining the automatic zooming characteristics of that interface whenever the user is not explicitly controlling the zoom level. In a user study conducted using a realistic mobile map with a wide scale space, we compare SAZ with two existing map interface techniques, multi-touch and SDAZ. We extend a dynamic state-space model for SDAZ to accept 2D tilt input for scroll rate and zoom level control and implement a dynamically zoomable map view with access to high-resolution map material for use in our study. The study reveals that SAZ performs significantly better than SDAZ and that SAZ is comparable in performance and usability to a standard multi-touch map interface. Furthermore, the study shows that SAZ could serve as an alternative to multi-touch as an input technique for mobile map interfaces.
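A minimal sketch of the SAZ control logic, under assumed constants and a simplified speed-to-zoom mapping rather than the paper's calibrated state-space model: manual zoom input takes precedence, and otherwise the zoom level falls back to SDAZ-style automatic behavior.

```python
# Hypothetical sketch of semi-automatic zooming: when the user supplies an
# explicit zoom input, it wins; otherwise the target zoom follows scroll
# speed (faster scrolling zooms further out), and the current zoom is
# smoothed toward the target. All constants are assumptions.
def update_zoom(scroll_speed, manual_zoom_input, current_zoom,
                k=0.01, blend=0.1, min_zoom=1.0, max_zoom=18.0):
    if manual_zoom_input is not None:      # user is explicitly zooming (SAZ)
        target = manual_zoom_input
    else:                                  # automatic SDAZ-style fallback
        target = max_zoom - k * scroll_speed
    target = min(max(target, min_zoom), max_zoom)
    return current_zoom + blend * (target - current_zoom)  # smooth approach
```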
Characteristics of pressure-based input for mobile devices Craig Stewart, Michael Rohs, Sven Kratz, Georg Essl Proceedings of the 28th international conference on Human factors in computing systems
Full Paper
        
We conducted a series of user studies to understand and clarify the fundamental characteristics of pressure in user interfaces for mobile devices. We seek to provide insight into a longstanding discussion on mapping functions for pressure input. Previous literature is conflicted about the correct transfer function to optimize user performance. Our study results suggest that the discrepancy can be explained by differences in signal conditioning circuitry, and that with improved signal conditioning the pressure-precision relationship is linear. We also explore the effects of hand pose when applying pressure to a mobile device from the front, the back, or simultaneously from both sides in a pinching movement. Our results indicate that grasping-type input outperforms single-sided input and is competitive with pressure input against solid surfaces. Finally, we provide an initial exploration of non-visual multimodal feedback, motivated by the desire for eyes-free use of mobile devices. The findings suggest that non-visual pressure input can be executed without degradation in selection time but suffers from accuracy problems.
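To make the transfer-function question concrete, here is a hypothetical sketch of the two ingredients the study separates: signal conditioning of the raw sensor readings (a simple software smoother stands in for better analog circuitry) and the mapping from conditioned pressure to a control value; the two candidate mappings reflect the conflict in the prior literature, and none of this is the paper's own code.

```python
# Hypothetical sketch: conditioning plus two candidate transfer functions.
def condition(raw_samples, alpha=0.3):
    """Exponential smoothing as a stand-in for analog signal conditioning."""
    smoothed, s = [], raw_samples[0]
    for x in raw_samples:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

def linear_map(p, p_max=1023):
    """Linear transfer function: control value proportional to pressure."""
    return p / p_max

def quadratic_map(p, p_max=1023):
    """Quadratic transfer function, as some prior work advocates."""
    return (p / p_max) ** 2
```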
Dance Pattern Recognition using Dynamic Time Warping Henning Pohl, Aristotelis Hadjakos Proceedings of the 7th Sound and Music Computing Conference (SMC 2010)
Full Paper
     
In this paper we describe a method to detect patterns in dance movements. Such patterns can be used in the context of interactive dance systems to allow dancers to influence computational systems with their body movements. For the detection of motion patterns, dynamic time warping is used to compute the distance between two given movements. A custom threshold clustering algorithm is used for subsequent unsupervised classification of movements. For the evaluation of the presented method, a wearable sensor system was built. To quantify the accuracy of the classification, a custom label space mapping was designed to allow comparison of sequences with disparate label sets.
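To make the matching step concrete, here is a minimal sketch of the dynamic time warping distance used to compare two movement sequences; the frame format and the Euclidean frame distance are assumptions, and the paper's custom threshold clustering built on top of this distance is not shown.

```python
# Minimal DTW distance between two sequences of fixed-length feature
# vectors (e.g. per-sensor acceleration frames). The table d[i][j] holds
# the cheapest cumulative cost of aligning the first i frames of a with
# the first j frames of b.
import math

def dtw(a, b, dist=lambda u, v: math.dist(u, v)):
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```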
Use the Force (or something) - Pressure and Pressure-Like Input for Mobile Music Performance Georg Essl, Michael Rohs, Sven Kratz Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2010)
Short Paper
     
Impact force is an important dimension for percussive musical instruments such as the piano. We explore three possible mechanisms for capturing impact force on mobile multi-touch devices: using built-in accelerometers, the pressure sensing capability of Android phones, and external force sensing resistors. We find that accelerometers are difficult to control for this purpose. Android's pressure sensing shows some promise, especially when combined with an augmented playing technique. Force sensing resistors can offer good dynamic resolution, but this technology is not currently offered in commodity devices and proper coupling of the sensor with the applied impact is difficult.
Extending the Virtual Trackball Metaphor to Rear Touch Input Sven Kratz, Michael Rohs Proceedings of the 2010 IEEE Symposium on 3D User Interfaces (3DUI 2010)
Short Paper
        
Interaction with 3D objects and scenes is becoming increasingly important on mobile devices. We explore 3D object rotation as a fundamental interaction task. We propose an extension of the virtual trackball metaphor, which is typically restricted to a half sphere and single-sided interaction, to actually use a full sphere. The extension is enabled by a hardware setup called the "iPhone Sandwich," which allows for simultaneous front-and-back touch input. This setup makes the rear part of the virtual trackball accessible for direct interaction and thus realizes the virtual trackball metaphor to its full extent. We conducted a user study that shows that a back-of-device virtual trackball is as effective as a front-of-device virtual trackball and that both outperform an implementation of tilt-based input.
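A minimal sketch of the full-sphere trackball mapping (the equator clamp and radius handling are standard virtual-trackball conventions, not taken verbatim from the paper): front touches land on the front hemisphere and rear touches on the back hemisphere.

```python
# Hypothetical sketch: map a 2D touch point (relative to the trackball
# center) onto a sphere; rear-side touches get a negated z, which is what
# simultaneous front-and-back touch input makes reachable.
import math

def touch_to_sphere(x, y, radius, rear=False):
    """Return the 3D sphere point under a front or rear touch at (x, y)."""
    r2 = x * x + y * y
    if r2 > radius * radius:              # outside the ball: clamp to equator
        s = radius / math.sqrt(r2)
        x, y, z = x * s, y * s, 0.0
    else:
        z = math.sqrt(radius * radius - r2)
    if rear:
        z = -z                            # rear touches reach the back half
    return (x, y, z)
```

A rotation can then be derived from two successive sphere points, e.g. using their cross product as the axis and the angle between them as the magnitude.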
A $3 gesture recognizer: simple gesture recognition for devices equipped with 3D acceleration sensors Sven Kratz, Michael Rohs Proceedings of the 14th international conference on Intelligent user interfaces
Short Paper
        
We present the $3 Gesture Recognizer, a simple but robust gesture recognition system for input devices featuring 3D acceleration sensors. The algorithm is designed to be implemented quickly in prototyping environments, is intended to be device-independent and does not require any special toolkits or frameworks. It relies solely on simple trigonometric and geometric calculations. A user evaluation of our system resulted in a correct gesture recognition rate of 80%, when using a set of 10 unique gestures for classification. Our method requires significantly less training data than other gesture recognizers and is thus suited to be deployed and to deliver results rapidly.
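The pipeline can be sketched as follows; this is a hypothetical simplification in which uniform index-based resampling and a coarse brute-force rotation search stand in for the paper's resampling and angular alignment steps, and NumPy is assumed for brevity.

```python
# Hypothetical $3-style sketch: an acceleration trace, already converted
# into a 3D path (not shown), is resampled to a fixed number of points,
# translated to its centroid, scaled, and scored against a template by
# mean point-wise distance under a coarse search over rotations.
import numpy as np

N = 32  # assumed number of resampled points

def normalize(points):
    p = np.asarray(points, dtype=float)
    idx = np.linspace(0, len(p) - 1, N)           # crude uniform resampling
    p = np.array([p[int(i)] for i in idx])
    p -= p.mean(axis=0)                           # translate to centroid
    return p / (np.abs(p).max() or 1.0)           # scale to a unit box

def score(candidate, template, steps=36):
    c, t = normalize(candidate), normalize(template)
    best = np.inf
    for a in np.linspace(0, 2 * np.pi, steps, endpoint=False):
        rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                        [np.sin(a),  np.cos(a), 0.0],
                        [0.0,        0.0,       1.0]])  # rotate about one axis
        best = min(best, np.linalg.norm(c @ rot.T - t, axis=1).mean())
    return best  # lower is better; classify by the best-scoring template
```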
WorldCupinion: Experiences with an Android App for Real-Time Opinion Sharing during World Cup Soccer Games Michael Rohs, Sven Kratz, Robert Schleicher, Alireza Sahami, Albrecht Schmidt Research in the Large: Using App Stores, Markets and other wide distribution channels in UbiComp research. Workshop at Ubicomp 2010
Workshop Paper
     
Mobile devices are increasingly used in social networking applications. So far, there is little work on real-time emotion and opinion sharing in large, loosely coupled user communities. We present an Android app for giving real-time feedback during soccer games and for creating ad hoc fan groups. We discuss our experiences with deploying this app over four weeks during the 2010 soccer World Cup. We highlight the challenges and opportunities we faced and give recommendations for future work in this area.
A Tabletop System for supporting Paper Prototyping of Mobile Interfaces Benjamin Bähr, Michael Rohs, Sven Kratz PaperComp 2010: 1st International Workshop on Paper Computing. Workshop at Ubicomp 2010
Workshop Paper
     
We present a tabletop-based system that supports rapid paper-based prototyping for mobile applications. Our system combines the possibility of manually sketching interface screens on paper with the ability to define dynamic interface behavior through actions on the tabletop. This not only allows designers to digitize interface sketches for paper prototypes, but also enables the generation of prototype applications able to run on target devices. By making physical and virtual interface sketches interchangeable, our system greatly enhances and speeds up the development of mobile applications early in the interface design process.
Natural User Interfaces in Mobile Phone Interaction Sven Kratz, Fabian Hemmert, Michael Rohs Workshop on Natural User Interfaces at CHI 2010
Workshop Paper
     
User interfaces for mobile devices move away from mainly button- and menu-based interaction styles and towards more direct techniques, involving rich sensory input and output. The recently proposed concept of Natural User Interfaces (NUIs) provides a way to structure the discussion about these developments. We examine how two-sided and around-device interaction, gestural input, and shape- and weight-based output can be used to create NUIs for mobile devices. We discuss the applicability of NUI properties in the context of mobile interaction.
Frontiers of a Paradigm - Exploring Human Computation with Digital Games Markus Krause, Aneta Takhtamysheva, Marion Wittstock, Rainer Malaka HComp'10 Proceedings of the ACM SIGKDD Workshop on Human Computation
Workshop Paper
  
Webpardy: Harvesting QA by HC Hidir Aras, Markus Krause, Andreas Haller, Rainer Malaka HComp'10 Proceedings of the ACM SIGKDD Workshop on Human Computation
Workshop Paper
  
A multi-touch enabled steering wheel: exploring the design space Max Pfeiffer, Dagmar Kern, Johannes Schöning, Tanja Döring, Antonio Krüger, Albrecht Schmidt CHI '10 Extended Abstracts on Human Factors in Computing Systems
Poster
     
Human Computation in Action Markus Krause IK'2010 Proceedings of the Interdisciplinary College
Poster
  
2009
Bridging the gap between the Kodak and the Flickr generations: A novel interaction technique for collocated photo sharing Christian Kray, Michael Rohs, Jonathan Hook, Sven Kratz Int. J. Hum.-Comput. Stud.
Journal Article
     
Passing around stacks of paper photographs while sitting around a table is one of the key social practices defining what is commonly referred to as the ‘Kodak Generation’. Due to the way digital photographs are stored and handled, this practice does not translate well to the ‘Flickr Generation’, where collocated photo sharing often involves the (wireless) transmission of a photo from one mobile device to another. In order to facilitate ‘cross-generation’ sharing without enforcing either practice, it is desirable to bridge this gap in a way that incorporates familiar aspects of both. In this paper, we discuss a novel interaction technique that addresses some of the constraints introduced by current communication technology and that enables photo sharing in a way that resembles the passing of stacks of paper photographs. This technique is based on dynamically generated spatial regions around mobile devices and has been evaluated through two user studies. The results we obtained indicate that our technique is easy to learn and as fast as, or faster than, current technology such as transmitting photos between devices using Bluetooth. In addition, we found evidence of different sharing techniques influencing social practice around photo sharing. The use of our technique resulted in more inclusive and group-oriented behavior, in contrast to Bluetooth photo sharing, which resulted in a more fractured setting composed of sub-groups.
Impact of item density on the utility of visual context in magic lens interactions Michael Rohs, Robert Schleicher, Johannes Schöning, Georg Essl, Anja Naumann, Antonio Krüger Personal Ubiquitous Comput.
Journal Article
     
This article reports on two user studies investigating the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore items on a map and look for a specific attribute. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. Hand motion patterns and eye movements were recorded. We found that visual context is most effective for sparsely distributed items and gets less helpful with increasing item density. User performance in the magic lens case is generally better than in the dynamic peephole case, but approaches the performance of the latter the more densely the items are spaced. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces, involving spatially tracked personal displays or combined personal and public displays, by suggesting when to use visual context.
Interactivity for Mobile Music-Making Georg Essl, Michael Rohs Organised Sound
Journal Article
     
Mobile phones offer an attractive platform for interactive music performance. We provide a theoretical analysis of the sensor capabilities via a design space and show concrete examples of how different sensors can facilitate interactive performance on these devices. These sensors include cameras, microphones, accelerometers, magnetometers, and multi-touch screens. The interactivity through sensors in turn informs aspects of live performance as well as composition, through persistence, scoring, and mapping to musical notes or abstract sounds.
PhotoMap: Using Spontaneously Taken Images of Public Maps for Pedestrian Navigation Tasks on Mobile Devices Johannes Schöning, Antonio Krüger, Keith Cheverst, Michael Rohs, Markus Löchtefeld, Faisal Taher Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
Full Paper
        
In many mid- to large-sized cities public maps are ubiquitous. One can also find a great number of maps in parks or near hiking trails. Public maps help to facilitate orientation and provide special information not only to tourists but also to locals who just want to look up an unfamiliar place while on the go. These maps offer many advantages compared to mobile maps from services like Google Maps Mobile or Nokia Maps. They often show local landmarks and sights that are not shown on standard digital maps. Often these 'You are here' (YAH) maps are adapted to a special use case, e.g. a zoo map or a hiking map of a certain area. Being designed for a specific purpose, these maps are often aesthetically well designed and therefore more pleasant to use. In this paper we present a novel technique and application called PhotoMap that uses images of 'You are here' maps taken with a GPS-enhanced mobile camera phone as background maps for on-the-fly navigation tasks. We discuss different implementations of the main challenge, namely helping the user to properly georeference the taken image with sufficient accuracy to support pedestrian navigation tasks. We present a study that discusses the suitability of various public maps for this task and we evaluate whether these georeferenced photos can be used for navigation on GPS-enabled devices.
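One simple way to think about the georeferencing challenge (a hypothetical sketch, not one of the paper's implementations): given two reference points whose GPS coordinates and photo-pixel positions are both known, a similarity transform maps further GPS fixes onto the photo. Treating coordinates as complex numbers keeps the math compact; this assumes a locally flat, consistently scaled map, which real public maps may violate.

```python
# Hypothetical sketch: derive a similarity transform (scale, rotation,
# translation) from two (pixel, gps) correspondences. Points are complex
# numbers: pixel = x + y*1j, gps = lon + lat*1j. The two fixes must differ.
def similarity_transform(px1, gps1, px2, gps2):
    """Return a function mapping a GPS fix to a pixel on the photo."""
    scale_rot = (px2 - px1) / (gps2 - gps1)   # one complex factor = scale + rotation
    return lambda gps: px1 + scale_rot * (gps - gps1)

# Example: calibrate with two fixes, then place the current position.
to_pixel = similarity_transform(complex(120, 340), complex(8.10, 53.20),
                                complex(610, 95),  complex(8.14, 53.23))
pos = to_pixel(complex(8.12, 53.21))  # pixel position as a complex number
```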
HoverFlow: Expanding the Design Space of Around-Device Interaction Sven Kratz, Michael Rohs Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
Full Paper
        
In this paper we explore the design space of around-device interaction (ADI). This approach seeks to expand the interaction possibilities of mobile and wearable devices beyond the confines of the physical device itself to include the space around it. This enables rich 3D input, comprising coarse movement-based gestures, as well as static position-based gestures. ADI can help to solve occlusion problems and scales down to very small devices. We present a novel around-device interaction interface that allows mobile devices to track coarse hand gestures performed above the device's screen. Our prototype uses infrared proximity sensors to track hand and finger positions in the device's proximity. We present an algorithm for detecting hand gestures and provide a rough overview of the design space of ADI-based interfaces.
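As an illustration of how coarse gestures can be read from such sensors, here is a hypothetical sketch of a swipe detector over a left-to-right row of proximity sensors; the threshold, layout, and monotonicity test are assumptions rather than the paper's algorithm.

```python
# Hypothetical sketch: a hand sweeping across the device makes the
# proximity sensors peak in spatial order, so the ordering of per-sensor
# peak times indicates the swipe direction.
def detect_swipe(sensor_traces, activation=0.5):
    """sensor_traces: list (left-to-right) of per-sensor sample lists."""
    peaks = []
    for idx, trace in enumerate(sensor_traces):
        t = max(range(len(trace)), key=lambda i: trace[i])
        if trace[t] >= activation:            # this sensor actually saw the hand
            peaks.append((idx, t))
    if len(peaks) < 2:
        return None
    times = [t for _, t in peaks]
    if all(t2 > t1 for t1, t2 in zip(times, times[1:])):
        return "right"                        # peaks march left-to-right
    if all(t2 < t1 for t1, t2 in zip(times, times[1:])):
        return "left"                         # peaks march right-to-left
    return None
```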
Improving the Communication of Spatial Information in Crisis Response by Combining Paper Maps and Mobile Devices Johannes Schöning, Michael Rohs, Antonio Krüger, Christoph Stasch Mobile Response
Full Paper
        
Efficient and effective communication between mobile units and the central emergency operation center is a key factor to respond successfully to the challenges of emergency management. Nowadays, the only ubiquitously available modality is a voice channel through mobile phones or radio transceivers. This makes it often very difficult to convey exact geographic locations and can lead to misconceptions with severe consequences, such as a fire brigade heading to the right street address in the wrong city. In this paper we describe a handheld augmented reality approach to support the communication of spatial information in a crisis response scenario. The approach combines mobile camera devices with paper maps to ensure a quick and reliable exchange of spatial information.
Impact of Item Density on Magic Lens Interactions Michael Rohs, Georg Essl, Johannes Schöning, Anja Naumann, Robert Schleicher, Antonio Krüger Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services
Short Paper
        
We conducted a user study to investigate the effect of visual context in handheld augmented reality interfaces. A dynamic peephole interface (without visual context beyond the device display) was compared to a magic lens interface (with video see-through augmentation of external visual context). The task was to explore objects on a map and look for a specific attribute shown on the display. We tested different sizes of visual context as well as different numbers of items per area, i.e. different item densities. We found that visual context is most effective for sparse item distributions and the performance benefit decreases with increasing density. User performance in the magic lens case approaches the performance of the dynamic peephole case the more densely spaced the items are. In all conditions, subjective feedback indicates that participants generally prefer visual context over the lack thereof. The insights gained from this study are relevant for designers of mobile AR and dynamic peephole interfaces by suggesting when external visual context is most beneficial.
Using Hands and Feet to Navigate and Manipulate Spatial Data Johannes Schöning, Florian Daiber, Antonio Krüger, Michael Rohs Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
Short Paper
        
We demonstrate how multi-touch hand gestures in combination with foot gestures can be used to perform navigation tasks in interactive systems. The geospatial domain is an interesting example to show the advantages of combining both modalities, because the complex user interfaces of common Geographic Information Systems (GIS) require a high degree of expertise from their users. Recent developments in interactive surfaces that enable the construction of low-cost multi-touch displays, together with relatively cheap sensor technology to detect foot gestures, allow a deep exploration of these input modalities for GIS users with medium or low expertise. In this paper, we provide a categorization of multi-touch hand and foot gestures for the interaction with spatial data on a large-scale interactive wall. In addition, we show with an initial evaluation how these gestures can improve the overall interaction with spatial information.
Map Torchlight: A Mobile Augmented Reality Camera Projector Unit Johannes Schöning, Michael Rohs, Sven Kratz, Markus Löchtefeld, Antonio Krüger Proceedings of the 27th international conference extended abstracts on Human factors in computing systems
Short Paper
        
The advantages of paper-based maps have been utilized in the field of mobile Augmented Reality (AR) in the last few years. Traditional paper-based maps provide high-resolution, large-scale information with zero power consumption. There are numerous implementations of magic lens interfaces that combine high-resolution paper maps with dynamic handheld displays. From an HCI perspective, the main challenge of magic lens interfaces is that users have to switch their attention between the magic lens and the information in the background. In this paper, we attempt to overcome this problem by using a lightweight mobile camera projector unit to augment the paper map directly with additional information. The "Map Torchlight" is tracked over a paper map and can precisely highlight points of interest, streets, and areas to give directions or other guidance for interacting with the map.
LittleProjectedPlanet: An Augmented Reality Game for Camera Projector Phones Markus Löchtefeld, Johannes Schöning, Michael Rohs, Antonio Krüger Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2009), Bonn, Germany, September 15, 2009
Workshop Paper
     
With the miniaturization of projection technology, the integration of tiny projection units, normally referred to as pico projectors, into mobile devices is no longer fiction. Such integrated projectors could make mobile projection ubiquitous within the next few years. These phones will soon have the ability to project large-scale information onto any surface in the real world. By doing so, the interaction space of the mobile device can be expanded to physical objects in the environment, and this can support interaction concepts that are not even possible on modern desktop computers today. In this paper, we explore the possibilities of camera projector phones with a mobile adaptation of the PlayStation 3 game LittleBigPlanet. The camera projector unit is used to augment the hand drawings of a user with an overlay displaying the physical interaction of virtual objects with the real world. Players can sketch a 2D world on a sheet of paper or use an existing physical configuration of objects and let the physics engine simulate physical processes in this world to achieve game goals.
Unobtrusive Tabletops: Linking Personal Devices with Regular Tables Sven Kratz, Michael Rohs Workshop Multitouch and Surface Computing at CHI'09
Workshop Paper
     
In this paper we argue that for wide deployment, interactive surfaces should be embedded in real environments as unobtrusively as possible. Rather than deploying dedicated interactive furniture, in environments such as pubs, cafés, or homes it is often more acceptable to augment existing tables with interactive functionality. One example is the use of robust camera-projector systems in real-world settings in combination with spatially tracked touch-enabled personal devices. This retains the normal usage of tabletop surfaces, solves privacy issues, and allows for storage of media items on the personal devices. Moreover, user input can easily be tracked with high precision and low latency and can be attributed to individual users.
TaxiMedia: An Interactive Context-Aware Entertainment and Advertising System Florian Alt, Alireza Sahami Shirazi, Max Pfeiffer, Paul Holleis, Albrecht Schmidt 2nd Pervasive Advertising Workshop at Informatik 2009
Workshop Paper
  
Games for Games Aneta Takhtamysheva, Robert Porzel, Markus Krause HComp'09 Proceedings of the ACM SIGKDD Workshop on Human Computation
Workshop Paper
  
Playful tagging: folksonomy generation using online games Markus Krause, Hidir Aras WWW '09 Proceedings of the 18th international conference on World wide web
Poster
  
Squeezing the Sandwich: A Mobile Pressure-Sensitive Two-Sided Multi-Touch Prototype Georg Essl, Michael Rohs, Sven Kratz Demonstration at the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST), Victoria, BC, Canada
Demo
     
Two-sided pressure input is common in everyday interactions such as grabbing, sliding, twisting, and turning an object held between thumb and index finger. We describe and demonstrate a research prototype that allows for two-sided multi-touch sensing with continuous pressure input at interactive rates, and we explore early ideas of interaction techniques that become possible with this setup. The advantage of two-sided pressure interaction is that it enables high degree-of-freedom input locally. Hence, rather complex yet natural interactions can be designed using little finger motion and device space.
2008
Group Coordination and Negotiation through Spatial Proximity Regions around Mobile Devices on Augmented Tabletops Christian Kray, Michael Rohs, Jonathan Hook, Sven Kratz 3rd IEEE International Workshop on Horizontal Interactive Human Computer Systems (TABLETOP 2008)
Journal Article
     
Negotiation and coordination of activities involving a number of people can be a difficult and time-consuming process, even when all participants are collocated. We propose the use of spatial proximity regions around mobile devices on a table to significantly reduce the effort of proposing and exploring content within a group of collocated people. In order to determine the location of devices on ordinary tables, we developed a tracking mechanism for a camera-projector system that uses dynamic visual markers displayed on the screen of a device. We evaluated our spatial-proximity-region approach using a photo-sharing application for people seated around a table. The tabletop provides a frame of reference in which the spatial arrangement of devices signals the coordination state to the users. The results from the study indicate that the proposed approach facilitates coordination in several ways, for example, by allowing for simultaneous user activity and by reducing the effort required to achieve a common goal. Our approach reduced task completion time by 43% and was rated as superior in comparison to other established techniques.
Designing Low-Dimensional Interaction for Mobile Navigation in 3D Audio Spaces Till Schäfers, Michael Rohs, Sascha Spors, Alexander Raake, Jens Ahrens 34th International Conference of the Audio Engineering Society (AES 2008), Jeju Island, Korea, August 28-30, 2008
Full Paper
     
In this paper we explore spatial audio as a new design space for applications like teleconferencing and audio stream management on mobile devices. Especially in conjunction with input techniques using motion-tracking, the interaction has to be thoroughly designed in order to allow low-dimensional input devices like gyroscopic sensors to be used for controlling the rather complex spatial setting of the virtual audio space. We propose a new interaction scheme that allows the mapping of low-dimensional input data to navigation of a listener within the spatial setting.
Sensing-Based Interaction for Information Navigation on Handheld Displays Michael Rohs, Georg Essl Advances in Human-Computer Interaction, Volume 2008
Full Paper
        
Information navigation on handheld displays is characterized by the small display dimensions and limited input capabilities of today’s mobile devices. Special strategies are required to help users navigate to off-screen content and develop awareness of spatial layouts despite the small display. Yet, handheld devices offer interaction possibilities that desktop computers do not. Handheld devices can easily be moved in space and used as a movable window into a large virtual workspace. We investigate different information navigation methods for small-scale handheld displays using a range of sensor technologies for spatial tracking. We compare user performance in an abstract map navigation task and discuss the tradeoffs of the different sensor and visualization techniques.
Target Acquisition with Camera Phones when used as Magic Lenses Michael Rohs, Antti Oulasvirta Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Full Paper
        
When camera phones are used as magic lenses in handheld augmented reality applications involving wall maps or posters, pointing can be divided into two phases: (1) an initial coarse physical pointing phase, in which the target can be directly observed on the background surface, and (2) a fine-control virtual pointing phase, in which the target can only be observed through the device display. In two studies, we show that performance cannot be adequately modeled with standard Fitts' law, but can be adequately modeled with a two-component modification. We chart the performance space and analyze users' target acquisition strategies in varying conditions. Moreover, we show that the standard Fitts' law model does hold for dynamic peephole pointing where there is no guiding background surface and hence the physical pointing component of the extended model is not needed. Finally, implications for the design of magic lens interfaces are considered.
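To make the modeling claim concrete: standard Fitts' law predicts movement time MT from target distance D and target width W. A hedged sketch of a two-component form, splitting the movement at an intermediate width S (e.g. the size of the region visible through the display) into physical and virtual phases, could read as follows; the paper's exact parameterization may differ.

```latex
% Standard Fitts' law:
MT = a + b \log_2\left(\frac{D}{W} + 1\right)

% Two-component sketch: coarse physical pointing down to an intermediate
% width S, then display-mediated virtual pointing from S down to W:
MT = a + b_1 \log_2\left(\frac{D}{S} + 1\right) + b_2 \log_2\left(\frac{S}{W} + 1\right)
```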
Improving Interaction with Virtual Globes Through Spatial Thinking: Helping Users Ask "Why?" Johannes Schöning, Brent Hecht, Martin Raubal, Antonio Krüger, Meredith Marsh, Michael Rohs Proceedings of the 13th International Conference on Intelligent User Interfaces
Full Paper
        
Virtual globes have progressed from little-known technology to broadly popular software in a mere few years. We investigated this phenomenon through a survey and discovered that, while virtual globes are en vogue, their use is restricted to a small set of tasks so simple that they do not involve any spatial thinking. Spatial thinking requires that users ask "what is where" and "why"; the most common virtual globe tasks only include the "what". Based on the results of this survey, we have developed a multi-touch virtual globe derived from an adapted virtual globe paradigm designed to widen the potential uses of the technology by helping its users to inquire about both the "what is where" and "why" of spatial distribution. We do not seek to provide users with full GIS (geographic information system) functionality, but rather we aim to facilitate the asking and answering of simple "why" questions about general topics that appeal to a wide virtual globe user base.
The Design Space of Mobile Phone Input Techniques for Ubiquitous Computing Rafael Ballagas, Michael Rohs, Jennifer Sheridan, Jan Borchers In: Joanna Lumsden (Ed.): Handbook of Research on User Interface Design and Evaluation for Mobile Technologies. IGI Global, Hershey, PA, USA, 2008. ISBN: 978-1-59904-871-0
Book Chapter
  
The mobile phone is the first truly pervasive computer. In addition to its core communications functionality, it is increasingly used for interaction with the physical world. This chapter examines the design space of input techniques using established desktop taxonomies and design spaces to provide an in-depth discussion of existing interaction techniques. A new five-part spatial classification is proposed for the ubiquitous mobile phone interaction tasks discussed in our survey. It includes supported subtasks (position, orient, and selection), dimensionality, relative vs. absolute movement, interaction style (direct vs. indirect), and feedback from the environment (continuous vs. discrete). Key design considerations are identified for deploying these interaction techniques in real-world applications. Our analysis aims to inspire and inform the design of future smart phone interaction techniques.
Spatial Authentication on Large Interactive Multi-Touch Surfaces Johannes Schöning, Michael Rohs, Antonio Krüger Adjunct Proceedings of the 3rd IEEE Workshop on Tabletops and Interactive Surfaces (IEEE Tabletop 2008), Amsterdam, the Netherlands, October 1-3, 2008
Workshop Paper
     
The exploitation of finger and hand tracking technology based on infrared light, such as FTIR, Diffused Illumination (DI), or Diffused Surface Illumination (DSI), has enabled the construction of large-scale, low-cost, interactive multi-touch surfaces. In this context, access and security problems arise if larger teams operate these surfaces with different access rights. The team members might have several levels of authority or specific roles, which determine what functions and objects they are allowed to access via the multi-touch surface. In this paper we present first concepts and strategies to authenticate and interact with subregions of a large-scale multi-touch wall.
A GPS Tracking Application with a Tilt- and Motion-Sensing Interface Michael Mock, Michael Rohs Workshop on Mobile and Embedded Interactive Systems (MEIS at Informatik 2008), Munich, Germany, September 11, 2008
Workshop Paper
     
Combining GPS tracks with semantic annotations is the basis for large-scale data analysis tasks that give insight into the movement behavior of populations. In this paper, we present a first prototype implementation of a GPS tracking application that subsumes GPS tracking and manual annotation on a standard mobile phone. The main purpose of this prototype is to investigate the usability of its tilt- and motion-sensing interface. We provide a GPS diary function that visualizes GPS trajectories on a map, allows annotating a trajectory, and supports navigating through a trajectory by moving and tilting the mobile phone. We present the design of our application and report on the first user experiences.
Navigating Dynamically-Generated High Quality Maps on Tilt-Sensing Mobile Devices Sven Kratz, Michael Rohs Workshop on Mobile and Embedded Interactive Systems (MEIS at Informatik 2008), Munich, Germany, September 11, 2008
Workshop Paper
     
On mobile devices, navigating in high-resolution and high-density 2D information spaces, such as geographic maps, is a common and important task. In order to support this task, we expand on work done in the areas of tilt-based browsing on mobile devices and speed-dependent automatic zooming in the traditional desktop environment to create an efficient interface for browsing high-volume map data at a wide range of scales. We also discuss infrastructure aspects, such as streaming 2D content to the device and efficiently rendering it on the display, using standards such as Scalable Vector Graphics (SVG).
Mobile Interaction with the "Real World" Johannes Schöning, Michael Rohs, Antonio Krüger Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2008), Amsterdam, The Netherlands, September 2, 2008
Workshop Paper
     
Real-world objects (and the world) are usually not flat. It is unfortunate, then, that mobile augmented reality (AR) applications often concentrate on the interaction with 2D objects. Typically, 2D markers are required to track mobile devices relative to the real-world objects to be augmented, and the interaction with these objects is normally limited to the fixed plane in which these markers are located. Using platonic solids, we show how to easily extend the interaction space to tangible 3D models. In particular, we present a proof-of-concept example in which users interact with a 3D paper globe using a mobile device that augments the globe with additional information. (In other words, mobile interaction with the "real world".) We believe that this particular 3D interaction with a paper globe can be very helpful in educational settings, as it allows pupils to explore our planet in an easy and intuitive way. An important aspect is that using the real shape of the world can help to correct many common geographic misconceptions that result from the projection of the earth's surface onto a 2D plane.
Photomap: Snap, Grab and Walk away with a "YOU ARE HERE" Map Keith Cheverst, Johannes Schöning, Antonio Krüger, Michael Rohs Workshop on Mobile Interaction with the Real World (MIRW at MobileHCI 2008), Amsterdam, The Netherlands, September 2, 2008
Workshop Paper
     
One compelling scenario for the use of GPS-enabled phones is support for navigation, e.g. enabling a user to glance down at the screen of her mobile phone in order to be reassured that she is indeed located where she thinks she is. While service-based approaches to support such navigation tasks are becoming increasingly available, whereby a user downloads (for a fee) a relevant map of her current area onto her GPS-enabled phone, the approach is often far from ideal. Typically, the user is unsure as to the cost of downloading the map (especially when she is in a foreign country), and such maps are highly generalized and may not match the user's current activity and needs. For example, rather than requiring a standard map of the area on a mobile device, the user may simply require a map of a university campus with all departments, or a map showing footpaths around the area in which she is currently trekking. Indeed, one will often see such specialized maps on public signs situated where they may be required (in a just-in-time sense), and it is interesting to consider how one might enable users to walk up to such situated signs and use their mobile phone to 'take away' the map presented in order to assist their ongoing navigation activity. In this paper, we are interested in a subset of this problem space in which the user 'grabs' a map shown on a public display by taking a photograph of it and using it as a digital map on her mobile phone. We present two different scenarios for our new application, called PhotoMap: in the first, we have full control over the map design process (e.g. we are able to place markers); in the second, we use the map as it is and appropriate it for further navigation use.
Using Mobile Phones to Spontaneously Authenticate and Interact with Multi-Touch Surfaces Johannes Schöning, Michael Rohs, Antonio Krüger Proceedings of the Workshop on Designing Multi-Touch Interaction Techniques for Coupled Public and Private Displays (PPD at AVI 2008), Naples, Italy, May 31, 2008
Workshop Paper
     
The development of FTIR (Frustrated Total Internal Reflection) technology has enabled the construction of large-scale, low-cost, multi-touch displays. These displays—capable of sensing fingers, hands, and whole arms—have great potential for exploring complex data in a natural manner and easily scale in size and the number of simultaneous users. In this context, access and security problems arise if a larger team operates the surface with different access rights. The team members might have different levels of authority or specific roles, which determines what functions they are allowed to access via the multi-touch surface. In this paper we present first concepts and strategies to use a mobile phone to spontaneously authenticate and interact with sub-regions of a large-scale multi-touch wall.
Facilitating Opportunistic Interaction with Ambient Displays Christian Kray, Areti Galani, Michael Rohs Workshop on Designing and Evaluating Mobile Phone-Based Interaction with Public Displays at CHI 2008, Florence, Italy, April 5, 2008
Workshop Paper
     
Some public display systems provide information that is vital for people in their vicinity (such as departure times at airports and train stations) whereas other screens are more ambient (such as displays providing background information on exhibits in a museum). The question we are discussing in this paper is how to design interaction mechanisms for the latter, in particular how mobile phones can be used to enable opportunistic and leisurely interaction. We present results from an investigation into the use and perception of a public display in a café, and we derive some requirements for phone-based interaction with (ambient) public displays. Based on these requirements, we briefly evaluate three different interaction techniques.
Microphone as Sensor in Mobile Phone Performance Ananya Misra, Georg Essl, Michael Rohs Proceedings of the 8th International Conference on New Interfaces for Musical Expression (NIME 2008), Genova, Italy, June 5-7, 2008
Poster
     
Many mobile devices, specifically mobile phones, come equipped with a microphone. Microphones are high-fidelity sensors that can pick up sounds relating to a range of physical phenomena. Using simple feature extraction methods, parameters can be found that map sensibly to synthesis algorithms to allow expressive and interactive performance. For example, blowing noise can be used as a wind instrument excitation source. Other types of interactions, such as striking, can also be detected via microphones. Hence the microphone, in addition to allowing literal recording, serves as an additional source of input to the developing field of mobile phone performance.
User Detection for a Multi-touch Table via Proximity Sensors Jens Teichert, Marc Herrlich, Benjamin Walther-Franks, Lasse Schwarten, Markus Krause Proceedings of the IEEE Tabletops and Interactive Surfaces
Poster
  
Multitouch Motion Capturing Markus Krause, Marc Herrlich, Lasse Schwarten, Jens Teichert, Benjamin Walther-Franks Proceedings of the IEEE Tabletops and Interactive Surfaces
Poster
  
Multitouch Interface Metaphors for 3D Modeling Marc Herrlich, Markus Krause, Lasse Schwarten, Jens Teichert, Benjamin Walther-Franks Proceedings of the IEEE Tabletops and Interactive Surfaces
Poster
  