Published Research

DynaButtons: Fast Interactive Soft Buttons with Analog Control

While mechanical buttons are ubiquitous, their haptic response is fixed, limiting interface flexibility and closing off an avenue for rich feedback. In this work, we describe a new type of dynamic button that can vary its visual and haptic response via rapid shape change. To achieve this, we made several advances in the performance of embedded electroosmotic pumps: we increased core pump speed >300% over prior work, demonstrated closed-loop control, and investigated analog output that varies in response to pressure and force inputs. Published at IEEE HAPTICS 2024.

Expressive, Scalable, Mid-air Haptics with Synthetic Jets

Non-contact, mid-air haptic devices have been utilized for a wide variety of experiences, including those in the extended reality, public display, medical, and automotive domains. In this work, we explore the use of synthetic jets as a promising and under-explored mid-air haptic feedback method. We show how synthetic jets can scale from compact, low-powered devices, all the way to large, long-range, and steerable devices. We built seven functional prototypes targeting different application domains to illustrate the broad applicability of our approach. These example devices are capable of rendering complex haptic effects, varying in both time and space. Published in TOCHI 2024; presented at CHI 2024.

WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions

Pointing with one's finger is a natural and rapid way to denote an area or object of interest. In this work, we use the recent inclusion of wide-angle, rear-facing smartphone cameras, along with hardware-accelerated machine learning, to enable real-time, infrastructure-free, finger-pointing interactions on today's mobile phones. We envision users raising their hands to point in front of their phones as a "wake gesture". This can then be coupled with a voice command to trigger advanced functionality. For example, while composing an email, a user can point at a document on a table and say "attach". Published at ACM ISS 2023.

Fluid Reality: High-Resolution, Untethered Haptic Gloves using Electroosmotic Pump Arrays

We present a new approach to create high-resolution shape-changing fingerpad arrays with 20 haptic pixels/cm². Unlike prior pneumatic approaches, our actuators are low-profile (5mm thick), low-power (approximately 10mW/pixel), and entirely self-contained, with no tubing or wires running to external infrastructure. We show how multiple actuator arrays can be built into a five-finger, 160-actuator haptic glove that is untethered, lightweight (207g, including all drive electronics and battery), and has the potential to reach consumer price points at volume production. Published at ACM UIST 2023.

Pantœnna: Mouth Pose Estimation for VR/AR Headsets Using Low-Profile Antenna and Impedance Characteristic Sensing

In this work, we describe a new RF-based approach for capturing mouth pose using an antenna integrated into the underside of a VR/AR headset. Our approach side-steps privacy issues inherent in camera-based methods, while simultaneously supporting silent facial expressions that audio-based methods cannot. Further, compared to bio-sensing methods such as EMG and EIT, our method requires no contact with the wearer's body and can be fully self-contained in the headset, offering a high degree of physical robustness and user practicality. Published at ACM UIST 2023.

SmartPoser: Arm Pose Estimation with a Smartphone and Smartwatch Using UWB and IMU Data

The ability to track a user's arm pose could be valuable in a wide range of applications, including fitness, rehabilitation, augmented reality input, life logging, and context-aware assistants. Unfortunately, this capability is not readily available to consumers. Systems either require cameras, which carry privacy issues, or utilize multiple worn IMUs or markers. In this work, we describe how an off-the-shelf smartphone and smartwatch can work together to accurately estimate arm pose. Published at ACM UIST 2023.

IMUPoser: Full-Body Pose Estimation using IMUs in Phones, Watches, and Earbuds

Tracking body pose on-the-go could have powerful uses in fitness, mobile gaming, context-aware virtual assistants, and rehabilitation. However, users are unlikely to buy and wear special suits or sensor arrays to achieve this end. Instead, in this work, we explore the feasibility of estimating body pose using IMUs already in devices that many users own — namely smartphones, smartwatches, and earbuds. Our pipeline receives whatever subset of IMU data is available, potentially from just a single device, and produces a best-guess pose. We provide a comprehensive evaluation of our system, benchmarking it on both our own and existing IMU datasets. Published at ACM CHI 2023.
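
To make the "whatever subset is available" idea concrete, here is an illustrative sketch (not the authors' code) of how a variable set of device IMUs can be packed into a fixed-size model input: missing devices are zero-filled and a per-device availability mask is appended. Device names and the six-channel layout are assumptions for illustration.

```python
# Illustrative sketch: assemble a fixed-size input vector from whatever
# subset of phone/watch/earbud IMUs is currently available.
DEVICES = ["phone", "watch", "earbuds"]
CHANNELS_PER_DEVICE = 6  # 3-axis accelerometer + 3-axis gyroscope (assumed)

def assemble_input(imu_readings):
    """imu_readings: dict mapping device name -> list of 6 floats (or absent)."""
    features, mask = [], []
    for device in DEVICES:
        reading = imu_readings.get(device)
        if reading is not None:
            features.extend(reading)   # real sensor data
            mask.append(1.0)           # mark device as present
        else:
            features.extend([0.0] * CHANNELS_PER_DEVICE)  # zero-fill
            mask.append(0.0)
    return features + mask  # 3*6 + 3 = 21 values fed to the pose model

# Example: only a watch is present.
vec = assemble_input({"watch": [0.1, 0.2, 9.8, 0.0, 0.0, 0.01]})
```

A downstream pose model trained with random device dropout can then produce a best-guess pose from any such vector, including one built from a single device.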

Flat Panel Haptics: Embedded Electroosmotic Pumps for Scalable Shape Displays

We present a new, miniaturizable type of shape-changing display using embedded electroosmotic pumps (EEOPs). Our pumps, controlled and powered directly by applied voltage, are 1.5mm in thickness, and allow complete stackups under 5mm. Nonetheless, they can move their entire volume’s worth of fluid in 1 second, and generate pressures of ±50kPa, enough to create dynamic, millimeter-scale tactile features on a surface that can withstand typical interaction forces (<1N). These are the requisite technical ingredients to enable, for example, a pop-up keyboard on a flat smartphone. We experimentally quantify the mechanical and psychophysical performance of our displays and conclude with a set of example interfaces. Published at ACM CHI 2023.

Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input

Surface I/O is a novel interface approach that functionalizes the exterior surface of devices to provide haptic and touch sensing without dedicated mechanical components. Achieving this requires a unique combination of surface features spanning the macro-scale (5 cm–1 mm), meso-scale (1 mm–200 µm), and micro-scale (<200 µm). This approach simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. While we prototyped our designs using 3D printers and laser cutters, our technique is applicable to mass production methods, including injection molding and stamping, enabling passive goods with new levels of interactivity. Published at CHI 2023.

DynaTags: Low-Cost Fiducial Marker Mechanisms

Printed fiducial markers are inexpensive, easy to deploy, robust and deservedly popular. However, their data payload is also static, unable to express any state beyond being present. For this reason, more complex electronic tagging technologies exist, which can sense and change state, but either require special equipment to read or are orders of magnitude more expensive than printed markers. In this work, we explore an approach between these two extremes: one that retains the simple, low-cost nature of printed markers, yet has some of the expressive capabilities of dynamic tags. Our “DynaTags” are simple mechanisms constructed from paper that express multiple payloads, allowing practitioners and researchers to create new and compelling physical-digital experiences. Published at ICMI 2022.

Pull Gestures with Coordinated Graphics on Dual-Screen Devices

A new class of dual-touchscreen device is beginning to emerge, either constructed as two screens hinged together, or as a single display that can fold. The interactive experience on these devices is simply that of two 2D touchscreens, with little to no synergy between the interactive areas. In this work, we consider how this unique, emerging form factor creates an interesting 3D niche, in which out-of-plane interactions on one screen can be supported with coordinated graphics on the other, orthogonal screen. Following insights from an elicitation study, we focus on "pull gestures", a multimodal interaction combining on-screen touch input with in-air movement. Published at ICMI 2022.

DiscoBand: Multiview Depth-Sensing Smartwatch Strap for Hand, Body and Environment Tracking

Real-time tracking of a user’s hands, arms and environment is valuable in a wide variety of HCI applications, from context awareness to virtual reality input. In response, we developed DiscoBand, a smartwatch strap not exceeding 1 cm in thickness and capable of advanced sensing. Our strap utilizes eight distributed depth sensors imaging the hand from different viewpoints, creating a sparse 3D point cloud. An additional eight depth sensors image outwards from the band to track the user’s body and surroundings. In addition to evaluating arm and hand pose tracking, we also describe a series of supplemental applications powered by our band's data, including held object recognition and environment mapping. Published at UIST 2022.

EtherPose: Continuous Hand Pose Tracking with Wrist-Worn Antenna Impedance Characteristic Sensing

EtherPose is a continuous hand pose tracking system employing two wrist-worn antennas, from which we measure the real-time dielectric loading resulting from different hand geometries (i.e., poses). Unlike worn camera-based methods, our RF approach is more robust to occlusion from clothing and avoids capturing potentially sensitive imagery. Through a series of simulations and empirical studies, we designed a proof-of-concept, worn implementation built around compact vector network analyzers. Sensor data is then interpreted by a machine learning backend, which outputs a fully-posed 3D hand. We also studied wrist angle and micro-gesture tracking. In the future, our approach could be miniaturized and extended to include more and different types of antennas. Published at UIST 2022.

SAMoSA: Sensing Activities with Motion and Subsampled Audio

Despite advances in audio- and motion-based human activity recognition (HAR) systems, a practical, power-efficient, and privacy-sensitive activity recognition approach has remained elusive. State-of-the-art activity recognition systems generally require power-hungry and privacy-invasive audio data. This is especially challenging for resource-constrained wearables, such as smartwatches. In contrast, our approach, SAMoSA, Senses Activities using Motion and Subsampled Audio on a smartwatch. We use audio sampling rates ≤1 kHz, rendering spoken content unintelligible, while also reducing power consumption. Our multimodal deep learning model achieves a recognition accuracy of 92.2% across 26 daily activities in four indoor environments. Published at IMWUT/Ubicomp 2022.
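
As an illustrative sketch (not the paper's pipeline), subsampling can be as simple as block-averaging: each block of 16 samples from a 16 kHz microphone stream is averaged (a crude anti-alias low-pass) and emitted as one sample of a 1 kHz stream, at which rate spoken content is unintelligible. The rates here are assumptions for illustration.

```python
# Illustrative sketch: decimate audio by block-averaging, a crude
# anti-alias low-pass followed by downsampling.
def subsample(audio, in_rate=16000, out_rate=1000):
    factor = in_rate // out_rate  # 16 input samples per output sample
    n = len(audio) // factor
    return [sum(audio[i * factor:(i + 1) * factor]) / factor for i in range(n)]

low = subsample([0.5] * 16000)  # one second of audio -> 1000 samples
```

A real pipeline would use a proper low-pass filter before decimation, but the privacy and power argument is the same: 16x fewer samples to store, move, and process.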

ControllerPose: Inside-Out Body Capture with VR Controller Cameras

We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation and wireless communication already exist. By virtue of the hands operating in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. We developed a series of demo applications illustrating the potential of our approach, including more leg-centric interactions, such as balancing games and kicking soccer balls. Published at CHI 2022.

Mouth Haptics in VR using a Headset Ultrasound Phased Array

Today’s consumer virtual reality systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second in tactile sensitivity to the fingertips. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers, which can render haptic effects onto the mouth. Importantly, all components are integrated into the VR headset, meaning the user does not need to wear an additional accessory or place any external infrastructure in their room. Our haptic sensations can be felt on the lips, teeth, and tongue, which can be incorporated into new and interesting VR experiences. Published at CHI 2022.

TriboTouch: Micro-Patterned Surfaces for Low-Latency Touchscreens

Touchscreen tracking latency, often 50ms or more, creates a rubber-banding effect in everyday direct manipulation tasks such as dragging, scrolling, and drawing. In this research, we demonstrate how the addition of a thin, 2D micro-patterned surface with 5 micron spaced features can be used to reduce motor-visual touchscreen latency. When a finger, stylus, or tangible is translated across this textured surface frictional forces induce acoustic vibrations which naturally encode sliding velocity. This high-speed 1D acoustic signal is fused with conventional low-speed, but high-spatial-accuracy 2D touch position data to reduce touchscreen latency. Published at CHI 2022.

ElectriPop: Low-Cost, Shape-Changing Displays Using Electrostatically Inflated Mylar Sheets

We describe how sheets of metalized mylar can be cut and then “inflated” into complex 3D forms with electrostatic charge for use in digitally-controlled, shape-changing displays. Our technique is compatible with industrial and hobbyist cutting processes, from die and laser cutting to handheld exacto-knives and scissors. Given that mylar film costs <$1 per square meter, we can create self-actuating 3D objects for just a few cents, opening new uses in low-cost consumer goods. Published at CHI 2022.

LRAir: Non-contact Haptics Using Synthetic Jets

LRAir is a new scalable, non-contact haptic actuation technique based on a speaker in a ported enclosure which can deliver air pulses to the skin. The technique is low cost, low voltage, and uses existing electronics. We detail a prototype device's design and construction, and validate a multiple-domain impedance model with current, voltage, and pressure measurements. A non-linear phenomenon at the port creates pulsed zero-net-mass-flux flows, so-called "synthetic jets". Our prototype is capable of a time-averaged thrust of 10 mN at an air velocity of 10.4 m/s (4.3 W input power). A perception study reveals that tactile effects can be detected 25 mm away with only 380 mVrms applied voltage and 19 mWrms input power. Published at Haptics Symposium 2022.

FarOut Touch: Extending the Range of ad hoc Touch Sensing with Depth Cameras

The ability to co-opt everyday surfaces for touch interactivity has been an area of HCI research for several decades. In the past, advances in depth sensors and computer vision led to step-function improvements in ad hoc touch tracking. However, progress has slowed in recent years. We surveyed the literature and found that the very best ad hoc touch sensing systems are able to operate at ranges up to around 1.5 m. This limited range means that sensors must be carefully positioned in an environment to enable specific surfaces for interaction. Furthermore, the size of the interactive area is more table-scale than room-scale. In this research, we set ourselves the goal of doubling the sensing range of the current state-of-the-art system. Published at SUI 2021.

EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices

As smartphone screens have grown in size, single-handed use has become more cumbersome. Interactive targets that are easily seen can be hard to reach, particularly notifications and upper menu bar items. Users must either adjust their grip to reach distant targets, or use their other hand. In this research, we show how gaze estimation using a phone’s user-facing camera can be paired with IMU-tracked motion gestures to enable a new, intuitive, and rapid interaction technique on handheld phones. We describe our proof-of-concept implementation and gesture set, built on state-of-the-art techniques and capable of self-contained execution on a smartphone. In our user study, we found a mean Euclidean gaze error of 1.7 cm and a seven-class motion gesture classification accuracy of 97.3%. Published at ICMI 2021.

Retargeted Self-Haptics for Increased Immersion in VR without Instrumentation

Today’s consumer virtual reality (VR) systems offer immersive graphics and audio, but haptic feedback is rudimentary: either delivered through controllers with vibration feedback, or non-existent (i.e., the hands operating freely in the air). In this research, we explore an alternative, highly mobile and controller-free approach to haptics, where VR applications utilize the user’s own body to provide physical feedback. To achieve this, we warp (retarget) the locations of a user’s hands such that one hand serves as a physical surface or prop for the other hand. For example, a hand holding a virtual nail can serve as a physical backstop for a hand that is virtually hammering, providing a sense of impact in an air-borne and uninstrumented experience. Published at UIST 2021.

3D Hand Pose Estimation on Conventional Capacitive Touchscreens

Contemporary touch-interface devices capture the X/Y position of finger tips on the screen, and pass these coordinates to applications as though the input were points in space. Of course, human hands are much more sophisticated, able to form rich 3D poses capable of far more complex interactions than poking at a screen. In this paper, we describe how conventional capacitive touchscreens can be used to estimate 3D hand pose, enabling rich interaction opportunities. Our approach requires no new sensors, and could be deployed to existing devices with a simple software update. After describing our software pipeline, we report findings from our user study, which shows our 3D joint tracking accuracy is competitive with even external sensing techniques. Published at MobileHCI 2021.

Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion

Pose-on-the-Go is a full-body pose estimation system that uses sensors already found in today’s smartphones. This stands in contrast to prior systems, which require worn or external sensors. We achieve this result via extensive sensor fusion, leveraging a phone's front and rear cameras, the user-facing depth camera, touchscreen, and IMU. Even still, we are missing data about a user's body (e.g., angle of the elbow joint), and so we use inverse kinematics to estimate and animate probable body poses. We provide a detailed evaluation of our system, benchmarking it against a professional-grade Vicon tracking system. A series of demonstration applications underscore the unique potential of our approach, which could be enabled on many modern smartphones with a software update. Published at CHI 2021.
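
Where sensor data is missing, inverse kinematics fills the gap. As an illustrative sketch (not the paper's solver), if the shoulder and wrist positions are known but the elbow is unobserved, the elbow angle follows from the law of cosines given assumed limb lengths; the lengths and function names below are assumptions for illustration.

```python
import math

# Illustrative sketch: recover an unobserved elbow angle from shoulder and
# wrist positions via the law of cosines, with assumed limb lengths (meters).
def elbow_angle(shoulder, wrist, upper_arm=0.30, forearm=0.28):
    d = math.dist(shoulder, wrist)          # shoulder-to-wrist distance
    d = min(d, upper_arm + forearm)         # clamp unreachable distances
    cos_a = (upper_arm**2 + forearm**2 - d**2) / (2 * upper_arm * forearm)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

angle = elbow_angle((0.0, 0.0, 0.0), (0.58, 0.0, 0.0))  # fully extended arm
```

A full solver would also choose the elbow's position on the circle of solutions, typically using a plausibility prior; this snippet only shows the angular constraint.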

Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition

Millimeter wave (mmWave) Doppler radar is a new and promising sensing approach for human activity recognition, offering signal richness approaching that of microphones and cameras, but without many of the privacy-invading downsides. However, unlike audio and computer vision approaches that can draw from huge libraries of videos for training deep learning models, Doppler radar has no existing large datasets, holding back this otherwise promising sensing modality. In response, we set out to create a software pipeline that converts videos of human activities into realistic, synthetic Doppler radar data. Our approach is an important stepping stone towards reducing the burden of training human sensing systems, and could help bootstrap uses in human-computer interaction. Published at CHI 2021.

Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs

Today's smart cities use thousands of physical sensors distributed across the urban landscape to support decision making in areas such as infrastructure monitoring, public health, and resource management. These weather-hardened devices require power and connectivity, and often cost thousands of dollars just to install, let alone maintain. We show how long-range laser vibrometry can be used for low-cost, city-scale sensing. Although typically limited to just a few meters of sensing range, the use of retroreflective markers can boost this to 1 km or more. Fortuitously, cities already make extensive use of retroreflective materials for street signs, construction barriers, and many other markings. Our system can co-opt these existing markers at very long ranges and use them as unpowered accelerometers. Published at CHI 2021.

Super-Resolution Capacitive Touchscreens

Capacitive touchscreens are near-ubiquitous in today's touch-driven devices, such as smartphones and tablets. By using rows and columns of electrodes, specialized touch controllers are able to capture a 2D image of capacitance at the surface of a screen. For over a decade, capacitive "pixels" have been around 4mm in size – a surprisingly low resolution that precludes a wide range of interesting applications. In this research, we show how super-resolution techniques, long used in fields such as biology and astronomy, can be applied to capacitive touchscreen data. This opens the door to passive tangibles with higher-density fiducials and also recognition of every-day metal objects, such as keys and coins. Published at CHI 2021.
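
To illustrate the resolution gap being bridged, here is a minimal sketch (not the paper's method, which applies true super-resolution techniques) that merely bilinearly interpolates a coarse capacitance image onto a finer grid; all names and the interpolation choice are assumptions.

```python
# Illustrative sketch: bilinear upsampling of a coarse 2D capacitance image.
def upsample(grid, factor):
    rows, cols = len(grid), len(grid[0])
    out_rows, out_cols = (rows - 1) * factor + 1, (cols - 1) * factor + 1
    out = []
    for r in range(out_rows):
        y = r / factor
        y0 = min(int(y), rows - 2)  # lower source row
        fy = y - y0                 # fractional offset within the cell
        row = []
        for c in range(out_cols):
            x = c / factor
            x0 = min(int(x), cols - 2)
            fx = x - x0
            # Weighted blend of the four surrounding capacitance "pixels".
            v = (grid[y0][x0] * (1 - fx) * (1 - fy)
                 + grid[y0][x0 + 1] * fx * (1 - fy)
                 + grid[y0 + 1][x0] * (1 - fx) * fy
                 + grid[y0 + 1][x0 + 1] * fx * fy)
            row.append(v)
        out.append(row)
    return out

fine = upsample([[0.0, 1.0], [1.0, 2.0]], 4)  # 2x2 image -> 5x5 image
```

True super-resolution exploits priors and multiple frames to recover detail beyond interpolation, which is what makes denser fiducials and small-object recognition possible.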

Classroom Digital Twins with Instrumentation-Free Gaze Tracking

Classroom sensing is an important and active area of research with great potential to improve instruction. Complementing professional observers (the current best practice), automated pedagogical professional development systems can attend every class and capture fine-grained details of all occupants. Unfortunately, prior classroom gaze-sensing systems have limited accuracy and often require specialized external or worn sensors. In this research, we developed a new computer-vision-driven system that powers a 3D “digital twin” of the classroom and enables whole-class, 6DOF head gaze vector estimation without instrumenting any of the occupants. Published at CHI 2021.

BodySLAM: Opportunistic Body Tracking in Multi-User AR/VR Experiences

In this work, we take advantage of an emerging use case: co-located, multi-user AR/VR experiences. In such contexts, participants are often able to see each other’s bodies, hands, mouths, apparel, and other visual facets, even though they generally do not see their own bodies. Using the existing outwards-facing cameras on AR/VR headsets, these visual dimensions can be opportunistically captured and digitized, and then relayed back to their respective users in real time. Our system name was inspired by SLAM (simultaneous localization and mapping) approaches to mapping unknown environments. In a similar vein, BodySLAM uses disparate camera views from many participants to reconstruct the geometric arrangement of users in an environment, as well as body pose and appearance. Published at SUI 2020.

Direction-of-Voice (DoV) Estimation for Intuitive Speech Interaction

In addition to receiving and processing spoken commands, we propose that computing devices also infer the Direction of Voice (DoV). Such DoV estimation innately enables voice commands with addressability, in a similar way to visual gaze, but without the need for cameras. This allows users to easily and naturally interact with diverse ecosystems of voice-enabled devices, whereas today’s voice interactions suffer from multi-device confusion. With DoV estimation providing a disambiguation mechanism, a user can speak to a particular device and have it respond; e.g., a user could ask their smartphone for the time, laptop to play music, smart speaker for the weather, and TV to play a show. Published at UIST 2020.

VibroComm: Using Commodity Gyroscopes for Vibroacoustic Data Reception

Inertial Measurement Units (IMUs) with gyroscopic sensors are standard in today’s mobile devices. We show that these sensors can be co-opted for vibroacoustic data reception. Our approach, called VibroComm, requires direct physical contact with a transmitting (i.e., vibrating) surface. This makes interactions targeted and explicit in nature, making it well suited for contexts with many targets or those requiring explicit user intent. It also offers an orthogonal dimension of physical security to wireless technologies like Bluetooth and NFC. We achieve a transfer rate over 2000 bits/sec with less than 5% packet loss – an order of magnitude faster than prior IMU-based approaches at a quarter of the loss rate. Published at MobileHCI 2020.
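
As an illustrative sketch (not VibroComm's actual modulation scheme), vibroacoustic reception can be pictured as on/off keying: the receiver slices the gyroscope magnitude stream into symbol windows and compares each window's mean energy against a threshold. Symbol timing and threshold are assumed known here for simplicity.

```python
# Illustrative sketch: recover on/off-keyed bits from gyroscope magnitude
# samples by thresholding per-symbol mean energy.
def decode_bits(samples, samples_per_bit, threshold):
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        window = samples[i:i + samples_per_bit]
        energy = sum(abs(s) for s in window) / samples_per_bit
        bits.append(1 if energy > threshold else 0)
    return bits

# Synthetic gyroscope trace: vibration bursts encoding the bits 1, 0, 1.
signal = [0.9] * 10 + [0.05] * 10 + [0.9] * 10
bits = decode_bits(signal, 10, 0.5)
```

Reaching 2000 bits/sec requires far denser modulation and synchronization than this, but the core idea, treating a surface-coupled gyroscope as a demodulator, is the same.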

Listen Learner: Automated Class Discovery and One-Shot Interactions for Acoustic Activity Recognition

Acoustic activity recognition has emerged as a foundational element for imbuing devices with context-driven capabilities, enabling richer, more assistive, and more accommodating computational experiences. Traditional approaches rely either on custom models trained in situ, or general models pre-trained on preexisting data, with each approach having accuracy and user burden implications. We present Listen Learner, a technique for activity recognition that gradually learns events specific to a deployed environment while minimizing user burden. More specifically, we built an end-to-end system for self-supervised learning of events labelled through one-shot voice interactions. Published at CHI 2020.

Wireality: Complex Tangible Geometries in VR with Worn Multi-String Haptics

Today's virtual reality (VR) systems allow users to explore immersive new worlds and experiences through sight. Unfortunately, most VR systems lack haptic feedback, and even high-end consumer systems use only basic vibration motors. This clearly precludes realistic physical interactions with virtual objects. Larger obstacles, such as walls, railings, and furniture are not simulated at all. In response, we developed Wireality, a self-contained worn system that allows for individual joints on the hands to be accurately arrested in 3D space through the use of retractable wires that can be programmatically locked. This allows for convincing tangible interactions with complex geometries, such as wrapping fingers around a railing. Published at CHI 2020.

Digital Ventriloquism: Giving Voice to Everyday Objects

Smart speakers with voice agents have seen rapid adoption in recent years. These devices use traditional speaker coils, which means the agent’s voice always emanates from the device itself, even when that information might be more contextually and spatially relevant elsewhere. We describe our work on Digital Ventriloquism, which allows a single smart speaker to render sounds onto passive objects in the environment. Not only can these items speak, but also make other sounds, such as notification chimes. Importantly, objects need not be modified in any way: the only requirement is line of sight to our speaker. As smart speaker microphones are omnidirectional, it is possible to have interactive conversations with totally passive objects, such as doors and plants. Published at CHI 2020.

Enhancing Mobile Voice Assistants with WorldGaze

Contemporary voice assistants, such as Siri, require that objects of interest be specified in spoken commands. WorldGaze is a software-only method for smartphones that tracks the real-world gaze of a user, which voice agents can utilize for rapid, natural, and precise interactions. We achieve this by simultaneously opening the front and rear cameras of a smartphone. The front-facing camera is used to track the head in 3D, including estimating its direction vector. As the geometry of the front and back cameras are fixed and known, we can raycast the head vector into the 3D world scene as captured by the rear-facing camera. This allows the user to intuitively define an object or region of interest using their head gaze. Published at CHI 2020.
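
The core geometric step can be sketched in a few lines (an illustrative sketch, not WorldGaze's implementation): the head position and gaze direction estimated from the front camera are raycast into the rear camera's scene, here simplified to intersecting a ground plane at z = 0. The coordinate convention and names are assumptions.

```python
# Illustrative sketch: intersect a head-gaze ray with a horizontal plane
# (z = plane_z) in the rear camera's scene coordinates.
def raycast_to_plane(origin, direction, plane_z=0.0):
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        return None  # ray is parallel to the plane
    t = (plane_z - oz) / dz
    if t < 0:
        return None  # plane is behind the viewer
    return (ox + t * dx, oy + t * dy, plane_z)

# Head 2 m above the plane, gazing down and forward.
hit = raycast_to_plane((0.0, 1.6, 2.0), (0.0, -0.5, -1.0))
```

In the full system, the ray is tested against the 3D scene reconstructed from the rear camera rather than a single plane, yielding the object or region the user is attending to.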

Sozu: Self-Powered Radio Tags for Building-Scale Activity Sensing

Robust, wide-area sensing of human environments has been a long-standing research goal. We present Sozu, a new low-cost sensing system that can detect a wide range of events wirelessly, through walls and without line of sight, at whole-building scale. To achieve this in a battery-free manner, Sozu tags convert energy from activities that they sense into RF broadcasts, acting like miniature self-powered radio stations. We describe the results from a series of iterative studies, culminating in a deployment study with 30 instrumented objects. Results show that Sozu is very accurate, with true positive event detection exceeding 99%, with almost no false positives. Published at UIST 2019.

LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces

LightAnchors is a new method to display spatially-anchored data in augmented reality applications. Unlike most prior tracking methods, which instrument objects with markers (often large and/or obtrusive), we take advantage of point lights already found in many objects and environments. For example, most electrical appliances now feature small (LED) status lights, and light bulbs are common in indoor and outdoor settings. In addition to leveraging these point lights for in-view anchoring (i.e., attaching information and interfaces to specific objects), we also co-opt these lights for data transmission, blinking them rapidly to encode binary data. Devices need only an inexpensive microcontroller with the ability to blink an LED to enable new experiences in AR. Published at UIST 2019.

ActiTouch: Precise Touch Segmentation for On-Skin VR/AR Interfaces

Contemporary AR/VR systems use in-air gestures or handheld controllers for interactivity. This overlooks the skin as a convenient surface for tactile, touch-driven interactions, which are generally more accurate and comfortable than free space interactions. In response, we developed ActiTouch, a new electrical method that enables precise on-skin touch segmentation by using the body as an RF waveguide. We combine this method with computer vision, enabling a system with both high tracking precision and robust touch detection. We quantify the accuracy of our approach through a user study and demonstrate how it can enable touchscreen-like interactions on the skin. Published at UIST 2019.

MeCap: Whole-Body Digitization for Low-Cost VR/AR Headsets

Low-cost, smartphone-powered VR/AR headsets are becoming more popular. These basic devices, little more than plastic or cardboard shells, lack advanced features such as controllers for the hands, limiting their interactive capability. Moreover, even high-end consumer headsets lack the ability to track the body and face. We introduce MeCap, which enables commodity VR headsets to be augmented with powerful motion capture (“MoCap”) and user-sensing capabilities at very low cost (under $5). Using only a pair of hemi-spherical mirrors and the existing rear-facing camera of a smartphone, MeCap provides real-time estimates of a wearer’s 3D body pose, hand pose, facial expression, physical appearance and surrounding environment. Published at UIST 2019.

Exploring the Efficacy of Sparse, General-Purpose Sensor Constellations for Wide-Area Ubiquitous Sensing

Future smart homes, offices, stores and many other environments will increasingly be monitored by distributed sensors, supporting rich, context-sensitive applications. There are two opposing instrumentation approaches. On one end is full sensor saturation, where every object of interest is tagged with a sensor. On the other end, we can imagine a hypothetical, omniscient sensor capable of detecting events throughout an entire building from one location. Neither approach is currently practical, and thus we explore the middle ground between these two extremes: a sparse constellation of sensors working together to provide the benefits of full saturation, but without the social, aesthetic, maintenance and financial drawbacks. Published at IMWUT/UbiComp 2019.

EduSense: Practical Classroom Sensing at Scale

EduSense is a comprehensive sensing system that produces a plethora of theoretically-motivated visual and audio features correlated with effective instruction, which could feed professional development tools in much the same way as a Fitbit sensor reports step count to an end user app. Although previous systems have demonstrated some of our features in isolation, EduSense is the first to unify them into a cohesive, real-time, in-the-wild evaluated, and practically-deployable system. Our two studies quantify where contemporary machine learning techniques are robust, and where they fall short, illuminating where future work remains to bring the vision of automated classroom analytics to reality. Published at IMWUT/UbiComp 2019.

Sensing Fine-Grained Hand Activity with Smartwatches

Capturing fine-grained hand activity could make computational experiences more powerful and contextually aware. Indeed, philosopher Immanuel Kant argued, "the hand is the visible part of the brain." However, most prior work has focused on detecting whole-body activities, such as walking, running and bicycling. In this work, we explore the feasibility of sensing hand activities from commodity smartwatches, which are the most practical vehicle for achieving this vision. Our investigations started with a 50-participant, in-the-wild study, which captured hand activity labels over nearly 1000 worn hours. We conclude with a second, in-lab study that evaluates our classification stack, demonstrating 95.2% accuracy across 25 hand activities. Published at CHI 2019.

Interferi: Gesture Sensing using On-Body Acoustic Interferometry

Interferi uses ultrasonic transducers resting on the skin to create acoustic interference patterns inside the wearer’s body, which interact with anatomical features in complex, yet characteristic ways. We focus on two areas of the body with great expressive power: the hands and face. For each, we built and tested a series of worn sensor configurations, which we used to identify useful transducer arrangements and machine learning features. We created final prototypes for the hand and face, which our study results show can support eleven- and nine-class gesture sets at 93.4% and 89.0% accuracy, respectively. We also evaluated our system in four continuous tracking tasks, including smile intensity and weight estimation, in which error never exceeded 9.5%. Published at CHI 2019.

SurfaceSight: A New Spin on Touch, User, and Object Sensing for IoT Experiences

SurfaceSight is an approach that enriches IoT experiences with rich touch and object sensing, offering a complementary input channel and increased contextual awareness for "smart" devices. For sensing, we incorporate LIDAR into the base of IoT devices, providing an expansive, ad hoc plane of sensing just above the surface on which devices rest. We can recognize and track a wide array of objects, including finger input and hand gestures. We can also track people and estimate which way they are facing. We evaluate the accuracy of these new capabilities and illustrate how they can be used to power novel and contextually-aware interactive experiences. Published at CHI 2019.
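
To make the idea of an "ad hoc plane of sensing" concrete, here is a minimal sketch of how a 2D LIDAR scan could be segmented into discrete objects on a surface. This is an illustrative simplification, not SurfaceSight's actual pipeline; the `max_gap` threshold and function name are assumptions.

```python
def cluster_scan(points, max_gap=0.05):
    """Group consecutive 2D LIDAR returns (x, y) in metres into
    object clusters: a new cluster starts whenever the gap between
    adjacent returns exceeds `max_gap`."""
    if not points:
        return []
    clusters = []
    current = [points[0]]
    for prev, cur in zip(points, points[1:]):
        gap = ((cur[0] - prev[0]) ** 2 + (cur[1] - prev[1]) ** 2) ** 0.5
        if gap <= max_gap:
            current.append(cur)
        else:
            clusters.append(current)
            current = [cur]
    clusters.append(current)
    return clusters
```

Each resulting cluster can then be tracked over successive scans to recognize objects, fingers, or people near the device.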

BeamBand: Hand Gesture Sensing with Ultrasonic Beamforming

BeamBand is a wrist-worn system that uses ultrasonic beamforming for hand gesture sensing. Using an array of small transducers, arranged on the wrist, we can ensemble acoustic wavefronts to project acoustic energy at specified angles and focal lengths. This allows us to interrogate the surface geometry of the hand with inaudible sound in a raster-scan-like manner, from multiple viewpoints. We use the resulting, characteristic reflections to recognize hand pose. In our user study, we found that BeamBand supports a six-class hand gesture set at 94.6% accuracy. We describe our software and hardware, and future avenues for integration into devices such as smartwatches and VR controllers. Published at CHI 2019.
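
The delay-and-sum principle behind this kind of beam steering can be sketched as follows: each transducer in a linear array fires with a small time offset so that the wavefronts align along a chosen direction. The element count, spacing, and function names below are illustrative, not taken from the paper.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def steering_delays(n_elements, spacing_m, angle_deg):
    """Per-element firing delays (seconds) that steer a linear
    ultrasonic array toward `angle_deg` off broadside."""
    theta = math.radians(angle_deg)
    delays = []
    for i in range(n_elements):
        # Path-length difference of element i relative to element 0
        # along the steering direction.
        d = i * spacing_m * math.sin(theta)
        delays.append(d / SPEED_OF_SOUND)
    # Shift so the earliest-firing element has zero delay.
    t0 = min(delays)
    return [t - t0 for t in delays]
```

Sweeping `angle_deg` across a range of values yields the raster-scan-like interrogation of the hand's surface geometry described above.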

The HCI Innovator's Dilemma

Generating ideas and solutions, evaluating and refining them, and hopefully putting them into practice is the essence of an HCI professional’s life, whether it be in software or hardware. As inventors and problem solvers, innovation is our game, and for this reason it is interesting to consider the lifecycle of HCI ideas. At first we might think that HCI, as a fundamentally technology-driven field, would largely follow along with other technology trends. Fortunately, many people have written extensively on the business and history of technology, creating a cornucopia of curves and charts to draw upon. In this article, I pull together a few notable ones to make the case that HCI does not follow a standard innovation lifecycle. Published in ACM Interactions, October 2018, cover story.

Ubicoustics: Plug-and-Play Acoustic Activity Recognition

Despite sound being a rich source of information, computing devices with microphones do not leverage audio to glean useful insights about their physical and social context. Ubicoustics is a novel, real-time, sound-based activity recognition system that uses data from professional sound effect libraries traditionally used in the entertainment industry. These well-labeled and high-quality sounds are the perfect atomic unit for data augmentation, allowing us to exponentially grow our deep learning training data in realistic ways. We quantify the performance of our approach across a range of environments and device categories and show that microphone-equipped computing devices already have the requisite capability to unlock real-time activity recognition comparable to human accuracy. Published at UIST 2018.
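
The augmentation idea (combining each labeled effect with gain variations and background mixes so the training set grows multiplicatively) can be sketched as below. This is illustrative only; the actual system operates on spectrograms feeding a deep model, and all names here are assumptions.

```python
import itertools

def augment(clip, backgrounds, gains):
    """Grow one labeled sound-effect clip into many training
    variants by amplitude scaling and background mixing.
    `clip` and each background are equal-length sample lists;
    every (gain, background) combination yields a new variant."""
    variants = []
    for gain, bg in itertools.product(gains, backgrounds):
        variants.append([gain * c + b for c, b in zip(clip, bg)])
    return variants
```

With g gain levels and b backgrounds, each clip yields g × b variants, which is what lets a modest sound-effect library expand into a large, realistic training corpus.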

Vibrosight: Long-Range Vibrometry for Smart Environment Sensing

Vibrosight is a new approach to sense activities across entire rooms using long-range laser vibrometry. Unlike a microphone, our approach can sense physical vibrations at one specific point, making it robust to interference from other activities and noisy environments. This property enables detection of simultaneous activities, which has proven challenging in prior work. Through a series of evaluations, we show that Vibrosight can offer high accuracies at long range, allowing our sensor to be placed in an inconspicuous location. We also explore a range of additional uses, including data transmission, sensing user input and modes of appliance operation, and detecting human movement and activities on work surfaces. Published at UIST 2018.

EyeSpyVR: Interactive Eye Sensing Using Off-the-Shelf, Smartphone-Based VR Headsets

Low cost virtual reality headsets powered by smartphones are becoming ubiquitous. Their unique position on the user's face opens interesting opportunities for interactive sensing. We describe EyeSpyVR, a software-only eye sensing approach for smartphone-based VR, which uses a phone's front-facing camera as a sensor and its display as a passive illuminator. Our proof-of-concept system, using a commodity smartphone, enables four sensing modalities: detecting when the VR headset is worn, detecting blinks, recognizing the wearer's identity, and coarse gaze tracking – features typically found in high-end or specialty VR headsets. We describe our implementation and results from a 70 participant user study. Published at UbiComp 2018.

Crowd-AI Camera Sensing in the Real World

Smart appliances with built-in cameras, such as the Nest Cam and Amazon Echo Look, are becoming pervasive. They hold the promise of bringing high fidelity, contextually rich sensing into our homes, workplaces and other environments. Despite recent advances, computer vision systems are still limited in the types of questions they can answer. In response, researchers have investigated hybrid crowd- and AI-powered methods that collect human labels to bootstrap automatic processes. We describe our iterative development of Zensors++, a full-stack crowd-AI camera-based sensing system that moves significantly beyond prior work in terms of scale, question diversity, accuracy, latency, and economic feasibility. Published at UbiComp 2018.

Wall++: Room-Scale Interactive and Context-Aware Sensing

Walls make up a majority of readily accessible indoor surface area, and yet they are static – their primary function is to be a wall, separating spaces and hiding infrastructure. We present Wall++, a low-cost sensing approach that allows walls to become a smart infrastructure. Instead of merely separating spaces, walls can now enhance rooms with sensing and interactivity. Our wall treatment and sensing hardware can track users’ touch and gestures, as well as estimate body pose if they are close. By capturing airborne electromagnetic noise, we can also detect what appliances are active and where they are located. Through a series of evaluations, we demonstrate Wall++ can enable robust room-scale interactive and context-aware applications. Published at CHI 2018.

LumiWatch: On-Arm Projected Graphics and Touch Input

Compact, worn computers with projected, on-skin touch interfaces have been a long-standing yet elusive goal, largely written off as science fiction. In this work, we present the first, fully-functional and self-contained projection smartwatch implementation, containing the requisite compute, power, projection and touch-sensing capabilities. Our watch offers roughly 40 square centimeters of interactive surface area – more than five times that of a typical smartwatch display. We demonstrate continuous 2D finger tracking with interactive, rectified graphics, transforming the arm into a touchscreen. We discuss our hardware and software implementation, as well as evaluation results regarding touch accuracy and projection visibility. Published at CHI 2018.

Pulp Nonfiction: Low-Cost Touch Tracking for Paper

Paper continues to be a versatile and indispensable material in the 21st century. Of course, paper is a passive medium with no inherent interactivity, precluding us from computationally-enhancing a wide variety of paper-based activities. In this work, we present a new technical approach for bringing the digital and paper worlds closer together, by enabling paper to track finger input and also drawn input with writing implements. Importantly, for paper to still be considered paper, our method had to be very low cost. This necessitated research into materials, fabrication methods and sensing techniques. We describe the outcome of our investigations and show that our method can be sufficiently low-cost and accurate to enable new interactive opportunities with this pervasive and venerable material. Published at CHI 2018.

Supporting Responsive Cohabitation Between Virtual Interfaces and Physical Objects on Everyday Surfaces

Systems for providing mixed physical-virtual interaction on desktop surfaces have been proposed for decades, though no such systems have achieved widespread use. One major factor contributing to this lack of acceptance may be that these systems are not designed for the variety and complexity of actual work surfaces, which are often in flux and cluttered with physical objects. In this project, we use an elicitation study and interviews to synthesize a list of ten interactive behaviors that desk-bound, digital interfaces should implement to support responsive cohabitation with physical objects. As a proof of concept, we implemented these interactive behaviors in a working augmented desk system, demonstrating their imminent feasibility. Published at EICS 2017.

Deus EM Machina: On-Touch Contextual Functionality for Smart IoT Appliances

Homes, offices and many other environments will be increasingly saturated with connected, computational appliances, forming the “Internet of Things” (IoT). At present, most of these devices rely on mechanical inputs, webpages, or smartphone apps for control. However, as IoT devices proliferate, these existing interaction methods will become increasingly cumbersome. We propose an approach where users simply tap a smartphone to an appliance to discover and rapidly utilize contextual functionality. To achieve this, our prototype smartphone recognizes physical contact with uninstrumented appliances, and summons appliance-specific interfaces and contextually relevant functionality. Published at CHI 2017.

Electrick: Low-Cost Touch Sensing Using Electric Field Tomography

Electrick is a low-cost and versatile sensing technique that enables touch input on a wide variety of objects and surfaces, whether small or large, flat or irregular. This is achieved by using electric field tomography in concert with an electrically conductive material, which can be easily and cheaply added to objects and surfaces. We show that our technique is compatible with commonplace manufacturing methods, such as spray/brush coating, vacuum forming, and casting/molding – enabling a wide range of possible uses and outputs. Our technique can also bring touch interactivity to rapidly fabricated objects, including those that are laser cut or 3D printed. Published at CHI 2017.

Synthetic Sensors: Towards General-Purpose Sensing

The promise of smart environments and the Internet of Things (IoT) relies on robust sensing of diverse environmental facets. Traditional approaches rely on direct or distributed sensing, most often by measuring one particular aspect of an environment with special-purpose sensors. In this work, we explore the notion of general-purpose sensing, wherein a single, highly capable sensor can indirectly monitor a large context, without direct instrumentation of objects. Further, through what we call Synthetic Sensors, we can virtualize raw sensor data into actionable feeds, whilst simultaneously mitigating immediate privacy issues. We used a series of structured, formative studies to inform the development of new sensor hardware and accompanying information architecture. Published at CHI 2017.

Thumprint: Socially-Inclusive Local Group Authentication Through Shared Secret Knocks

Small, local groups who share resources have unmet authentication needs. For these groups, existing authentication strategies either create unnecessary social divisions, do not identify individuals, do not equitably distribute security responsibility, or make it difficult to share or revoke access. To explore an alternative, we designed Thumprint: inclusive group authentication with shared secret knocks. All group members share one secret knock, but individual expressions of the secret are discernible. We evaluated the usability and security of our concept; results suggest that individuals who enter the same shared thumprint are distinguishable from one another, that people can enter thumprints consistently over time, and that thumprints are resilient to casual adversaries. Published at CHI 2017.

CapCam: Enabling Rapid, Ad-Hoc, Position-Tracked Interactions Between Devices

CapCam is a novel technique that enables smartphones (and similar devices) to establish quick, ad-hoc connections with a host touchscreen device, simply by pressing a device to the screen’s surface. Pairing data, used to bootstrap a conventional wireless connection, is transmitted optically to the phone’s rear camera. This approach utilizes the near-ubiquitous rear camera on smart devices, making it applicable to a wide range of devices, both new and old. CapCam also tracks devices' physical position on the host capacitive touchscreen without any instrumentation, enabling a wide range of targeted and spatial interactions. We quantify the communication performance of our pairing approach and demonstrate data transmission rates up to four times faster than prior camera-based techniques. Published at ISS 2016.

DIRECT: Making Touch Tracking on Ordinary Surfaces Practical with Hybrid Depth-Infrared Sensing

Many research systems have demonstrated that depth cameras, combined with projectors for output, can turn nearly any reasonably flat surface into an ad hoc, touch-sensitive display. However, even with the latest generation of depth cameras, it has been difficult to obtain sufficient sensing fidelity across a table-sized surface to get much beyond a proof-of-concept demonstration. In this research, we present DIRECT, a novel touch-tracking algorithm that merges depth and infrared imagery captured by a commodity sensor. Our results show that our technique boosts touch detection accuracy by 15% and reduces positional error by 55% compared to the next best-performing technique in the literature. Published at ISS 2016.

ViBand: High-Fidelity Bio-Acoustic Sensing Using Commodity Smartwatch Accelerometers

Smartwatches and wearables are unique in that they reside on the body, presenting great potential for always-available input and interaction. Additionally, their position on the wrist makes them ideal for capturing bio-acoustic signals. We developed a custom smartwatch kernel that boosts the sampling rate of a smartwatch’s existing accelerometer, enabling many new applications. For example, we can use bio-acoustic data to classify hand gestures such as flicks, claps, scratches, and taps. Bio-acoustic sensing can also detect the vibrations of grasped mechanical or motor-powered objects, enabling object recognition. Finally, we can generate structured vibrations using a transducer, and show that data can be transmitted through the human body. Published at UIST 2016.
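
A minimal sketch of the kind of spectral featurization that makes high-rate accelerometer data useful for classification appears below. A real implementation would use an FFT and a trained classifier rather than this naive DFT; the function names are illustrative.

```python
import cmath

def magnitude_spectrum(samples):
    """Naive DFT magnitude spectrum of one accelerometer axis.
    Bio-acoustic signatures of grasped objects (e.g. motor hum)
    appear as stable spectral peaks once the sampling rate is
    high enough to capture them."""
    n = len(samples)
    spectrum = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        spectrum.append(abs(s) / n)
    return spectrum

def dominant_bin(samples):
    """Index of the strongest non-DC frequency bin, a simple
    feature for distinguishing vibration sources."""
    spec = magnitude_spectrum(samples)
    return max(range(1, len(spec)), key=lambda k: spec[k])
```

Features like these, computed over short windows, are what a gesture or object classifier would consume.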

AuraSense: Enabling Expressive Around-Smartwatch Interactions with Electric Field Sensing

AuraSense enables rich, around-device, smartwatch interactions using electric field sensing. To explore how this sensing approach could enhance smartwatch interactions, we considered different antenna configurations and how they could enable useful interaction modalities. We identified four configurations that can support six well-known modalities of particular interest and utility, including gestures above the watchface and touchscreen-like finger tracking on the skin. We quantify the feasibility of these input modalities in a series of user studies, which suggest that AuraSense can be low latency and robust across both users and environments. Published at UIST 2016.

Advancing Hand Gesture Recognition with High Resolution Electrical Impedance Tomography

We recently used Electrical Impedance Tomography (EIT) to detect hand gestures using an instrumented smartwatch (see Tomo Project). This prior work demonstrated great promise for non-invasive, high accuracy recognition of gestures for interactive control. In this research, we introduce a new system that offers improved sampling speed and resolution. This, in turn, enables superior interior reconstruction and gesture recognition. More importantly, we use our new system as a vehicle for experimentation – we compare two EIT sensing methods and three different electrode resolutions. Results from in-depth empirical evaluations and a user study shed light on the future feasibility of EIT for sensing human input. Published at UIST 2016.

SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin

SkinTrack is a wearable system that enables continuous touch tracking on the skin. It consists of a ring, which emits a continuous high frequency AC signal, and a sensing wristband with multiple electrodes. Due to the phase delay inherent in a high-frequency AC signal propagating through the body, a phase difference can be observed between pairs of electrodes. SkinTrack measures these phase differences to compute a 2D finger touch coordinate. Our approach can segment touch events at 99% accuracy, and resolve the 2D location of touches with a mean error of 7.6mm. As our approach is compact, non-invasive, low-cost and low-powered, we envision the technology being integrated into future smartwatches, supporting rich touch interactions beyond the confines of the small touchscreen. Published at CHI 2016.
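
The core phase-to-position relationship can be sketched as follows for the simplified 1D case of a finger on the line between two electrodes. The signal frequency and effective propagation speed below are placeholder assumptions, not values from the paper.

```python
import math

SIGNAL_FREQ_HZ = 80e6       # hypothetical ring signal frequency
PROPAGATION_SPEED = 3.0e7   # assumed effective speed through tissue, m/s

def phase_to_path_diff(phase_diff_rad):
    """Path-length difference (metres) implied by a phase difference
    measured between two sensing electrodes:
    phase = 2*pi * path_diff / wavelength."""
    wavelength = PROPAGATION_SPEED / SIGNAL_FREQ_HZ
    return phase_diff_rad * wavelength / (2 * math.pi)

def offset_from_midpoint(phase_diff_rad):
    """1D finger offset from the midpoint of an electrode pair.
    For a touch on the line between the electrodes, the two signal
    paths differ by twice the offset from the midpoint."""
    return phase_to_path_diff(phase_diff_rad) / 2.0
```

Combining such offsets from multiple electrode pairs along two axes is what yields the 2D touch coordinate.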

SweepSense: Ad Hoc Configuration Sensing Using Reflected Swept-Frequency Ultrasonics

Devices can be made more intelligent if they have the ability to sense their surroundings and physical configuration. However, adding extra, special purpose sensors increases size, price and build complexity. Instead, we use speakers and microphones already present in a wide variety of devices to open new sensing opportunities. Our technique sweeps through a range of inaudible frequencies and measures the intensity of reflected sound to deduce information about the immediate environment, chiefly the materials and geometry of proximate surfaces. We offer several example uses, two of which we implemented as self-contained demos, and conclude with an evaluation that quantifies their performance and demonstrates high accuracy. Published at IUI 2016.
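
As a rough sketch of the sensing loop, the device emits a set of inaudible frequencies, records the reflected intensity at each, and matches the resulting profile against stored templates for known configurations. Everything below (frequency range, matching method, names) is illustrative, not the paper's implementation.

```python
def sweep_frequencies(start_hz, end_hz, steps):
    """Evenly spaced inaudible frequencies to emit during one sweep."""
    step = (end_hz - start_hz) / (steps - 1)
    return [start_hz + i * step for i in range(steps)]

def classify_profile(measured, templates):
    """Match a reflected-intensity profile (one value per swept
    frequency) against labeled template profiles by Euclidean
    distance; return the best-matching configuration label."""
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        dist = sum((m - t) ** 2 for m, t in zip(measured, template)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```
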

Estimating 3D Finger Angle on Commodity Touchscreens

We describe a new method that estimates a finger’s angle relative to the screen. The angular vector is described using two angles – altitude and azimuth – more colloquially referred to as pitch and yaw. Our approach works in tandem with conventional multitouch finger tracking, offering two additional analog degrees of freedom for a single touch point. Uniquely, our approach only needs data provided by commodity touchscreen devices, requiring no additional hardware or sensors. We prototyped our solution on two platforms – a smartphone and smartwatch – each fully self-contained and operating in real-time. We quantified the accuracy of our technique through a user study, and explored the feasibility of our approach through example applications and interactions. Published at ITS 2015.
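
One plausible way to recover yaw from low-level touchscreen data is to take the orientation of the capacitive blob's principal axis, since a tilted finger produces an elongated intensity footprint. This is an illustrative sketch using image moments, not necessarily the paper's method.

```python
import math

def blob_yaw_deg(pixels):
    """Estimate finger yaw (azimuth) as the principal-axis angle of
    a capacitive blob. `pixels` is a list of (x, y, intensity)
    samples from the touch controller's capacitance image."""
    total = sum(w for _, _, w in pixels)
    cx = sum(x * w for x, _, w in pixels) / total
    cy = sum(y * w for _, y, w in pixels) / total
    # Second central moments of the intensity distribution.
    mxx = sum(w * (x - cx) ** 2 for x, _, w in pixels) / total
    myy = sum(w * (y - cy) ** 2 for _, y, w in pixels) / total
    mxy = sum(w * (x - cx) * (y - cy) for x, y, w in pixels) / total
    return math.degrees(0.5 * math.atan2(2 * mxy, mxx - myy))
```

Pitch could similarly be estimated from blob eccentricity or the offset between the blob centroid and the reported touch point.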

CapAuth: Identifying and Differentiating User Handprints on Commodity Capacitive Touchscreens

User identification and differentiation have implications in many application domains, including security, personalization, and co-located multiuser systems. In response, dozens of approaches have been developed, from fingerprint and retinal scans, to hand gestures and RFID tags. We propose CapAuth, a technique that uses existing, low-level touchscreen data, combined with machine learning classifiers, to provide real-time authentication and even identification of users. As a proof-of-concept, we ran our software on an off-the-shelf Nexus 5 smartphone. Our user study demonstrates twenty-participant authentication accuracies of 99.6%. For twenty-user identification, our software achieved 94.0% accuracy and 98.2% on groups of four, simulating family use. Published at ITS 2015.

Quantifying the Targeting Performance Benefit of Electrostatic Haptic Feedback on Touchscreens

Touchscreens with dynamic electrostatic friction are a compelling, low-latency and solid-state haptic feedback technology. Work to date has focused on minimum perceptual difference, texture rendering, and fingertip-surface models. However, no work to date has quantified how electrostatic feedback can be used to improve user performance, in particular targeting, where virtual objects rendered on touchscreens can offer tactile feedback. Our results show that electrostatic haptic feedback can improve targeting speed by 7.5% compared to conventional flat touchscreens. Published at ITS 2015.

Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions

Gaze interaction is particularly well suited to rapid, coarse, absolute pointing, but lacks natural and expressive mechanisms to support modal actions. Conversely, free space hand gesturing is slow and imprecise for pointing, but has unparalleled strength in gesturing, which can be used to trigger a wide variety of interactive functions. Thus, these two modalities are highly complementary. By fusing gaze and gesture into a unified and fluid interaction modality, we can enable rapid, precise and expressive free-space interactions that mirror natural use. Moreover, although both approaches are independently poor for pointing tasks, combining them can achieve pointing performance superior to either method alone. This opens new interaction opportunities for gaze and gesture systems alike. Published at ICMI 2015.

Tomo: Wearable, Low-Cost, Electrical Impedance Tomography for Hand Gesture Recognition

Tomo is a wearable, low-cost system using Electrical Impedance Tomography (EIT) to recover the interior impedance geometry of a user’s arm. This is achieved by measuring the cross-sectional impedances between all pairs of eight electrodes resting on a user’s skin. Our approach is sufficiently compact and low-powered that we integrated the technology into a prototype wrist- and armband, which can monitor and classify hand gestures in real-time. We ultimately envision this technique being integrated into future smartwatches, allowing hand gestures and direct touch manipulation to work synergistically to support interactive tasks on small screens. Published at UIST 2015.
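
With eight electrodes, "all pairs" yields 28 cross-sectional impedance measurements per frame. The sketch below enumerates those pairs and matches a measurement frame against labeled examples with a nearest-neighbor rule; this is an illustrative classifier, and the real pipeline (including interior reconstruction) is more involved.

```python
from itertools import combinations

def electrode_pairs(n=8):
    """All cross-sectional measurement pairs for an n-electrode band."""
    return list(combinations(range(n), 2))

def classify_gesture(frame, training):
    """Nearest-neighbor match of one impedance frame (one value per
    electrode pair) against labeled training frames."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda label: dist(frame, training[label]))
```
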

EM-Sense: Touch Recognition of Uninstrumented, Electrical and Electromechanical Objects

Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. By modifying a small, low-cost, software-defined radio, we can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment; our sensor is self-contained and can be worn unobtrusively on the body. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment. Published at UIST 2015.

3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers and Bristles

We introduce a technique for 3D printed hair, fibers and bristles, by exploiting the stringing phenomenon inherent in 3D printers using fused deposition modeling. Our approach offers a range of design parameters for controlling the properties of single strands and also of hair bundles. We further detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We provide several examples of output, demonstrating the immediate feasibility of our approach using a low cost, commodity printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware. Published at UIST 2015.

Zensors: Adaptive, Rapidly Deployable, Human-Intelligent Sensor Feeds

The promise of “smart” homes, workplaces, schools, and other environments has long been championed. Unattractive, however, has been the cost to run wires and install sensors. More critically, raw sensor data tends not to align with the types of questions humans wish to ask, e.g., do I need to restock my pantry? In response, we built Zensors, a new sensing approach that fuses real-time human intelligence from online crowd workers with automatic approaches to provide robust, adaptive, and readily deployable intelligent sensors. With Zensors, users can go from question to live sensor feed in less than 60 seconds. Through our API, Zensors can enable a variety of rich end-user applications and moves us closer to the vision of responsive, intelligent environments. Published at CHI 2015.

Acoustruments: Passive, Acoustically-Driven, Interactive Controls for Handheld Devices

Acoustruments are low-cost, passive, and powerless mechanisms, made from plastic, that can bring rich, tangible functionality to handheld devices. Through a structured exploration, we identified an expansive vocabulary of design primitives, providing building blocks for the construction of tangible interfaces utilizing smartphones’ existing audio functionality. By combining design primitives, familiar physical mechanisms can all be constructed. On top of these, we can create end-user applications with rich, tangible interactive functionalities. Acoustruments adds a new method to the toolbox HCI practitioners and researchers can draw upon, while introducing a cheap and passive method for adding interactive controls to consumer products. Published at CHI 2015.

3D Printing Pneumatic Device Controls with Variable Activation Force Capabilities

We explore 3D printing physical controls whose tactile response can be manipulated programmatically through pneumatic actuation. In particular, by manipulating the internal air pressure of various pneumatic elements, we can create mechanisms that require different levels of actuation force and can also change their shape. We introduce and discuss a series of example 3D printed pneumatic controls, which demonstrate the feasibility of our approach. This includes conventional controls, such as buttons, knobs and sliders, but also extends to domains such as toys and deformable interfaces. We describe the challenges that we faced and the methods that we used to overcome some of the limitations of current 3D printing technology. Published at CHI 2015.

Skin Buttons: Cheap, Small, Low-Power and Clickable Fixed-Icon Laser Projections

Smartwatches are a promising new interactive platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches that expand the interactive envelope around smartwatches, allowing human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user’s skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these “skin buttons” can have high touch accuracy and recognizability, while being low cost and power-efficient. Published at UIST 2014.

Air+Touch: Interweaving Touch & In-Air Gestures

Air+Touch is a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than each input modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, do a finger 'high jump' between two touches to select a region of text, or draw an in-air 'pigtail' to copy text to the clipboard. Published at UIST 2014.

Toffee: Enabling Ad Hoc, Around-Device Interaction with Acoustic Time-of-Arrival Correlation

Toffee is a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. Our approach requires only a hard tabletop and gravity – the latter acoustically couples mobile devices to surfaces. We conducted an evaluation, which shows that Toffee can accurately resolve the bearings of touch events (mean error of 4.3° with a laptop prototype). This enables radial interactions in an area many times larger than a mobile device; for example, virtual buttons that lie above, below and to the left and right. Published at MobileHCI 2014.
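
The bearing estimate rests on time difference of arrival: a tap's vibration reaches one sensor slightly before another, and the lag that maximizes the cross-correlation between the two signals, together with the sensor baseline, gives the angle. The sketch below shows the idea for one sensor pair; the propagation speed and names are assumptions, not values from the paper.

```python
import math

SPEED_IN_SURFACE = 1000.0  # assumed acoustic speed in the tabletop, m/s

def best_lag(sig_a, sig_b, max_lag):
    """Integer sample lag maximizing cross-correlation: how much
    later the tap arrived at sensor B than at sensor A."""
    def corr(lag):
        return sum(sig_a[i] * sig_b[i + lag]
                   for i in range(len(sig_a))
                   if 0 <= i + lag < len(sig_b))
    return max(range(-max_lag, max_lag + 1), key=corr)

def bearing_deg(lag_samples, sample_rate, baseline_m):
    """Tap bearing from the TDOA between two sensors a known
    distance apart (far-field approximation)."""
    tdoa = lag_samples / sample_rate
    # sin(theta) = path difference / baseline, clamped for safety.
    s = max(-1.0, min(1.0, tdoa * SPEED_IN_SURFACE / baseline_m))
    return math.degrees(math.asin(s))
```

With several sensor pairs, the per-pair bearings can be fused into the radial interaction zones described above.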

Around-Body Interaction: Sensing & Interaction Techniques for Proprioception-Enhanced Input with Mobile Devices

The space around the body provides a large interaction volume that can allow for 'big' interactions on 'small' mobile devices. Prior work has primarily focused on distributing information in the space around a user's body. We extend this by demonstrating three new types of around-body interaction: canvas, modal and context-aware. We also present a sensing solution that uses standard smartphone hardware: a phone's front camera, accelerometer and inertial measurement units. Published at MobileHCI 2014.

Implications of Location and Touch for On-Body Projected Interfaces

Research into on-body projected interfaces has primarily focused on the fundamental question of whether or not it was technologically possible. Although considerable work remains, these systems are no longer artifacts of science fiction — prototypes have been successfully demonstrated and tested on hundreds of people. Our aim in this work is to begin shifting the question away from how, and towards where. To better understand and explore this expansive design space, we employed a mixed-methods research process involving more than two thousand individuals. This started with high-resolution, but low-detail crowdsourced data. We then combined this with rich, expert interviews, exploring aspects ranging from aesthetics to kinesthetics. Published at DIS 2014.

Expanding the Input Expressivity of Smartwatches with Mechanical Pan, Twist, Tilt and Click

We propose using the face of a smartwatch as a multi-degree-of-freedom mechanical interface. This enables rich interaction without occluding the screen with fingers, and can operate in concert with touch interaction and physical buttons. We built a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. To illustrate the potential of our approach, we developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices. Published at CHI 2014.

TouchTools: Leveraging Familiarity and Skill with Physical Tools to Augment Touch Interaction

The average person can skillfully manipulate a plethora of tools, from hammers to tweezers. However, despite this remarkable dexterity, gestures on today’s touch devices are simplistic, relying primarily on the chording of fingers: one-finger pan, two-finger pinch, four-finger swipe and similar. We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps. Published at CHI 2014.

Probabilistic Palm Rejection Using Spatiotemporal Touch Features and Iterative Classification

Tablet computers are often called upon to emulate classical pen-and-paper input. However, touchscreens typically lack the means to distinguish legitimate stylus and finger touches from incidental contact by the palm or other parts of the hand. This forces users to rest their palms elsewhere or hover above the screen, resulting in ergonomic and usability problems. We present a probabilistic touch filtering approach that uses the temporal evolution of touch contacts to reject palms. Our system improves upon previous approaches, reducing accidental palm inputs to 0.016 per pen stroke, while correctly passing 98% of stylus inputs. Published at CHI 2014.
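The flavor of the approach – deferring a final decision while evidence accumulates over a contact's lifetime – can be illustrated with a toy voting classifier. The feature (contact radius) and threshold below are made up for illustration; the paper uses richer spatiotemporal features:

```python
def classify_contact(radii_over_time, palm_radius=12.0):
    """Toy iterative classifier: each observation of an evolving touch
    contact casts a vote (large contacts look like palms, small ones
    like a stylus tip). The label is only finalized after the whole
    history is seen, so a bad early guess can be overturned."""
    votes = sum(1 if r > palm_radius else -1 for r in radii_over_time)
    return 'palm' if votes > 0 else 'stylus'
```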

Lumitrack: Low Cost, High Precision and High Speed Tracking with Projected m-Sequences

Lumitrack is a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive subsequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed (500+ FPS), high precision (sub-millimeter), and low-cost motion tracking for a wide range of interactive applications. Published at UIST 2013.
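The defining property of an m-sequence – every window of m consecutive bits is unique, so a sensor that observes any m bits knows exactly where it sits in the pattern – is easy to demonstrate with a small linear-feedback shift register. The 4-bit register below is a standard textbook example, not Lumitrack's actual pattern:

```python
def lfsr_msequence():
    """One period (15 bits) of an m-sequence generated by a 4-bit
    Fibonacci LFSR with feedback polynomial x^4 + x^3 + 1."""
    state = [1, 1, 1, 1]
    out = []
    for _ in range(15):
        out.append(state[-1])
        feedback = state[3] ^ state[2]   # taps at bit positions 4 and 3
        state = [feedback] + state[:-1]
    return out

def all_windows_unique(seq, m):
    """Check that every cyclic window of m bits occurs exactly once."""
    n = len(seq)
    windows = {tuple(seq[(i + j) % n] for j in range(m)) for i in range(n)}
    return len(windows) == n
```

For this sequence, all 15 cyclic windows of 4 bits are distinct, which is exactly what lets a linear sensor recover its position from a short glimpse of the projected pattern.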

WorldKit: Ad Hoc Interactive Applications on Everyday Surfaces

WorldKit makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sit down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Published at CHI 2013.

Zoomboard: A Diminutive QWERTY Keyboard for Ultra-Small Devices

The proliferation of touchscreen devices has made soft keyboards a routine part of life. However, ultra-small computing platforms like the Sony SmartWatch and Apple iPod Nano lack a means of text entry. This limits their potential, despite the fact they are capable computers. We present a soft keyboard interaction technique called ZoomBoard that enables text entry on ultra-small devices. Our approach uses iterative zooming to enlarge otherwise impossibly tiny keys to a comfortable size. We ran a text entry experiment on a keyboard measuring just 16 x 6 mm – smaller than a US penny. Users achieved roughly 10 words per minute, allowing them to enter short phrases both quickly and quietly. Published at CHI 2013.

Capacitive Fingerprinting: User Differentiation Through Capacitive Sensing

At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing that enables touchscreens to attribute touch events to a particular user. This is achieved by measuring the impedance of a user to the environment (earth ground) across a range of AC frequencies. Natural variation in bone density, muscle mass and other biological factors, as well as clothing, impact a user's impedance profile. This is often sufficiently unique to enable user differentiation. This project has significant implications in the design of touch-centric games and collaborative applications. Published at UIST 2012.

Acoustic Barcodes: Passive, Durable and Inexpensive Notched Identification Tags

We present Acoustic Barcodes, structured patterns of physical notches that, when swiped with e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic Barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. Published at UIST 2012.
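One way to get velocity invariance is to decode from ratios between gaps rather than absolute timings – a toy sketch of the idea, not the published decoder:

```python
def decode_gaps(impulse_times):
    """Turn the impulse train produced by swiping a notched pattern
    into bits: each inter-notch gap is compared to the mean gap.
    Because scaling every timestamp by the same factor scales the
    mean too, the decoded bits do not depend on swipe speed."""
    gaps = [b - a for a, b in zip(impulse_times, impulse_times[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return [1 if g > mean_gap else 0 for g in gaps]
```

The same notch pattern swiped twice as fast produces timestamps compressed by half, yet decodes to the same bits.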

Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects

Touché proposes a novel form of capacitive touch sensing that we call Swept Frequency Capacitive Sensing (SFCS). This technology can infuse rich touch and gesture sensitivity into a variety of analogue and digital objects. For example, Touché can not only detect touch events, but also recognize complex configurations of the hands and body. Such contextual information can enhance a broad range of applications, from conventional touchscreens to unique contexts and materials, including the human body and liquids. Finally, Touché is inexpensive, safe, low power and compact; it can be easily embedded or temporarily attached anywhere touch and gesture sensitivity is desired. Published at CHI 2012.

Using Shear as a Supplemental Input Channel for Rich Touchscreen Interaction

Touch input is constrained, typically only providing finger X/Y coordinates. We suggest augmenting touchscreens with a largely unutilized input dimension: shear (force tangential to a screen’s surface). Similar to pressure, shear can be used in concert with conventional finger positional input. However, unlike pressure, shear provides a rich, analog 2D input space, which has many powerful uses. We put forward five classes of advanced interaction that considerably expands the envelope of interaction possible on touchscreens. Published at CHI 2012.

Unlocking the Expressivity of Point Lights

Since the advent of the electronic age, devices have incorporated small point lights for communication purposes. This has afforded devices a simple, but reliable communication channel without the complication or expense of e.g., a screen. For example, a simple light can let a user know their stove is on, a car door is ajar, the alarm system is active, or that a battery has finished charging. Unfortunately, very few products seem to take full advantage of the expressive capability simple lights can provide. The most commonly encountered light behaviors are quite simple: light on, light off, and light blinking. Not only is this vocabulary incredibly small, but these behaviors are not particularly iconic. Published at CHI 2012.

Phone as a Pixel: Enabling Ad-Hoc, Large-Scale Displays Using Mobile Devices

Phone as a Pixel is a scalable, synchronization-free, platform-independent system for creating large, ad-hoc displays from a collection of smaller devices, such as smartphones. To participate, devices need only a web browser. We employ a color-transition scheme to identify and locate displays. This approach has several advantages: devices can be arbitrarily arranged (i.e., not in a grid) and infrastructure consists of a single conventional camera. Further, additional devices can join at any time without re-calibration. These are desirable properties to enable collective displays in contexts like sporting events, concerts and political rallies. Published at CHI 2012.
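The identification idea can be sketched as a tiny codec: each device blinks through a color sequence encoding its ID, and a camera watching the crowd decodes it. This is a toy base-3 scheme with a hypothetical three-color palette; the actual system uses a more robust color-transition code:

```python
COLORS = ['red', 'green', 'blue']

def encode_id(device_id, length=6):
    """Encode a device ID (< 3**length) as the sequence of colors the
    phone's screen cycles through, least-significant digit first."""
    seq = []
    for _ in range(length):
        seq.append(COLORS[device_id % 3])
        device_id //= 3
    return seq

def decode_id(color_seq):
    """Recover the ID from the observed color sequence."""
    return sum(COLORS.index(c) * 3 ** i for i, c in enumerate(color_seq))
```

Because each device transmits independently, no synchronization between devices is needed – the camera simply tracks each blinking region over time.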

Ultrasonic Doppler Sensing in HCI

Researchers and practitioners can now draw upon a large suite of sensing technologies for their work. Relying on thermal, chemical, electromagnetic, optical, acoustic, mechanical, and other means, these sensors can detect faces, hand gestures, humidity, blood pressure, proximity, and many other aspects of our state and environment. We present an overview of our work on an ultrasonic Doppler sensor. This technique has unique qualities that we believe make it a valuable addition to the suite of sensing approaches HCI researchers and practitioners should consider in their applications. Published in IEEE Pervasive Computing.

On-Body Interaction: Armed and Dangerous

We consider how the arms and hands can be used to enhance on-body interactions, which are typically finger-input centric. To explore this opportunity, we developed Armura, a novel interactive on-body system, supporting both input and graphical output. Using this platform as a vehicle for exploration, we prototyped a series of applications and interactions. This helped to confirm chief use modalities, identify fruitful interaction approaches, and in general, better understand how interfaces operate on the body. This paper is the first to consider and prototype how conventional interaction issues, such as cursor control and clutching, apply to the on-body domain. Additionally, we bring to light several new and unique interaction techniques. Published at TEI 2012.

TapSense: Enhancing Finger Interaction on Touch Surfaces

TapSense is an enhancement to touch interaction that allows conventional screens to identify how the finger is being used for input. Our system can recognize different finger locations – including the tip, pad, nail and knuckle – without the user having to wear any electronics. This opens several new and powerful interaction opportunities for touch input, especially in mobile devices, where input bandwidth is limited due to small screens and fat fingers. For example, a knuckle tap could serve as a “right click” for mobile device touch interaction, effectively doubling input bandwidth. Published at UIST 2011.

OmniTouch: Wearable Multitouch Interaction Everywhere

OmniTouch is a novel wearable system that enables graphical, interactive, multitouch input on arbitrary, everyday surfaces. Our body-worn implementation allows users to manipulate interfaces projected onto the environment (e.g., walls, tables), held objects (e.g., notepads, books), and their own bodies (e.g., hands, lap). A key contribution is our depth-driven fuzzy template matching and clustering approach to multitouch finger tracking. This enables on-the-go interactive capabilities, with no calibration, training or instrumentation of the environment or the user, creating an always-available projected multitouch interface. Published at UIST 2011.

PocketTouch: Through-Fabric Capacitive Touch Input

Modern mobile devices are sophisticated computing platforms, enabling people to handle phone calls, listen to music, surf the web, reply to emails, compose text messages, and much more. These devices are often stored in pockets or bags, requiring users to remove them in order to access even basic functionality. This demands a high level of attention - both cognitively and visually - and is often socially disruptive. Further, physically retrieving the device incurs a non-trivial time cost, and can constitute a significant fraction of a simple operation’s total time. We developed a novel method for through-pocket interaction called PocketTouch. Published at UIST 2011.

A New Angle on Cheap LCDs: Making Positive Use of Optical Distortion

When viewing LCD monitors from an oblique angle, it is not uncommon to witness a dramatic color shift. Engineers and designers have sought to reduce these effects for more than two decades. We take an opposite stance, embracing these optical peculiarities, and consider how they can be used in productive ways. Our paper discusses how a special palette of colors can yield visual elements that are invisible when viewed straight-on, but visible at oblique angles. In essence, this allows conventional, unmodified LCD screens to output two images simultaneously – a feature normally only available in far more complex setups. Published at UIST 2011.

Kineticons: Using Iconographic Motion in Graphical User Interface Design

In this paper, we define a new type of iconographic scheme for graphical user interfaces based on motion. We refer to these “kinetic icons” as kineticons. In contrast to static graphical icons and icons with animated graphics, kineticons do not alter the visual content of a graphical element. Although kineticons are not new – indeed, they are seen in several popular systems – we formalize their scope and utility. One powerful quality is their ability to be applied to GUI elements of varying size and shape – from something as small as a close button to something as large as a dialog box or even the entire desktop. This allows a suite of system-wide kinetic behaviors to be reused across many applications. Published at CHI 2011.

SurfaceMouse: Supplementing Multi-Touch Interaction with a Virtual Mouse

SurfaceMouse is a virtual mouse implementation for multi-touch surfaces. A key design objective was to leverage as much pre-existing knowledge (and potentially muscle memory) regarding mice as possible, making interactions immediately familiar. To invoke SurfaceMouse, a user simply places their hand on an interactive surface as if there was a mouse present. The system recognizes this characteristic gesture and renders a virtual mouse under the hand, which can be used like a real mouse. In addition to two-dimensional movement (X and Y axes), our proof-of-concept implementation supports left and right clicking, as well as up/down scrolling. Published at TEI 2011.

Pediluma: Motivating Physical Activity Through Contextual Information and Social Influence

We developed Pediluma, a shoe accessory designed to encourage opportunistic physical activity. It features a light that brightens the more the wearer walks and slowly dims when the wearer remains stationary. This interaction was purposely simple so as to remain lightweight, both visually and cognitively. Even simple, personal pedometers have been shown to promote walking. Pediluma takes this a step further, attempting to engage people around the wearer to elicit social effects. Although lights have been previously incorporated into shoes, this is the first time they have been used to display motivational information. Published at TEI 2011.

TeslaTouch: Electrovibration for Touch Surfaces

TeslaTouch infuses finger-driven interfaces with physical feedback. The technology is based on the electrovibration principle, which can programmatically vary the electrostatic friction between fingers and a touch panel. Importantly, there are no moving parts, unlike most tactile feedback technologies, which typically use mechanical actuators. This allows for different fingers to feel different sensations. When combined with an interactive graphical display, TeslaTouch enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. Published at UIST 2010.

Appropriated Interaction Surfaces

Devices with significant computational power and capability can now be easily carried with us. These devices have tremendous potential to bring the power of information, creation, and communication to a wider audience and to more aspects of our lives. However, with this potential comes new challenges for interaction design. For example, we have yet to figure out a good way to miniaturize devices without simultaneously shrinking their interactive surface area. This has led to diminutive screens, cramped keyboards, and tiny jog wheels – all of which diminish usability and prevent us from realizing the full potential of mobile computing. Published in IEEE Computer Magazine.

Skinput: Appropriating the Body as an Input Surface

Skinput is a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as a finger input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always-available, naturally-portable, and on-body interactive surface. To illustrate the potential of our approach, we developed several proof-of-concept applications on top of our sensing and classification system. Published at CHI 2010.

Minput: Enabling Interaction on Small Mobile Devices with High-Precision, Low-Cost, Multipoint Optical Tracking

Minput is a sensing and input method that enables intuitive and accurate interaction on very small devices – ones too small for practical touch screen use and with limited space to accommodate physical buttons. We achieve this by adding two, inexpensive and high-precision optical sensors (like those found in optical mice) to the underside of the device. This allows the entire device to be used as an input mechanism, instead of the screen, avoiding occlusion by fingers. In addition to x/y translation, our system also captures twisting motion, enabling many interesting interaction opportunities typically found in larger and far more complex systems. Published at CHI 2010.

Faster Progress Bars: Manipulating Perceived Duration with Visual Augmentations

Human perception of time is fluid, and can be manipulated in purposeful and productive ways. We propose and evaluate variations on two visual designs for progress bars that alter users’ perception of time passing, and “appear” faster when in fact they are not. In a series of direct comparison tests, we are able to rank how different augmentations compare to one another. We then show that these designs yield statistically significantly shorter perceived durations than conventional progress bars. Progress bars with animated ribbing that move backwards in a decelerating manner proved to have the strongest effect. We measure the effect of this particular progress bar design and show it reduces the perceived duration by 11%. Published at CHI 2010.

Cord Input: An Intuitive, High-Accuracy, Multi-Degree-of-Freedom Input Method for Mobile Devices

A cord, although simple in form, has many interesting physical affordances that make it powerful as an input device. Not only can a length of cord be grasped in different locations, but also pulled, twisted and bent — four distinct and expressive dimensions that could potentially act in concert. Such an input mechanism could be readily integrated into headphones, backpacks, and clothing. Once grasped in the hand, a cord can be used in an eyes-free manner to control mobile devices, which often feature small screens and cramped buttons. We built a proof-of-concept cord-based sensor, which senses three of the four input dimensions we propose. Published at CHI 2010.

Evaluation of Progressive Image Loading Schemes

Although network bandwidth has increased dramatically, high-resolution images often take several seconds to load, and considerably longer on mobile devices over wireless connections. Progressive image loading techniques allow for some visual content to be displayed prior to the whole file being downloaded. In this note, we present an empirical evaluation of popular progressive image loading methods, and derive one novel technique from our findings. Results suggest a spiral variation of bilinear interlacing can yield an improvement in content recognition time. Published at CHI 2010.

Achieving Ubiquity: The New Third Wave

Mark Weiser envisioned a third wave of computing, one with “hundreds of wireless computers in every office,” which would come about as the cost of electronics fell. Two decades later, some in the Ubiquitous Computing community point to the pervasiveness of microprocessors as a realization of this dream. Without a doubt, many of the objects we interact with on a daily basis are digitally augmented – they contain microchips, buttons and even screens. But is this the one-to-many relationship of people-to-computers that Weiser envisioned? Published in IEEE Multimedia.

Whack Gestures: Inexact and Inattentive Interaction with Mobile Devices

Whack Gestures seeks to provide a simple means to interact with devices with minimal attention from the user – in particular, without the use of fine motor skills or detailed visual attention (requirements found in nearly all conventional interaction techniques). For mobile devices, this could enable interaction without “getting it out,” grasping, or even glancing at the device. Users can simply interact with a device by striking it with an open palm or the heel of the hand. Published at TEI 2010.

Abracadabra: Wireless, High-Precision, and Unpowered Finger Input for Very Small Mobile Devices

We developed a magnetically-driven input approach that makes use of the (larger) space around a (very small) device. Our technique provides robust, inexpensive, and wireless input from fingers, without requiring powered external components. By extending the input area to many times the size of the device’s screen, our approach is able to offer a high control-display (C-D) gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. Published at UIST 2009.

Stacks on the Surface: Resolving Physical Order with Masked Fiducial Markers

We present a method for identifying the order of stacked items on interactive surfaces. This is achieved using conventional, passive fiducial markers, which in addition to reflective regions, also incorporate structured areas of transparency. This allows particular orderings to appear as unique marker patterns. We discuss how such markers are encoded and fabricated, and include relevant mathematics. To motivate our approach, we comment on various scenarios where stacking could be especially useful. We conclude with details from our proof-of-concept implementation, built on Microsoft Surface. Published at ITS 2009.

Providing Dynamically Changeable Physical Buttons on a Visual Display

Physical buttons have the unique ability to provide low-attention and vision-free interactions through their intuitive tactile cues. Unfortunately, the physicality of these interfaces makes them static, limiting the number and types of user interfaces they can support. On the other hand, touch screen technologies provide the ultimate interface flexibility, but offer no inherent tactile qualities. In this paper, we describe a technique that seeks to occupy the space between these two extremes – offering some of the flexibility of touch screens, while retaining the beneficial tactile properties of physical interfaces. Published at CHI 2009.

Texture Displays: A Passive Approach to Tactile Presentation

Despite touch being a rich sensory channel, tactile output is almost exclusively vibrotactile in nature. We explore a different type of tactile display, one that can assume several different textural states (e.g., sticky, bumpy, smooth, coarse, gritty). In contrast to conventional vibrotactile approaches, these displays provide information passively. Only when they are explicitly handled by the user, either with intent to inquire about the information, or in the course of some other action, can state be sensed. This inherently reduces their attention demand and intrusiveness. We call this class of devices texture displays. Published at CHI 2009.

Where to Locate Wearable Displays? Reaction Time Performance of Visual Alerts from Tip to Toe

On-body interfaces will need to notify users about new emails, upcoming meetings, changes in weather forecast, stock market fluctuations, excessive caloric intake, and other aspects of our lives that worn computers will be able to monitor directly or have access to. One way to draw a wearer’s attention is through visual notifications. However, there has been little research into their optimal body placement, despite being an unobtrusive, lightweight, and low-powered information delivery mechanism. Furthermore, visual stimuli have the added benefit of being able to work alone or in concert with conventional methods, including auditory and vibrotactile alerts. Published at CHI 2009.

Pseudo-3D Video Conferencing with a Generic Webcam

When conversing with someone via video conference, you are provided with a virtual window into their space. However, this currently remains both flat and fixed, limiting its immersiveness. We present a method for producing a pseudo-3D experience using only a single generic webcam at each end. This means nearly any computer currently able to video conference can use our technique, making it readily adoptable. Although using comparatively simple techniques, the 3D result is convincing. Published at ISM 2008.

Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces

We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Published at UIST 2008.

Lightweight Material Detection for Placement-Aware Mobile Computing

Numerous methods have been proposed that allow mobile devices to determine where they are located (e.g., home or office) and in some cases, predict what activity the user is currently engaged in (e.g., walking, sitting, or driving). While useful, this sensing currently only tells part of a much richer story. To allow devices to act most appropriately to the situation they are in, it would also be very helpful to know about their placement – for example whether they are sitting on a desk, hidden in a drawer, placed in a pocket, or held in one’s hand – as different device behaviors may be called for in each of these situations. Published at UIST 2008.

3D Head Tracking With a Generic Webcam

Using OpenCV's Haar cascade face detection functionality, I was able to throw together a real time head tracking program. I coupled this with several 3D demos, including a Johnny Lee-style targets demo. Performance is surprisingly good. On my 2.16 GHz MacBook I am tracking at 25 frames per second - sufficient for real time interaction with no obtrusive reaction delays. Additionally, I've optimized the code so that only 50% of a single processor core is consumed (and that's on my relatively slow laptop).
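The math that turns a 2D face detection into a 3D head position is simple pinhole-camera geometry – a sketch with assumed calibration values (focal length in pixels and an average real face width):

```python
def head_position(cx, cy, face_w_px, frame_w=640, frame_h=480,
                  focal_px=600.0, face_width_m=0.16):
    """Convert a face bounding box (centre cx, cy and pixel width
    face_w_px, e.g. from OpenCV's Haar cascade) into an approximate
    head position in metres. Depth follows from apparent size: a face
    of known real width that spans fewer pixels is farther away."""
    z = focal_px * face_width_m / face_w_px        # metres from camera
    x = (cx - frame_w / 2) * z / focal_px          # right of optical axis
    y = (cy - frame_h / 2) * z / focal_px          # below optical axis
    return x, y, z
```

Feeding this position into the virtual camera of a 3D scene each frame is what produces the parallax effect seen in the targets demo.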

Lean and Zoom: Proximity-Aware User Interface and Content Magnification

The size and resolution of computer displays has increased dramatically, allowing more information than ever to be rendered on-screen. However, items can now be so small or screens so cluttered that users need to lean forward to properly examine them. This behavior may be detrimental to a user’s posture and eyesight. Our Lean and Zoom system detects a user’s proximity to the display using a camera and magnifies the on-screen content proportionally. Results from a user study indicate people find the technique natural and intuitive. Most participants found on-screen content easier to read, and believed the technique would improve both their performance and comfort. Published at CHI 2008.

Rethinking the Progress Bar

Numerous factors cause progress bars to proceed at non-linear rates (e.g. pauses, acceleration, deceleration). Additionally, humans perceive time in a non-linear way. The combination of these effects produces a highly variable perception of how long it takes progress bars to complete. Results from a user study indicate there are several factors that can be manipulated to make progress bars appear faster, when in fact they are not, or potentially even slower. Published at UIST 2007.

Masters Projects

CollaboraTV: Asynchronous Social Television

Television was once championed as the “electronic hearth” which would bring people together. Indeed, television shows provide a common experience, often affording even total strangers a social connection on which to initiate conversation. However, a fundamental shift in how we consume media is degrading such social interactions significantly – an increasing number of people are no longer watching television shows as they are broadcast. Instead, users are favoring non-live media sources, such as Digital Video Recorders, Video-On-Demand services, and even rented physical media (e.g. Netflix). Published at UXTV 2008.

iEPG: An Ego-Centric Electronic Program Guide and Recommendation Interface

Conventional program guides present television shows in a list view, with metadata displayed in a separate window. However, this linear presentation style prevents users from fully exploring and utilizing the diverse, descriptive, and highly connected data associated with television programming. Additionally, despite the fact that program guides are the primary selection interface for television shows, few include integrated recommendation data to help users decide what to watch. iEPG presents a novel interface concept for navigating the multidimensional information space associated with television programming, as well as an effective visualization for displaying complex ratings data. Published at UXTV 2008.

Kronosphere: Temporal File Navigation

Hierarchical file systems mirror the way people organize data in the physical world. However, this method of organization is often inadequate in managing the immense number of files that populate users’ hard drives. Systems that employ a time-centric or content-driven approach have proven to be compelling alternatives. Kronosphere offers a powerful time and content-based navigation and management paradigm that has been specifically designed to leverage universal cognitive abilities.

The Sling in Medieval Europe

The simple sling is often neglected when reviewing the long history of ranged warfare. Scholars typically focus on the simple thrown spear, atlatl, throwing axe, bow, and crossbow. However, in experienced hands, the sling was arguably the most effective personal projectile weapon until the 15th century, surpassing the accuracy and deadliness of the bow and even of early firearms. Published in the Bulletin of Primitive Technology.

Image Clustering

Using the RCBIS package developed by Stacey Kuznetsov and myself, Jeff Borden put together an image clustering package. The application was created using the Prefuse information visualization toolkit and a K-means clustering method. The latter used a distance metric computed by the RCBIS engine when given two images.
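A minimal version of the clustering step, assuming images have already been reduced to feature vectors (the real system used pairwise distances from the RCBIS engine rather than raw Euclidean distance):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each feature vector to its nearest
    centre, then move each centre to the mean of its members."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[nearest].append(p)
        # Recompute centres; keep the old centre if a group empties
        centers = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g
                   else centers[j] for j, g in enumerate(groups)]
    return groups
```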

Search by Sketch

After several months of intense development alongside Stacey Kuznetsov, the Rapid Content-Based Image Search (RCBIS) engine was ready for use in user-oriented programs. The most obvious and appealing project was to develop an image search engine that took not text, but a sketch, as input.

Speech Enhanced Commenting

Ineffective and insufficient commenting of code is a major problem in software development. Coders find commenting frustrating and time consuming. Often, programmers will write a section of code and come back later to comment, potentially reducing the accuracy of the information. Speech Enhanced Commenting is a system that takes advantage of an underused human output channel: voice. This system allows users to speak their comments as they program, which is natural and can happen in parallel with typing the code.

Visual Hash

Checking passwords often requires a complex algorithm, but computation time is generally small. For users on their local machines, it's insignificant. However, if the password needs to be checked by a remote login server, things become more expensive. Secure connections have to be negotiated and databases have to be queried. Throw in a few thousand users and you are looking at a serious load problem. Visual hashes can reduce the burden on login servers by eliminating some unnecessary login attempts.
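One way to realize the idea, sketched below under my own assumptions rather than as the project's actual implementation: deterministically map the typed password to a small grid of colors on the client. The user learns their pattern, so a typo produces a visibly different pattern and the mistyped password never reaches the server:

```python
import hashlib

def visual_hash(password, cells=9):
    """Map a password to a deterministic list of RGB colors (e.g. a 3x3
    grid). A typo changes the digest, and hence the displayed pattern,
    letting the user catch the mistake before contacting the server."""
    digest = hashlib.sha256(password.encode("utf-8")).digest()
    colors = []
    for i in range(cells):
        r = digest[(3 * i) % 32]
        g = digest[(3 * i + 1) % 32]
        b = digest[(3 * i + 2) % 32]
        colors.append((r, g, b))
    return colors
```

Note the hash is computed locally and never transmitted, so it adds no information an attacker could intercept beyond what a shoulder-surfer could already see.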

Aphrodisias Regional Survey

During the 2006 and 2007 summers, I spent eleven weeks in Turkey assisting an archaeological expedition at a Greco-Roman city named Aphrodisias. A small Geographical Information Systems (GIS) team worked on producing maps to assist the project, both in surveying the region and in collecting artifacts. Creating site-level contour maps was a major effort in the first season. I primarily worked on developing methods for predicting the location of ancient sites, such as iron mines and forts, and the routes of aqueducts and ancient roads.

'A' PPC Compiler

Compilers are quite possibly the programmer's most frequently used tool, and yet they are often overlooked. Over the course of two months, I coded up a compiler in C++ for a medium-sized language dubbed 'A'. The compiler supports all the goodies, including arrays, pointers, function calls, recursion, type checking, static scoping, and more. The output: executable PPC assembly.

Gesture Based Process Scheduling

People place different levels of importance on applications they use. This importance is often transient. For example, when the user is in a rush and wants to quickly check their email before they leave, the email application should not have to wait on other applications. However, other times, the user might accept a longer launch time because they also value background processes, like downloading a file or transferring pictures from their digital camera. Anticipating user needs is practically impossible. However, users provide a wealth of feedback in the form of gestures.
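The mapping from gestures to scheduling decisions can be sketched very simply. The gesture names and boost values below are hypothetical, chosen only to illustrate how a stream of observed user actions could be folded into a clamped process priority:

```python
# Hypothetical mapping from observed user gestures to priority boosts
# (higher value = larger CPU share for the associated application).
GESTURE_BOOST = {
    "window_focused": 5,
    "rapid_clicking": 3,
    "mouse_hover": 1,
    "window_minimized": -5,
}

def reprioritize(base_priority, gestures, lo=0, hi=20):
    """Fold a stream of gesture events into a clamped priority value."""
    priority = base_priority
    for g in gestures:
        priority += GESTURE_BOOST.get(g, 0)
    return max(lo, min(hi, priority))
```

A real scheduler would decay these boosts over time, since the importance users signal through gestures is, as noted above, transient.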

Enki

Over the 2005 summer, I had the pleasure of working at IBM's Almaden Research Center in San Jose, California, under the leadership of John Barton and Stephen Farrell. The development team consisted of Chris Parker, Meng Mao and myself. We worked on a unique application which blended aspects of personal information management, social networking, and content management. The project was dubbed Enki, after the Sumerian God of Knowledge.

Undergraduate Projects

A Comparative Analysis of Archaic Peoples and Early Dutch Settlers Living in The New York Region

In 1624, the Dutch established their first outpost about 150 miles north of Manhattan, up the Hudson River, called Fort Orange. It was located near the modern city of Albany, NY. Interestingly, the environment was similar to that of prehistoric Manhattan. The plant life was similar, with spruce, fir, birch, and poplar dominating the area around Fort Orange. Because archaic groups and a fledgling European outpost relied so heavily on the local environment to survive, there are many commonalities in how they lived. Using this similar environmental context, one can explore how each group, with their different technologies and social traditions, adapted and survived.

OS Bakeoff

This page is dedicated to describing my work for the OS Bakeoff, which was the finale to a semester-long advanced operating systems class. Students started by building their own shell, and later added support for booting the OS (on x86 hardware), paging (protected memory management), multiple program environments, scheduling, and preemptive multitasking, to name a few.

An Investigation of Graham’s Scan and Jarvis’ March

With the advent of computers, which could perform millions of mathematical calculations per second, new topics and problems in geometry began to emerge and the field of computational geometry was born. As the subject evolved, a number of new applications became apparent, ranging from computer vision and geographical data analysis, to collision detection for robotics and molecular biology. This paper discusses perhaps one of the oldest and most celebrated computational geometry problems: the convex hull.
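For readers unfamiliar with the problem, a compact hull routine makes it concrete. The sketch below uses Andrew's monotone chain, a Graham-scan variant with the same sort-then-scan structure and O(n log n) running time; it is illustrative, not code from the paper:

```python
def convex_hull(points):
    """Andrew's monotone chain: sort the points, then build the lower
    and upper hulls with a Graham-style scan. Returns the hull vertices
    in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()  # last point made a right turn; discard it
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each chain's last point is the other chain's first; drop duplicates.
    return lower[:-1] + upper[:-1]
```

Jarvis' march, by contrast, wraps around the point set one hull edge at a time, giving O(nh) time for h hull vertices, which wins when the hull is small.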

Multithreaded HTTP Server

A multithreaded server capable of pipelining HTTP requests from multiple concurrent clients. Features an LRU cache to store frequently used files in main memory and supports a full complement of MIME types via an external config file. All in about 500 lines of Java code...
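The original server was written in Java, but the eviction policy at its heart is easy to sketch in a few lines of Python; this is a generic LRU illustration, not the project's code:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least-recently-used entry,
    the policy a file-serving HTTP server can use to keep hot files
    in main memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self._store:
            return None  # cache miss; caller reads from disk
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the LRU entry
```

In the server, keys would be request paths and values the cached file bytes.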

High School Projects

Temperature Regulated Charge Algorithms

This page is dedicated to research I conducted in my high school's science research class. I presented the work at several science competitions and ultimately filed a patent for the technology during my freshman year in university. This page is very out of date.

BaLiS (Bacterium Life Simulator)

BaLiS was my first attempt at designing an AI. It uses a priority matrix that it builds as a virtual bacterium explores its environment. Different items are encountered, such as food. The bacterium exhibits certain behaviors based on what it finds, and adjusts its memory accordingly.
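The original implementation is long gone, but the priority-matrix idea can be sketched as a tiny reinforcement loop; the class below is a hypothetical illustration of the concept, not the BaLiS code:

```python
class Bacterium:
    """Toy priority-matrix agent: the learned preference for each item
    type moves toward the reward received when it is encountered."""

    def __init__(self, learning_rate=0.1):
        self.priorities = {}  # item type -> learned preference
        self.learning_rate = learning_rate

    def encounter(self, item, reward):
        """Nudge the stored priority for `item` toward the reward signal."""
        old = self.priorities.get(item, 0.0)
        self.priorities[item] = old + self.learning_rate * (reward - old)

    def choose(self, options):
        """Behave toward the option with the highest learned priority."""
        return max(options, key=lambda o: self.priorities.get(o, 0.0))
```

After a few positive encounters with food and negative ones with a toxin, the agent reliably steers toward food.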

Acknowledgments

I owe my academic success to many influential individuals who steered me towards science and engineering at an early age, and in later life, towards Human-Computer Interaction. I am particularly grateful to my parents, as well as Gene and Karen Brewer, who put up with my endless questions and fed my curiosity. As a teenager, Ernest Vesce and John DuBuque gave me the benefit of the doubt and real jobs. The professional skills I picked up under their guidance greatly influenced my research style. At the same time, Nancy Williams and Wilfredo Chaluisant opened my eyes to research and gave me the confidence to pursue it. During my time at New York University, Dennis Shasha and Brian Amento showed me how to unite my interests in design and computer science, and convinced me to get a Ph.D. in Human-Computer Interaction. I also thank my Ph.D. advisor, Scott E. Hudson, for his six years of support, mentorship and friendship. From day one, he gave me the freedom – both intellectually and financially – to pursue research topics that I was passionate about. I must also acknowledge two unofficial advisors who had a significant impact on my thought process, research trajectory and skill set: Ivan Poupyrev and Desney Tan. Thank you! Many organizations helped to fund my graduate education, for which I am humbled and grateful. These include The National Science Foundation, Microsoft Research, Google, Qualcomm, and Disney Research.

© Chris Harrison