Latest Research

Skin Buttons: Cheap, Small, Low-Powered and Clickable Fixed-Icon Laser Projectors

Smartwatches are a promising new interactive platform, but their small size makes even basic actions cumbersome. Hence, there is a great need for approaches that expand the interactive envelope around smartwatches, allowing human input to escape the small physical confines of the device. We propose using tiny projectors integrated into the smartwatch to render icons on the user's skin. These icons can be made touch sensitive, significantly expanding the interactive region without increasing device size. Through a series of experiments, we show that these "skin buttons" can have high touch accuracy and recognizability, while being low cost and power-efficient. Published at UIST 2014.

Air+Touch: Interweaving Touch & In-Air Gestures

Air+Touch is a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with expressiveness greater than either modality alone. We demonstrate how air and touch are highly complementary: touch is used to designate targets and segment in-air gestures, while in-air gestures add expressivity to touch events. For example, a user can draw a circle in the air and tap to trigger a context menu, perform a finger 'high jump' between two touches to select a region of text, or drag and draw an in-air 'pigtail' to copy text to the clipboard. Published at UIST 2014.

Toffee: Enabling Ad Hoc, Around-Device Interaction with Acoustic Time-of-Arrival Correlation

Toffee is a sensing approach that extends touch interaction beyond the small confines of a mobile device and onto ad hoc adjacent surfaces, most notably tabletops. This is achieved using a novel application of acoustic time differences of arrival (TDOA) correlation. Our approach requires only a hard tabletop and gravity – the latter acoustically couples mobile devices to surfaces. We conducted an evaluation, which shows that Toffee can accurately resolve the bearings of touch events (mean error of 4.3° with a laptop prototype). This enables radial interactions in an area many times larger than a mobile device; for example, virtual buttons that lie above, below and to the left and right. Published at MobileHCI 2014.
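To make the geometry concrete, here is a minimal Python sketch of far-field bearing estimation from a single pair of contact microphones, with the time difference recovered from a cross-correlation peak. The sample rate, sensor spacing and propagation speed below are illustrative assumptions; the actual prototype uses four sensors and a more robust pipeline.

    import numpy as np

    FS = 96000       # sample rate in Hz (assumed)
    SPACING = 0.25   # sensor separation in meters (assumed)
    SPEED = 1500.0   # sound speed in the tabletop material (assumed)

    def bearing_deg(sig_a, sig_b):
        """Bearing of a tap from the time difference of arrival at two sensors."""
        corr = np.correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)   # samples by which a lags b
        dt = lag / FS
        # Far-field model: dt = (SPACING / SPEED) * sin(theta)
        s = np.clip(SPEED * dt / SPACING, -1.0, 1.0)
        return float(np.degrees(np.arcsin(s)))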

Around-Body Interaction: Sensing & Interaction Techniques for Proprioception-Enhanced Input with Mobile Devices

The space around the body provides a large interaction volume that can allow for 'big' interactions on 'small' mobile devices. Prior work has primarily focused on distributing information in the space around a user's body. We extend this by demonstrating three new types of around-body interaction: canvas, modal and context-aware. We also present a sensing solution that uses standard smartphone hardware: a phone's front camera, accelerometer and inertial measurement units. Published at MobileHCI 2014.

Implications of Location and Touch for On-Body Projected Interfaces

Research into on-body projected interfaces has primarily focused on the fundamental question of whether or not it was technologically possible. Although considerable work remains, these systems are no longer artifacts of science fiction — prototypes have been successfully demonstrated and tested on hundreds of people. Our aim in this work is to begin shifting the question away from how, and towards where. To better understand and explore this expansive design space, we employed a mixed-methods research process involving more than two thousand individuals. This started with large-scale, but low-detail, crowdsourced data. We then combined this with rich expert interviews, exploring aspects ranging from aesthetics to kinesthetics. Published at DIS 2014.

Expanding the Input Expressivity of Smartwatches with Mechanical Pan, Twist, Tilt and Click

We propose using the face of a smartwatch as a multi-degree-of-freedom mechanical interface. This enables rich interaction without occluding the screen with fingers, and can operate in concert with touch interaction and physical buttons. We built a proof-of-concept smartwatch that supports continuous 2D panning and twist, as well as binary tilt and click. To illustrate the potential of our approach, we developed a series of example applications, many of which are cumbersome – or even impossible – on today’s smartwatch devices. Published at CHI 2014.

TouchTools: Leveraging Familiarity and Skill with Physical Tools to Augment Touch Interaction

The average person can skillfully manipulate a plethora of tools, from hammers to tweezers. However, despite this remarkable dexterity, gestures on today’s touch devices are simplistic, relying primarily on the chording of fingers: one-finger pan, two-finger pinch, four-finger swipe and similar. We propose that touch gesture design be inspired by the manipulation of physical tools from the real world. In this way, we can leverage user familiarity and fluency with such tools to build a rich set of gestures for touch interaction. With only a few minutes of training on a proof-of-concept system, users were able to summon a variety of virtual tools by replicating their corresponding real-world grasps. Published at CHI 2014.

Probabilistic Palm Rejection Using Spatiotemporal Touch Features and Iterative Classification

Tablet computers are often called upon to emulate classical pen-and-paper input. However, touchscreens typically lack the means to distinguish legitimate stylus and finger touches from accidental contacts made by the palm or other parts of the hand. This forces users to rest their palms elsewhere or hover above the screen, resulting in ergonomic and usability problems. We present a probabilistic touch filtering approach that uses the temporal evolution of touch contacts to reject palms. Our system improves upon previous approaches, reducing accidental palm inputs to 0.016 per pen stroke, while correctly passing 98% of stylus inputs. Published at CHI 2014.
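The iterative flavor of the approach can be sketched in a few lines of Python: each contact is re-examined at successively longer windows after touch-down, and the votes are combined into a final verdict. The window sizes, the single movement feature, and the threshold below are hypothetical stand-ins for the paper's trained classifiers.

    WINDOWS_MS = [25, 50, 100, 200, 500]   # illustrative re-classification delays

    def path_length(samples, window_ms):
        """Total movement of a contact within window_ms of touch-down.
        samples: list of (t_ms, x, y) tuples."""
        t0 = samples[0][0]
        pts = [p for p in samples if p[0] - t0 <= window_ms]
        return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                   for (_, x1, y1), (_, x2, y2) in zip(pts, pts[1:]))

    def classify(samples):
        """Vote across windows; later windows see richer evidence."""
        votes = 0
        for w in WINDOWS_MS:
            # Hypothetical rule: palms tend to rest nearly still,
            # while a stylus travels as the user writes.
            votes += 1 if path_length(samples, w) > 0.05 * w else -1
        return "stylus" if votes > 0 else "palm"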

Lumitrack: Low Cost, High Precision and High Speed Tracking with Projected m-Sequences

Lumitrack is a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six-degree-of-freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive subsequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed (500+ FPS), high-precision (sub-millimeter), and low-cost motion tracking for a wide range of interactive applications. Published at UIST 2013.
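The window-uniqueness property is what enables absolute positioning: reading any m consecutive bits pins down exactly where in the pattern a sensor sits. A small Python sketch (the 7-bit LFSR taps and window length are illustrative choices):

    def m_sequence(n=7, taps=(7, 6)):
        """Maximal-length bit sequence (period 2^n - 1) from a Fibonacci LFSR."""
        state, bits = (1 << n) - 1, []
        for _ in range((1 << n) - 1):
            bits.append(state & 1)
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (fb << (n - 1))
        return bits

    N = 7
    SEQ = m_sequence(N)
    # Every N-bit window occurs exactly once, so a window indexes its own position.
    INDEX = {tuple(SEQ[i:i + N]): i for i in range(len(SEQ) - N + 1)}

    def locate(window):
        """Recover a 1D position in the projected pattern from N sensed bits."""
        return INDEX.get(tuple(window))   # None if the read was corrupted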

WorldKit: Ad Hoc Interactive Applications on Everyday Surfaces

WorldKit makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified “each time we sit down” by “painting” them next to us. From the programmer’s perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Published at CHI 2013.

ZoomBoard: A Diminutive QWERTY Keyboard for Ultra-Small Devices

The proliferation of touchscreen devices has made soft keyboards a routine part of life. However, ultra-small computing platforms like the Sony SmartWatch and Apple iPod Nano lack a means of text entry. This limits their potential, despite the fact that they are capable computers. We present a soft keyboard interaction technique called ZoomBoard that enables text entry on ultra-small devices. Our approach uses iterative zooming to enlarge otherwise impossibly tiny keys to a comfortable size. We ran a text entry experiment on a keyboard measuring just 16 x 6 mm – smaller than a US penny. Users achieved roughly 10 words per minute, entering short phrases both quickly and quietly. Published at CHI 2013.
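The core mapping is simple enough to sketch in Python. This illustrative version performs a single zoom step (the real technique iterates until keys are comfortably large); the layout, dimensions and zoom factor are assumptions.

    ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]   # simplified QWERTY
    W, H = 16.0, 6.0                                # keyboard size in mm

    def key_at(x, y):
        """Map keyboard-space coordinates to a key."""
        row = min(int(y / (H / len(ROWS))), len(ROWS) - 1)
        col = min(int(x / (W / len(ROWS[row]))), len(ROWS[row]) - 1)
        return ROWS[row][col]

    def zoomboard_tap(first, second, zoom=3.0):
        """The first tap picks a region to magnify; the second tap, made in
        the zoomed view, is mapped back to keyboard space to commit a key."""
        ox = min(max(first[0] - W / (2 * zoom), 0.0), W - W / zoom)
        oy = min(max(first[1] - H / (2 * zoom), 0.0), H - H / zoom)
        return key_at(ox + second[0] / zoom, oy + second[1] / zoom)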

Capacitive Fingerprinting: User Differentiation Through Capacitive Sensing

At present, touchscreens can differentiate multiple points of contact, but not who is touching the device. We propose a novel sensing approach based on Swept Frequency Capacitive Sensing that enables touchscreens to attribute touch events to a particular user. This is achieved by measuring the impedance of a user to the environment (earth ground) across a range of AC frequencies. Natural variation in bone density, muscle mass and other biological factors, as well as clothing, affects a user's impedance profile. This is often sufficiently unique to enable user differentiation. This project has significant implications for the design of touch-centric games and collaborative applications. Published at UIST 2012.
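A toy Python sketch of the classification step: each enrolled user is represented by an impedance profile sampled at the swept frequencies, and a new touch is attributed to the nearest profile. The numbers and the nearest-neighbor rule are illustrative stand-ins for the real pipeline.

    import numpy as np

    # Hypothetical enrolment data: one impedance value per swept frequency bin.
    PROFILES = {
        "alice": np.array([0.82, 0.61, 0.44, 0.39, 0.35]),
        "bob":   np.array([0.91, 0.72, 0.58, 0.41, 0.30]),
    }

    def identify(sweep):
        """Attribute a touch to the enrolled user with the closest profile."""
        sweep = np.asarray(sweep, dtype=float)
        return min(PROFILES, key=lambda u: np.linalg.norm(PROFILES[u] - sweep))

    print(identify([0.90, 0.70, 0.57, 0.42, 0.31]))   # -> bob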

Acoustic Barcodes: Passive, Durable and Inexpensive Notched Identification Tags

We present Acoustic Barcodes, structured patterns of physical notches that, when swiped with, e.g., a fingernail, produce a complex sound that can be resolved to a binary ID. A single, inexpensive contact microphone attached to a surface or object is used to capture the waveform. We present our method for decoding sounds into IDs, which handles variations in swipe velocity and other factors. Acoustic Barcodes could be used for information retrieval or to trigger interactive functions. They are passive, durable and inexpensive to produce. Further, they can be applied to a wide range of materials and objects, including plastic, wood, glass and stone. Published at UIST 2012.
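The key to velocity invariance is that notch spacing, not absolute timing, carries the bits, so inter-impulse gaps can be normalized by the shortest gap. A rough Python sketch; the thresholded onset detector and the wide-gap-equals-one encoding are assumptions rather than the paper's exact scheme.

    import numpy as np

    def decode(signal, thresh=0.5):
        """Turn a swipe waveform into bits, tolerating swipe-speed variation."""
        env = np.abs(signal)
        # Crude impulse detection: thresholded local maxima.
        peaks = [i for i in range(1, len(env) - 1)
                 if env[i] > thresh and env[i] >= env[i - 1] and env[i] >= env[i + 1]]
        gaps = np.diff(peaks)
        if len(gaps) == 0:
            return []
        unit = gaps.min()                # shortest gap defines the unit spacing
        return [1 if g / unit > 1.5 else 0 for g in gaps]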

Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects

Touché proposes a novel form of capacitive touch sensing that we call Swept Frequency Capacitive Sensing (SFCS). This technology can infuse rich touch and gesture sensitivity into a variety of analogue and digital objects. For example, Touché can not only detect touch events, but also recognize complex configurations of the hands and body. Such contextual information can enhance a broad range of applications, from conventional touchscreens to unique contexts and materials, including the human body and liquids. Finally, Touché is inexpensive, safe, low power and compact; it can be easily embedded or temporarily attached anywhere touch and gesture sensitivity is desired. Published at CHI 2012.

Using Shear as a Supplemental Input Channel for Rich Touchscreen Interaction

Touch input is constrained, typically only providing finger X/Y coordinates. We suggest augmenting touchscreens with a largely unutilized input dimension: shear (force tangential to a screen's surface). Similar to pressure, shear can be used in concert with conventional finger positional input. However, unlike pressure, shear provides a rich, analog 2D input space, which has many powerful uses. We put forward five classes of advanced interaction that considerably expand the envelope of interaction possible on touchscreens. Published at CHI 2012.

Unlocking the Expressivity of Point Lights

Since the advent of the electronic age, devices have incorporated small point lights for communication purposes. This has afforded devices a simple, but reliable communication channel without the complication or expense of e.g., a screen. For example, a simple light can let a user know their stove is on, a car door is ajar, the alarm system is active, or that a battery has finished charging. Unfortunately, very few products seem to take full advantage of the expressive capability simple lights can provide. The most commonly encountered light behaviors are quite simple: light on, light off, and light blinking. Not only is this vocabulary incredibly small, but these behaviors are not particularly iconic. Published at CHI 2012.

Phone as a Pixel: Enabling Ad-Hoc, Large-Scale Displays Using Mobile Devices

Phone as a Pixel is a scalable, synchronization-free, platform-independent system for creating large, ad-hoc displays from a collection of smaller devices, such as smartphones. To participate, devices need only a web browser. We employ a color-transition scheme to identify and locate displays. This approach has several advantages: devices can be arbitrarily arranged (i.e., not in a grid) and the infrastructure consists of a single conventional camera. Further, additional devices can join at any time without re-calibration. These are desirable properties for enabling collective displays in contexts like sporting events, concerts and political rallies. Published at CHI 2012.
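The identification step can be sketched in Python: each device flashes a color sequence encoding its ID, and the camera recovers the ID (and pixel location) of every device it can see. For simplicity this sketch encodes bits as absolute colors; the actual system keys on color transitions, which are more robust to camera exposure and white balance.

    COLORS = {0: (255, 0, 0), 1: (0, 255, 0)}   # bit -> displayed color (illustrative)

    def announce(device_id, bits=16):
        """Frame-by-frame color sequence a device shows to broadcast its ID."""
        return [COLORS[(device_id >> i) & 1] for i in range(bits)]

    def recover(observed):
        """Invert the scheme from a camera's view of one device over time."""
        bit_of = {color: bit for bit, color in COLORS.items()}
        value = 0
        for i, color in enumerate(observed):
            value |= bit_of[color] << i
        return value

    assert recover(announce(1234)) == 1234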

Ultrasonic Doppler Sensing in HCI

Researchers and practitioners can now draw upon a large suite of sensing technologies for their work. Relying on thermal, chemical, electromagnetic, optical, acoustic, mechanical, and other means, these sensors can detect faces, hand gestures, humidity, blood pressure, proximity, and many other aspects of our state and environment. We present an overview of our work on an ultrasonic Doppler sensor. This technique has unique qualities that we believe make it a valuable addition to the suite of sensing approaches HCI researchers and practitioners should consider in their applications. Published in IEEE Pervasive.

On-Body Interaction: Armed and Dangerous

We consider how the arms and hands can be used to enhance on-body interaction, which is typically finger-input centric. To explore this opportunity, we developed Armura, a novel interactive on-body system supporting both input and graphical output. Using this platform as a vehicle for exploration, we prototyped a series of applications and interactions. This helped to confirm chief use modalities, identify fruitful interaction approaches, and, in general, better understand how interfaces operate on the body. This paper is the first to consider and prototype how conventional interaction issues, such as cursor control and clutching, apply to the on-body domain. Additionally, we bring to light several new and unique interaction techniques. Published at TEI 2012.

TapSense: Enhancing Finger Interaction on Touch Surfaces

TapSense is an enhancement to touch interaction that allows conventional screens to identify how the finger is being used for input. Our system can recognize which part of the finger makes contact – the tip, pad, nail or knuckle – without the user having to wear any electronics. This opens several new and powerful interaction opportunities for touch input, especially on mobile devices, where input bandwidth is limited due to small screens and fat fingers. For example, a knuckle tap could serve as a "right click" for mobile device touch interaction, effectively doubling input bandwidth. Published at UIST 2011.

OmniTouch: Wearable Multitouch Interaction Everywhere

OmniTouch is a novel wearable system that enables graphical, interactive, multitouch input on arbitrary, everyday surfaces. Our body-worn implementation allows users to manipulate interfaces projected onto the environment (e.g., walls, tables), held objects (e.g., notepads, books), and their own bodies (e.g., hands, lap). A key contribution is our depth-driven fuzzy template matching and clustering approach to multitouch finger tracking. This enables on-the-go interactive capabilities, with no calibration, training or instrumentation of the environment or the user, creating an always-available projected multitouch interface. Published at UIST 2011.

PocketTouch: Through-Fabric Capacitive Touch Input

Modern mobile devices are sophisticated computing platforms, enabling people to handle phone calls, listen to music, surf the web, reply to emails, compose text messages, and much more. These devices are often stored in pockets or bags, requiring users to remove them in order to access even basic functionality. This demands a high level of attention - both cognitively and visually - and is often socially disruptive. Further, physically retrieving the device incurs a non-trivial time cost, and can constitute a significant fraction of a simple operation’s total time. We developed a novel method for through-pocket interaction called PocketTouch. Published at UIST 2011.

A New Angle on Cheap LCDs: Making Positive Use of Optical Distortion

When viewing LCD monitors from an oblique angle, it is not uncommon to witness a dramatic color shift. Engineers and designers have sought to reduce these effects for more than two decades. We take an opposite stance, embracing these optical peculiarities, and consider how they can be used in productive ways. Our paper discusses how a special palette of colors can yield visual elements that are invisible when viewed straight-on, but visible at oblique angles. In essence, this allows conventional, unmodified LCD screens to output two images simultaneously – a feature normally only available in far more complex setups. Published at UIST 2011.

Kineticons: Using Iconographic Motion in Graphical User Interface Design

In this paper, we define a new type of iconographic scheme for graphical user interfaces based on motion. We refer to these "kinetic icons" as kineticons. In contrast to static graphical icons and icons with animated graphics, kineticons do not alter the visual content of a graphical element. Although kineticons are not new – indeed, they are seen in several popular systems – we formalize their scope and utility. One powerful quality is their ability to be applied to GUI elements of varying size and shape – from something as small as a close button to something as large as a dialog box, or even the entire desktop. This allows a suite of system-wide kinetic behaviors to be reused for a variety of purposes. Published at CHI 2011.

SurfaceMouse: Supplementing Multi-Touch Interaction with a Virtual Mouse

SurfaceMouse is a virtual mouse implementation for multi-touch surfaces. A key design objective was to leverage as much pre-existing knowledge (and potentially muscle memory) regarding mice as possible, making interactions immediately familiar. To invoke SurfaceMouse, a user simply places their hand on an interactive surface as if a mouse were present. The system recognizes this characteristic gesture and renders a virtual mouse under the hand, which can be used like a real mouse. In addition to two-dimensional movement (X and Y axes), our proof-of-concept implementation supports left and right clicking, as well as up/down scrolling. Published at TEI 2011.

Pediluma: Motivating Physical Activity Through Contextual Information and Social Influence

We developed Pediluma, a shoe accessory designed to encourage opportunistic physical activity. It features a light that brightens the more the wearer walks and slowly dims when the wearer remains stationary. This interaction was purposely simple so as to remain lightweight, both visually and cognitively. Even simple, personal pedometers have been shown to promote walking. Pediluma takes this a step further, attempting to engage people around the wearer to elicit social effects. Although lights have been previously incorporated into shoes, this is the first time they have been used to display motivational information. Published at TEI 2011.

TeslaTouch: Electrovibration for Touch Surfaces

TeslaTouch infuses finger-driven interfaces with physical feedback. The technology is based on the electrovibration principle, which can programmatically vary the electrostatic friction between fingers and a touch panel. Importantly, there are no moving parts, unlike most tactile feedback technologies, which typically use mechanical actuators. This allows for different fingers to feel different sensations. When combined with an interactive graphical display, TeslaTouch enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. Published at UIST 2010.

Appropriated Interaction Surfaces

Devices with significant computational power and capability can now be easily carried with us. These devices have tremendous potential to bring the power of information, creation, and communication to a wider audience and to more aspects of our lives. However, with this potential come new challenges for interaction design. For example, we have yet to figure out a good way to miniaturize devices without simultaneously shrinking their interactive surface area. This has led to diminutive screens, cramped keyboards, and tiny jog wheels - all of which diminish usability and prevent us from realizing the full potential of mobile computing. Published in IEEE Computer Magazine.

Skinput: Appropriating the Body as an Input Surface

Skinput is a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as a finger input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always-available, naturally-portable, and on-body interactive surface. To illustrate the potential of our approach, we developed several proof-of-concept applications on top of our sensing and classification system. Published at CHI 2010.

Minput: Enabling Interaction on Small Mobile Devices with High-Precision, Low-Cost, Multipoint Optical Tracking

Minput is a sensing and input method that enables intuitive and accurate interaction on very small devices – ones too small for practical touch screen use and with limited space to accommodate physical buttons. We achieve this by adding two inexpensive, high-precision optical sensors (like those found in optical mice) to the underside of the device. This allows the entire device, rather than the screen, to be used as an input mechanism, avoiding occlusion by fingers. In addition to x/y translation, our system also captures twisting motion, enabling many interesting interaction opportunities typically found in larger and far more complex systems. Published at CHI 2010.
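With two sensors, translation and twist fall out of simple geometry: the common motion of the pair is pan, and the differential motion is rotation. A minimal Python sketch, assuming the sensors sit 30 mm apart along the device's x-axis:

    def integrate(d1, d2, baseline_mm=30.0):
        """Fuse one frame's (dx, dy) deltas from the two optical sensors.

        For small angles, the differential y motion is rotation about the
        midpoint (arc length ~ baseline * angle)."""
        pan = ((d1[0] + d2[0]) / 2.0, (d1[1] + d2[1]) / 2.0)
        twist_rad = (d2[1] - d1[1]) / baseline_mm
        return pan, twist_rad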

Faster Progress Bars: Manipulating Perceived Duration with Visual Augmentations

Human perception of time is fluid, and can be manipulated in purposeful and productive ways. We propose and evaluate variations on two visual designs for progress bars that alter users' perception of time passing, and "appear" faster when in fact they are not. In a series of direct comparison tests, we are able to rank how different augmentations compare to one another. We then show that these designs yield statistically significantly shorter perceived durations than conventional progress bars. Progress bars with animated ribbing that moves backwards in a decelerating manner proved to have the strongest effect. We measured the effect of this particular progress bar design and show that it reduces perceived duration by 11%. Published at CHI 2010.

Cord Input: An Intuitive, High-Accuracy, Multi-Degree-of-Freedom Input Method for Mobile Devices

A cord, although simple in form, has many interesting physical affordances that make it powerful as an input device. Not only can a length of cord be grasped in different locations, but also pulled, twisted and bent — four distinct and expressive dimensions that could potentially act in concert. Such an input mechanism could be readily integrated into headphones, backpacks, and clothing. Once grasped in the hand, a cord can be used in an eyes-free manner to control mobile devices, which often feature small screens and cramped buttons. We built a proof-of-concept cord-based sensor, which senses three of the four input dimensions we propose. Published at CHI 2010.

Evaluation of Progressive Image Loading Schemes

Although network bandwidth has increased dramatically, high-resolution images often take several seconds to load, and considerably longer on mobile devices over wireless connections. Progressive image loading techniques allow for some visual content to be displayed prior to the whole file being downloaded. In this note, we present an empirical evaluation of popular progressive image loading methods, and derive one novel technique from our findings. Results suggest a spiral variation of bilinear interlacing can yield an improvement in content recognition time. Published at CHI 2010.
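A sketch of the transmission schedule such a spiral scheme implies, in Python: grid cells are sent in spiral order from the image center, where photographic subjects tend to sit, and coarsely interpolated until their true pixels arrive. Treating the schedule at grid-cell granularity is an illustrative simplification.

    def spiral_order(w, h):
        """Yield (x, y) cells in a center-out spiral: a transmission order."""
        cx, cy = w // 2, h // 2
        x = y = 0
        dx, dy = 0, -1
        cells = []
        while len(cells) < w * h:
            px, py = cx + x, cy + y
            if 0 <= px < w and 0 <= py < h:
                cells.append((px, py))
            if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
                dx, dy = -dy, dx           # turn the spiral's corner
            x, y = x + dx, y + dy
        return cells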

Achieving Ubiquity: The New Third Wave

Mark Weiser envisioned a third wave of computing, one with "hundreds of wireless computers in every office," which would come about as the cost of electronics fell. Two decades later, some in the Ubiquitous Computing community point to the pervasiveness of microprocessors as a realization of this dream. Without a doubt, many of the objects we interact with on a daily basis are digitally augmented – they contain microchips, buttons and even screens. But is this the one-to-many relationship of people-to-computers that Weiser envisioned? Published in IEEE Multimedia.

Whack Gestures: Inexact and Inattentive Interaction with Mobile Devices

Whack Gestures seeks to provide a simple means to interact with devices with minimal attention from the user – in particular, without the use of fine motor skills or detailed visual attention (requirements found in nearly all conventional interaction techniques). For mobile devices, this could enable interaction without "getting it out," grasping, or even glancing at the device. Users can simply interact with a device by striking it with an open palm or the heel of the hand. Published at TEI 2010.

Abracadabra: Wireless, High-Precision, and Unpowered Finger Input for Very Small Mobile Devices

We developed a magnetically-driven input approach that makes use of the (larger) space around a (very small) device. Our technique provides robust, inexpensive, and wireless input from fingers, without requiring powered external components. By extending the input area to many times the size of the device's screen, our approach is able to offer a high control-display (C-D) gain, enabling fine motor control. Additionally, screen occlusion can be reduced by moving interaction off of the display and into unused space around the device. Published at UIST 2009.

Stacks on the Surface: Resolving Physical Order with Masked Fiducial Markers

We present a method for identifying the order of stacked items on interactive surfaces. This is achieved using conventional, passive fiducial markers, which in addition to reflective regions, also incorporate structured areas of transparency. This allows particular orderings to appear as unique marker patterns. We discuss how such markers are encoded and fabricated, and include relevant mathematics. To motivate our approach, we comment on various scenarios where stacking could be especially useful. We conclude with details from our proof-of-concept implementation, built on Microsoft Surface. Published at ITS 2009.

Providing Dynamically Changeable Physical Buttons on a Visual Display

Physical buttons have the unique ability to provide low-attention and vision-free interactions through their intuitive tactile cues. Unfortunately, the physicality of these interfaces makes them static, limiting the number and types of user interfaces they can support. On the other hand, touch screen technologies provide the ultimate interface flexibility, but offer no inherent tactile qualities. In this paper, we describe a technique that seeks to occupy the space between these two extremes – offering some of the flexibility of touch screens, while retaining the beneficial tactile properties of physical interfaces. Published at CHI 2009.

Texture Displays: A Passive Approach to Tactile Presentation

Despite touch being a rich sensory channel, tactile output is almost exclusively vibrotactile in nature. We explore a different type of tactile display, one that can assume several different textural states (e.g., sticky, bumpy, smooth, coarse, gritty). In contrast to conventional vibrotactile approaches, these displays provide information passively. Only when they are explicitly handled by the user, either with intent to inquire about the information, or in the course of some other action, can state be sensed. This inherently reduces their attention demand and intrusiveness. We call this class of devices texture displays. Published at CHI 2009.

Where to Locate Wearable Displays? Reaction Time Performance of Visual Alerts from Tip to Toe

On-body interfaces will need to notify users about new emails, upcoming meetings, changes in the weather forecast, stock market fluctuations, excessive caloric intake, and other aspects of our lives that worn computers will be able to monitor directly or have access to. One way to draw a wearer's attention is through visual notifications. However, there has been little research into their optimal body placement, despite being an unobtrusive, lightweight, and low-powered information delivery mechanism. Furthermore, visual stimuli have the added benefit of being able to work alone or in concert with conventional methods, including auditory and vibrotactile alerts. Published at CHI 2009.

Pseudo-3D Video Conferencing with a Generic Webcam

When conversing with someone via video conference, you are provided with a virtual window into their space. However, this currently remains both flat and fixed, limiting its immersiveness. We present a method for producing a pseudo-3D experience using only a single generic webcam at each end. This means nearly any computer currently able to video conference can use our technique, making it readily adoptable. Although using comparatively simple techniques, the 3D result is convincing. Published at ISM 2008.

Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces

We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Published at UIST 2008.

Lightweight Material Detection for Placement-Aware Mobile Computing

Numerous methods have been proposed that allow mobile devices to determine where they are located (e.g., home or office) and in some cases, predict what activity the user is currently engaged in (e.g., walking, sitting, or driving). While useful, this sensing currently only tells part of a much richer story. To allow devices to act most appropriately to the situation they are in, it would also be very helpful to know about their placement – for example whether they are sitting on a desk, hidden in a drawer, placed in a pocket, or held in one’s hand – as different device behaviors may be called for in each of these situations. Published at UIST 2008.

3D Head Tracking With a Generic Webcam

Using OpenCV's Haar cascade face detection functionality, I was able to throw together a real-time head tracking program. I coupled this with several 3D demos, including a Johnny Lee-style targets demo. Performance is surprisingly good. On my 2.16 GHz MacBook I am tracking at 25 frames per second - sufficient for real-time interaction with no obtrusive reaction delays. Additionally, I've optimized the code so that only 50% of a single processor core is consumed (and that's on my relatively slow laptop).
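For reference, the equivalent loop in a few lines of modern OpenCV Python (the original predates these bindings; the camera index and detector parameters are assumptions):

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.2, 5):
            # The face center gives head x/y; apparent width stands in for
            # distance (bigger = closer), enough for Lee-style perspective demos.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("head tracking", frame)
        if cv2.waitKey(1) == 27:   # Esc to quit
            break
    cap.release()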

Lean and Zoom: Proximity-Aware User Interface and Content Magnification

The size and resolution of computer displays has increased dramatically, allowing more information than ever to be rendered on-screen. However, items can now be so small or screens so cluttered that users need to lean forward to properly examine them. This behavior may be detrimental to a user’s posture and eyesight. Our Lean and Zoom system detects a user’s proximity to the display using a camera and magnifies the on-screen content proportionally. Results from a user study indicate people find the technique natural and intuitive. Most participants found on-screen content easier to read, and believed the technique would improve both their performance and comfort. Published at CHI 2008.

Rethinking the Progress Bar

Numerous factors cause progress bars to proceed at non-linear rates (e.g. pauses, acceleration, deceleration). Additionally, humans perceive time in a non-linear way. The combination of these effects produces a highly variable perception of how long it takes progress bars to complete. Results from a user study indicate there are several factors that can be manipulated to make progress bars appear faster, when in fact they are not, or potentially even slower. Published at UIST 2007.



Masters Projects

The Sling in Medieval Europe

The simple sling is often neglected when reviewing the long history of ranged warfare. Scholars typically focus on the simple thrown spear, atlatl, throwing axe, bow, and crossbow. However, in experienced hands, the sling was arguably the most effective personal projectile weapon until the 15th century, surpassing the accuracy and deadliness of the bow and even of early firearms. Published in the Bulletin of Primitive Technology.

Image Clustering

Using the RCBIS package developed by Stacey Kuznetsov and myself, Jeff Borden put together an image clustering package. The application was created using the Prefuse information visualization toolkit and a K-means clustering method. The latter used the distance metric that the RCBIS engine calculates when given two images.
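Because the engine supplies pairwise distances rather than feature vectors, the K-means recipe effectively becomes k-medoids: cluster centers must themselves be images. A toy Python sketch, with numbers standing in for image signatures:

    import random

    def cluster(items, dist, k, iters=20, seed=0):
        """K-means-style clustering driven by a pairwise distance function."""
        rng = random.Random(seed)
        medoids = rng.sample(items, k)
        groups = [[] for _ in range(k)]
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for it in items:
                groups[min(range(k), key=lambda i: dist(it, medoids[i]))].append(it)
            # New medoid: the member minimizing total distance to its group.
            medoids = [min(g, key=lambda c: sum(dist(c, o) for o in g)) if g else m
                       for g, m in zip(groups, medoids)]
        return groups

    print(cluster([1, 2, 3, 50, 51, 52], dist=lambda a, b: abs(a - b), k=2))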

Search by Sketch

After several months of intense development alongside Stacey Kuznetsov, the Rapid Content-Based Image Search (RCBIS) engine was ready for use in user-oriented programs. The most obvious and appealing project was to develop an image search engine that took not text, but a sketch, as input.

CollaboraTV: Asynchronous Social Television

Television was once championed as the "electronic hearth" that would bring people together. Indeed, television shows provide a common experience, often affording even total strangers a social connection on which to initiate conversation. However, a fundamental shift in how we consume media is degrading such social interactions significantly – an increasing number of people are no longer watching television shows as they are broadcast. Instead, users are favoring non-live media sources, such as Digital Video Recorders, Video-On-Demand services, and even rented physical media (e.g. Netflix). Published at UXTV 2008.

iEPG: An Ego-Centric Electronic Program Guide and Recommendation Interface

Conventional program guides present television shows in a list view, with metadata displayed in a separate window. However, this linear presentation style prevents users from fully exploring and utilizing the diverse, descriptive, and highly connected data associated with television programming. Additionally, despite the fact that program guides are the primary selection interface for television shows, few include integrated recommendation data to help users decide what to watch. iEPG presents a novel interface concept for navigating the multidimensional information space associated with television programming, as well as an effective visualization for displaying complex ratings data. Published at UXTV 2008.

Kronosphere: Temporal File Navigation

Hierarchical file systems mirror the way people organize data in the physical world. However, this method of organization is often inadequate in managing the immense number of files that populate users’ hard drives. Systems that employ a time-centric or content-driven approach have proven to be compelling alternatives. Kronosphere offers a powerful time and content-based navigation and management paradigm that has been specifically designed to leverage universal cognitive abilities.

Speech Enhanced Commenting

Ineffective and insufficient commenting of code is a major problem in software development. Coders find commenting frustrating and time consuming. Often, programmers will write a section of code and come back later to comment it, potentially reducing the accuracy of the information. Speech Enhanced Commenting is a system that takes advantage of an underused human output: voice. It allows users to speak their comments as they program, which is natural and can happen in parallel with typing the code.

Visual Hash

Checking passwords often requires a complex algorithm, but computation time is generally small. For users on their local machines, it's insignificant. However, if the password needs to be checked by a remote login server, things become more expensive. Secure connections have to be negotiated and databases have to be queried. Throw in a few thousand users and you are looking at a serious load problem. Visual hashes can reduce the burden on login servers by eliminating some unnecessary login attempts.
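One plausible construction, sketched in Python: derive a deterministic color swatch from the typed password, so the user notices a typo before any round-trip to the server happens. The hash choice and cell count are illustrative.

    import hashlib

    def visual_hash(password, cells=9):
        """Deterministic RGB swatch for a password; a typo visibly changes it."""
        digest = hashlib.sha256(password.encode()).digest()
        return [tuple(digest[3 * i:3 * i + 3]) for i in range(cells)]

    # Same input, same swatch; a mistyped password almost certainly differs.
    assert visual_hash("hunter2") == visual_hash("hunter2")
    assert visual_hash("hunter2") != visual_hash("hunter3")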

Aphrodisias Regional Survey

During the 2006 and 2007 summers, I spent eleven weeks in Turkey assisting an archeological expedition at a Greco-Roman city named Aphrodisias. A small Geographical Information Systems (GIS) team worked on producing maps to assist the project, both in surveying the region and in collecting artifacts. Creating site-level contour maps was a major effort in the first season. I primarily worked on developing methods for predicting the locations of ancient sites, such as iron mines and forts, and the routes of aqueducts and ancient roads.

'A' PPC Compiler

Compilers are quite possibly the programmer's most frequently used tool, yet they are often overlooked. Over the course of two months, I coded up a compiler in C++ for a medium-sized language dubbed 'A'. The compiler supports all the goodies, including: arrays, pointers, function calls, recursion, type checking, static scoping, and more. The output: executable PPC assembly.

Gesture Based Process Scheduling

People place different levels of importance on the applications they use. This importance is often transient. For example, when the user is in a rush and wants to quickly check their email before they leave, the email application should not have to wait on other applications. At other times, the user might accept a longer launch time because they also value background processes, like downloading a file or transferring pictures from their digital camera. Anticipating user needs is practically impossible. However, users provide a wealth of feedback in the form of gestures.

Enki

Over the 2005 summer, I had the pleasure of working at IBM's Almaden Research Center in San Jose, California, under the leadership of John Barton and Stephen Farrell. The development team consisted of Chris Parker, Meng Mao and myself. We worked on a unique application which blended aspects of personal information management, social networking, and content management. The project was dubbed Enki, after the Sumerian God of Knowledge.



Undergraduate

A Comparative Analysis of Archaic Peoples and Early Dutch Settlers Living in The New York Region

In 1624, the Dutch established their first outpost about 150 miles north of Manhattan, up the Hudson River, called Fort Orange. It was located near the modern city of Albany, NY. Interestingly, the environment was similar to that of prehistoric Manhattan. The plant life was similar, with spruce, fir, birch, and poplar dominating the area around Fort Orange. Because archaic groups and a fledgling European outpost relied so heavily on the local environment to survive, there are many commonalities in how they lived. Using this similar environmental context, one can explore how each group, with their different technologies and social traditions, adapted and survived.

ChriOS

This page is dedicated to describing my work for the OS Bakeoff, which was the finale to a semester-long advanced operating systems class. Students started by building their own shell, and later added support for booting the OS (on x86 hardware), paging (protected memory management), multiple program environments, scheduling, and preemptive multitasking, to name a few.

An Investigation of Graham’s Scan and Jarvis’ March

With the advent of computers, which could perform millions of mathematical calculations per second, new topics and problems in geometry began to emerge, and the field of computational geometry was born. As the subject evolved, a number of new applications became apparent, ranging from computer vision and geographical data analysis to collision detection for robotics and molecular biology. This paper discusses perhaps one of the oldest and most celebrated computational geometry problems: the convex hull.
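For the record, the scan fits in a dozen lines of Python; this is Andrew's monotone-chain variant of Graham's idea, O(n log n) from the sort:

    def cross(o, a, b):
        """Z-component of (a - o) x (b - o); positive means a left turn."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def convex_hull(points):
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        lower, upper = [], []
        for seq, chain in ((pts, lower), (reversed(pts), upper)):
            for p in seq:
                while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                    chain.pop()
                chain.append(p)
        return lower[:-1] + upper[:-1]   # counter-clockwise hull

    print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))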

JServer

A multithreaded server capable of pipelining HTTP requests from multiple concurrent clients. Features an LRU cache to store frequently used files in main memory and supports a full complement of MIME types via an external config file. All in about 500 lines of Java code...
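The cache policy is the interesting bit; here is the same structure sketched in Python rather than the original Java, with an illustrative capacity:

    from collections import OrderedDict

    class FileCache:
        """LRU cache of file contents: recently served files stay in memory."""
        def __init__(self, capacity=32):
            self.capacity = capacity
            self.entries = OrderedDict()

        def get(self, path):
            if path in self.entries:
                self.entries.move_to_end(path)     # mark as recently used
                return self.entries[path]
            with open(path, "rb") as f:            # miss: read from disk
                data = f.read()
            self.entries[path] = data
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)   # evict least recently used
            return data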



High School Projects

Temperature Regulated Charge Algorithms

This page is dedicated to research I conducted in my high school's science research class. I presented the work at several science competitions and ultimately filed a patent for the technology during my freshman year in university. This page is very out of date.

BaLiS (Bacterium Life Simulator)

BaLiS was my first attempt at designing an AI. It uses a priority matrix that builds as a virtual bacterium explores its environment. The bacterium encounters different items, such as food, exhibits certain behaviors based on what it finds, and adjusts its memory accordingly.
