Posts by Collection



Indutivo: Contact-Based Object-Driven Interactions with Inductive Sensing

Published in Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '18), 2018

Watch Indutivo Demo Video

We present Indutivo, a contact-based inductive sensing technique for contextual interactions. Our technique recognizes conductive (primarily metallic) objects commonly found in households and daily environments, as well as their individual movements when placed against the sensor. These movements include sliding, hinging, and rotation. We describe our sensing principle and how we designed the size, shape, and layout of our sensor coils to optimize sensitivity, sensing range, and recognition and tracking accuracy. Through several studies, we demonstrate the performance of our proposed sensing technique in environments with varying levels of noise and interference. We conclude by presenting demo applications on a smartwatch, as well as insights and lessons learned from our experience.
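The abstract doesn't spell out the recognition pipeline; as a rough illustration of the idea, one simple way to recognize an object from per-coil inductance shifts is nearest-neighbor matching against pre-recorded signatures. All object names and sensor values below are invented for the sketch:

```python
import numpy as np

# Hypothetical reference signatures: mean inductance shift (arbitrary sensor
# counts) observed on each of three coils when the object rests on the sensor.
REFERENCE = {
    "key":      np.array([120.0, 40.0, 10.0]),
    "scissors": np.array([300.0, 280.0, 90.0]),
    "battery":  np.array([60.0, 200.0, 180.0]),
}

def classify(reading, reference=REFERENCE):
    """Return the object whose per-coil signature is nearest (L2) to `reading`."""
    return min(reference, key=lambda name: np.linalg.norm(reading - reference[name]))

print(classify(np.array([118.0, 45.0, 12.0])))  # -> key
```

A real system would use richer features and a trained classifier, but the signature-matching framing is the core of contact-based object recognition.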

Recommended citation: Jun Gong, Xin Yang, Teddy Seyed, Josh Urban Davis, and Xing-Dong Yang. "Indutivo: Contact-Based Object-Driven Interactions with Inductive Sensing." In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '18). Berlin, Germany, 2018.

IllumiWear: A Bendable Interactive Fiber-Optic eTextile for Audio and Visual Interactions

Published in New Interfaces for Musical Expression (NIME), 2019

We present IllumiWear, a novel eTextile prototype that uses fiber-optic cables as interactive input and visual output. Fiber-optic cables are separated into bundles and then woven like a basket into a bendable, glowing fabric. By equipping one side of these bundles with light-emitting diodes and the other with photodiode light-intensity sensors, loss of light intensity can be measured when the fabric is bent. IllumiWear's sensing technique not only discriminates between discrete touches, slight bends, and harsh bends, but also recovers the location of deformation. In this way, our computational fabric prototype uses its intrinsic means of visual output (light) as a tool for interactive input. We provide design and implementation details for our prototype, as well as a technical evaluation of its effectiveness and limitations as an interactive computational textile. In addition, we examine the potential of the prototype's interactive capabilities by extending our eTextile to create a tangible user interface for audio and visual manipulation.

Download paper here
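The sensing principle, reduced to a sketch: each bundle carries light from an LED to a photodiode, and bending attenuates what arrives, so the amount of loss distinguishes gestures and the most-attenuated bundle localizes the bend. The thresholds and normalized units below are illustrative placeholders, not values from the paper:

```python
def classify_deformation(reading):
    """Map a normalized photodiode reading (1.0 = unbent baseline) to a label.
    Threshold values are illustrative, not measured on the prototype."""
    if reading > 0.97:
        return "none"         # negligible light loss
    if reading > 0.9:
        return "touch"        # slight attenuation from finger contact
    if reading > 0.5:
        return "slight bend"
    return "harsh bend"       # severe attenuation

def locate_bend(readings):
    """Return the index of the woven bundle showing the greatest light loss."""
    return min(range(len(readings)), key=lambda i: readings[i])

print(classify_deformation(0.3), locate_bend([0.98, 0.40, 0.96]))  # -> harsh bend 1
```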

Recommended citation: Josh Urban Davis. "IllumiWear: A Bendable Interactive Fiber-Optic eTextile for Audio and Visual Interactions." In Proceedings of New Interfaces for Musical Expression (NIME). Porto Alegre, Brazil, 2019. /files/Illumiwear_NIME_2018.pdf


PsycheVR: Virtual Reality and Biofeedback to Promote Wellness in Astronauts


Watch PsycheVR Demo Video

PsycheVR is a virtual reality and biofeedback project conducted in collaboration with the Space Medicine Laboratory at the Geisel School of Medicine and the National Aeronautics and Space Administration (NASA). The project consists of developing virtual reality content to promote relaxation when the user is confined to small, isolated spaces for long durations of time. The products of this project were used in trial experimentation with NASA astronauts intended for future missions to Mars. We also prototyped biofeedback mechanisms that take in a user's biometric state, as determined by GSR, EEG, facial expression, and other stress indicators, and subsequently adjust the content of the VR environment to assist in calming the user.
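The biofeedback loop described above can be sketched as a two-step mapping: fuse normalized stress indicators into one score, then drive environment parameters from it. The indicator weights and ambience parameters here are hypothetical, chosen only to illustrate the control flow:

```python
def stress_index(gsr, eeg_beta_alpha, facial_tension, weights=(0.4, 0.4, 0.2)):
    """Combine normalized (0-1) stress indicators into a single 0-1 score.
    Indicators and weights are illustrative, not from the project."""
    return sum(w * x for w, x in zip(weights, (gsr, eeg_beta_alpha, facial_tension)))

def adapt_environment(stress):
    """Map the stress score to hypothetical VR ambience parameters."""
    return {
        "audio_tempo_bpm": 70 - 20 * stress,   # slow the soundtrack when stressed
        "light_warmth": 0.5 + 0.5 * stress,    # warmer, softer lighting
        "scene_density": 1.0 - 0.6 * stress,   # fewer moving scene elements
    }

print(adapt_environment(stress_index(1.0, 1.0, 1.0)))
```

In the actual prototype the mapping would be tuned per user; the point of the sketch is the closed loop from biometrics to rendered content.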

Synapstraction: Brain-Computer Interaction for Tangible Abstract Painting


Watch Synapstraction Demo Video

Synapstraction is a brain-computer interaction project that allows users to create an abstract painting based on neurofeedback. The system uses a special headset that measures electrical activity at a person's scalp, processed by machine learning algorithms to discriminate a stimulus's effect on the brain. Each visitor is invited to enter the installation and approach one of five "sense" stations, each with an electroencephalography (EEG) headset. We then measure the event-related potential (ERP) elicited by the visitor's brain when the visitor receives one of five stimuli. These five stimuli correspond to the five senses (sight, touch, taste, sound, smell). In addition, each stimulus is mapped to a specific creative instrument such as a paintbrush, sponge, or marker. Once the visitor has secured their headset, they are presented with a stimulus (e.g., a Chopin nocturne at the "sound" station or freshly ground clove at the "scent" station). Our system then uses a machine learning method called linear discriminant analysis to map the activity of the visitor's brain while experiencing the stimulus to acoustic frequencies, which actuate the painting implements. After visiting each of the five "sense" stations within the installation, the participant is invited to keep their finished painting.

The participant's brain serves as a conduit, translating the stimulation of each sense into the finished image. In this way, the usual methodology of an artist using their senses to create a media object is inverted; the senses use the artist to create an image. Synapstraction largely takes its aesthetic interests from the abstract expressionists of the 20th century and its conceptual framework from aleatory artists such as John Cage. Unlike Cage, however, Synapstraction maps all senses to image, and renders the consumption of sensory stimuli as an act of image creation.

The material of this artwork doesn't necessarily lie in the paintings themselves, nor in the equipment used in the installation, but instead rests in the speculative reconsideration of potential alternative roles for human senses in art making. This project premiered during the Digital Arts Festival at the Black Visual Arts Center in 2017.

Press: Junction Magazine
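For readers unfamiliar with the linear discriminant analysis step mentioned above, here is a minimal Fisher LDA on synthetic two-class "EEG feature" data; the features, class means, and toy data generator are invented for the sketch, and the resulting score is the kind of scalar that could be mapped to an actuation frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D feature vectors for two stimulus-response classes;
# real features would come from ERP epochs, not this toy generator.
class0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
class1 = rng.normal([2.0, 1.5], 0.5, size=(50, 2))

# Fisher's linear discriminant: project onto w = Sw^-1 (mu1 - mu0)
mu0, mu1 = class0.mean(axis=0), class1.mean(axis=0)
Sw = np.cov(class0, rowvar=False) + np.cov(class1, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2   # decision boundary at the midpoint projection

def discriminate(x):
    """Classify a feature vector by its projection onto the discriminant axis."""
    return int(w @ x > threshold)

print(discriminate(np.array([0.1, -0.1])), discriminate(np.array([2.1, 1.4])))
```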

The Gender Generator: Towards a Brain-Computer Interaction to Evoke Gender Dysphoria Symptoms


Watch Gender Generator Demo Video

Science and technology have profoundly affected our abilities to observe, transform, and manipulate bodily functions, as well as our concepts of the body. Recent research into empathic technologies and empathic embodied technologies suggests that our embodied experience through virtual reality strongly influences our cognitive state and social biases. However, this trend of examining the effect embodied experience has on the mind is complicated when the target group of the empathic exercise may experience a disconnection between their physical body and their experienced body. Recent investigations indicate a potential biological basis for gender dysphoria. This raises the question of how to create empathic technologies that explore the embodied experience of a transgender person, when a transgender person may not necessarily identify with their own embodied experience. To explore this, we present three brain-computer interfaces (BCIs) for the development of empathic technologies beyond immersive embodiment. Hex Plexus investigates BCI music-composition practices as the basis for a series of mobile performances. Synapstraction is an installation that allows users to create abstract paintings based on neurofeedback. The Gender Generator allows users to explore gender expression and construction in the form of a P300-based BCI paradigm. We conducted a series of user studies to understand the effectiveness of these machines as expressive tools and educational experiences. These projects address the notion of empathic technologies for body-dysmorphic users by building toward a Machine-Empathy Interface system, which serves to create an empathic link between users.
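As background on the P300 paradigm mentioned above: the P300 is an event-related potential peaking roughly 300 ms after a rare, attended stimulus, and the standard way to reveal it is to average many stimulus-locked EEG epochs so uncorrelated background noise cancels. The sampling rate, latency, and amplitudes below are illustrative placeholders on synthetic data:

```python
import numpy as np

FS = 250                                # Hz, a common EEG sampling rate
t = np.arange(0, 0.8, 1 / FS)           # 0-800 ms post-stimulus
# Synthetic P300: a Gaussian bump peaking near 300 ms (amplitudes arbitrary)
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

rng = np.random.default_rng(1)
epochs = p300 + rng.normal(0, 10.0, size=(100, t.size))  # 100 noisy trials

erp = epochs.mean(axis=0)               # averaging shrinks noise ~1/sqrt(N)
peak_ms = 1000 * t[np.argmax(erp)]
print(f"ERP peak at ~{peak_ms:.0f} ms") # recovered near the 300 ms latency
```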

Recommended citation: Josh Urban Davis. "The Gender Generator: Towards a Brain-Computer Interaction to Evoke Gender Dysphoria Symptoms." In Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '18). Montreal, QC, 2018.

