Google Glass and Wearable Computing Demonstrations

Speaker: Thad Starner
Date: February 9, 2014, 2:00pm
Location: Technology Square Research Building (TSRB), 2nd Floor, Room 243
85 5th Street NW, Atlanta, GA 30318

Presented by
The Contextual Computing Group

Thad Starner is the director of the Contextual Computing Group (CCG) and is also a Technical Lead/Manager on Google's Project Glass. In general, the CCG creates computational interfaces and agents for use in everyday mobile environments. We combine wearable and ubiquitous computing technologies with techniques from the fields of artificial intelligence (AI), pattern recognition, and human-computer interaction (HCI). We continually develop new interfaces for mobile computing (and mobile phones) with an emphasis on gesture. Below are some of the projects we are currently exploring.

Contacts:  Thad Starner, Professor (thad@gatech.edu)
Ed Price, Director of Research Development and Partnerships (ed.price@gatech.edu) - 404-889-5956

 
Demonstrations
CHAT - A Dolphin Interaction Wearable
 
Faculty: Thad Starner, Peter Presti,  Scott Gilliland
Students: Daniel Kohlsdorf, Celeste Mason, Stewart Butler
CHAT (Cetacean Hearing Augmentation & Telemetry) is a wearable underwater computer system engineered to assist researchers in establishing two-way communication with dolphins. The project seeks to facilitate the study of marine mammal cognition by providing a waterproof mobile computing platform. An underwater speaker and keyboard enable the researchers to generate whistles. The system is equipped with a two-channel hydrophone array used for localization and recognition of specific responses, which are translated into audio feedback. The current system is the result of multiple field tests, guided by the researchers' feedback and the environmental constraints.
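The two-channel localization step can be illustrated with a standard cross-correlation sketch. This is not the CHAT code; the function, sampling rate, and synthetic whistle below are assumptions for illustration only (Python/NumPy).

import numpy as np

def estimate_tdoa(left, right, sample_rate):
    """Return the arrival delay (seconds) of `right` relative to `left`."""
    corr = np.correlate(right, left, mode="full")   # cross-correlate the two channels
    lag = int(np.argmax(corr)) - (len(left) - 1)    # peak position -> lag in samples
    return lag / sample_rate

# Synthetic example: the same whistle arrives 0.5 ms later on the right channel.
fs = 96_000                                          # assumed hydrophone sampling rate
t = np.arange(0, 0.05, 1 / fs)
whistle = np.sin(2 * np.pi * (8_000 + 40_000 * t) * t)   # rising whistle-like sweep
delay = int(0.0005 * fs)
left = np.pad(whistle, (0, delay))
right = np.pad(whistle, (delay, 0))
print(f"estimated TDOA: {estimate_tdoa(left, right, fs) * 1e3:.2f} ms")

A positive delay means the sound reached the left hydrophone first, a cue that can be turned into a bearing toward the responding animal.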
CopyCat
 
Faculty: Thad Starner, Peter Presti
Students: Kareem Hemanshu, Zahoor Zafrulla
This project involves the design and evaluation of an interactive computer game that allows deaf children to practice their American Sign Language skills. The game includes an automatic sign language recognition component utilizing computer vision and wireless accelerometers. The project is a collaboration with Dr. Harley Hamilton at the Atlanta Area School for the Deaf.

 
DMITRI
 
Faculty: Thad Starner, Nate Heintzman (UCSD)
Students: Subrai Pai, Daniel Kohlsdorf, Andy Pruett, Aditya Tirodkar
The DMITRI pilot study demonstrated the feasibility of quantitative, real-world monitoring of important physiologic variables in individuals with Type 1 diabetes. The study emphasized collecting data relevant to diabetes management using common diabetes management technology, as well as continuous data from unobtrusive monitors for heart rate, physical activity, and sleep. The data were collected under real-world conditions from subjects with diabetes undertaking a variety of activities, including classroom learning sessions, episodes of moderate and vigorous exercise, meals, and sleep. We present interdisciplinary work on the DMITRI data set that combines the Diabetes Informatics and Analytics (DIAL) Lab at UCSD with pattern recognition and wearable computing research at Georgia Tech.
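As a simple illustration of working with such multimodal recordings, the sketch below (not the DMITRI pipeline; the sampling rates, column names, and data are assumed) aligns two monitor streams onto a common one-minute timeline so they can be analyzed jointly.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic streams: heart rate every 5 s, step counts every 60 s.
hr = pd.Series(rng.normal(75, 8, 720),
               index=pd.date_range("2014-02-09 08:00", periods=720, freq="5s"),
               name="heart_rate_bpm")
steps = pd.Series(rng.poisson(20, 60),
                  index=pd.date_range("2014-02-09 08:00", periods=60, freq="60s"),
                  name="steps_per_min")

# Resample both to one-minute bins and join on the shared timeline.
aligned = pd.concat([hr.resample("1min").mean(),
                     steps.resample("1min").sum()], axis=1)
print(aligned.head())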
Facilitating Interactions for Dogs with Occupations (FIDO)
 
Faculty:  Thad Starner, Melody Jackson, Clint Zeagler
 
The FIDO project is investigating wearable technology to enable better communication between working dogs and their handlers. Through sensors on an assistance dog or police dog vest, the dogs can transmit information such as an alert about their handler's medical condition (medical alert dog), whether steps go up or down (guide dog), or what class of explosive they have found (police dog).
Glass Display for Brain-Computer Interface
 
Faculty:  Melody Jackson
The GT BrainLab is researching whether Google Glass can be used as a stimulus for evoking specific, visually-oriented brain signals.  These brain signals can then be detected and used for control, such as driving a wheelchair for people with severe motor disabilities.
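The demo text does not specify the signal type, but one common approach to visually evoked control (stated here as an assumption, not as the GT BrainLab method) is to flicker a stimulus at a known rate and look for that frequency in the recorded signal. A minimal sketch with synthetic data:

import numpy as np

def dominant_stimulus(eeg, fs, candidate_hz):
    """Return the candidate flicker frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_hz]
    return candidate_hz[int(np.argmax(powers))]

# Synthetic 4-second trace containing a 15 Hz response plus noise.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 15 * t) + np.random.default_rng(1).normal(0, 1, t.size)
print("selected command frequency:", dominant_stimulus(eeg, fs, (12.0, 15.0)))

Each candidate frequency can then be bound to a control command, such as a wheelchair direction.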
 
Mobile Music Touch (MMT)
 
Faculty: Thad Starner 
Students: Caitlyn Seim
MMT consists of a wireless tactile glove, with a vibration motor for each finger, and a lightweight computing device such as an MP3 player or a smartphone. When instrumental music is played (such as piano or saxophone), the glove vibrates the fingers to indicate which fingers play which notes. Thus with MMT, users can hear a song and feel it playing on their hands. The MMT gloves have been demonstrated to teach rote motor tasks of the fingers through passive haptic learning (PHL). Passive learning is learning achieved while all conscious attention is devoted to another task, such as reading or taking a standardized test; PHL is passive learning taught through haptic stimulation. Wearers of the MMT glove can learn a simple piano melody, such as the first portion of "Amazing Grace," in 45 minutes of wear.

In addition to Passive Haptic Learning, MMT gloves can also be used for rehabilitation. The loss of hand function can severely interrupt a person's life, and hand rehabilitation can be a long, arduous process. In fact, many patients find traditional therapy exercises, such as squeezing an object for several hours a day or other simple strengthening exercises, monotonous and unmotivating. We therefore propose the Mobile Music Touch (MMT) system as an engaging, pervasive hand rehabilitation aid. The MMT system can augment the stimulation of the afferent (sensory) nerves, motivate patients to use their hands in a fun way, and teach them the enjoyable and relaxing skill of playing an instrument, which may further motivate long-term hand use.

Finally, we are researching the application of this tactile interface for teaching other skills, like typing Braille or stenography. Schools of stenography report up to 95% drop-out rates, and even expert users need to practice for hours a week to maintain competitive speeds. These applications aim to reduce practice times and drop-out rates for learning these crucial text entry methods.
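A minimal sketch of the core idea, assuming a hypothetical per-finger motor driver (this is not the MMT firmware; the melody, fingering, and timing below are illustrative only):

import time

# Note -> finger index (0 = thumb .. 4 = pinky) -> duration in seconds.
MELODY = [("G4", 0, 0.6), ("B4", 2, 0.3), ("D5", 4, 0.6), ("B4", 2, 0.6)]

def buzz_finger(finger, duration_s):
    """Stand-in for driving one of the glove's vibration motors."""
    print(f"vibrate finger {finger} for {duration_s:.1f}s")

def play_passively(melody):
    """Step through the melody, buzzing the finger that plays each note."""
    for note, finger, duration in melody:
        buzz_finger(finger, duration)
        time.sleep(duration)          # keep the vibration in time with the audio

play_passively(MELODY)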
Order Picking with Wearable Computers
 
Faculty: Thad Starner
Students: Shashank Raghu, Saad Ismail, Joseph Simoneau, Anhong Guo, Xiaohui Luo, Xuwen Xie
Warehouses throughout the world, nearly a million of them, distribute approximately $1 trillion in goods per year. Order picking, the process of collecting items from inventory and sorting them into orders for distribution, is one of the main activities performed in warehouses and accounts for about 60% of their total operational costs. Most orders are still picked by hand, often using paper pick lists. Our objective is to implement and compare various order-picking systems, including:
• Pick-By-Paper list
• Pick-By-Light
• Pick-By-Tablet
• Pick-By-HUD (Heads-Up Display)
Passive Haptic Learning via Wearable Computers


Faculty:  Thad Starner
Student:  Caitlyn Seim
Passive Haptic Learning (PHL) allows people to learn “muscle memory” through vibration stimuli without devoting attention to the stimulus.  Previous work on PHL taught users rote patterns of finger movements corresponding to piano melodies.  Expanding on this research, we are currently exploring the capabilities and limits of Passive Haptic Learning as we investigate whether more complex skills and meaning can be taught through wearable, tactile interfaces. 
 
Translation on Glass
 
Faculty: Thad Starner
Student:   Jay Zuerndorfer
 
Translation on Glass is Glassware plus a companion Android app that seamlessly translates a conversation between speakers of two different languages. One user speaks in their native language into the Android phone; the speech is transcribed, translated to English, and sent to the other user's Glass display. That user can then reply in English, and their speech is transcribed, translated back into the first user's native language, and shown on the Android device.
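A minimal sketch of the round trip described above, using hypothetical stand-in functions (transcribe, translate, send_to_glass, and show_on_phone are placeholders for illustration, not real Glass or Android APIs):

def transcribe(audio, language):
    """Stand-in for speech-to-text in the given language."""
    return f"<transcript of {len(audio)} bytes of {language} speech>"

def translate(text, source, target):
    """Stand-in for machine translation between two languages."""
    return f"<{text} translated {source}->{target}>"

def send_to_glass(text):
    """Stand-in for pushing text to the Glass wearer's display."""
    print("[Glass]", text)

def show_on_phone(text):
    """Stand-in for updating the Android companion app's screen."""
    print("[Phone]", text)

def relay_phone_to_glass(audio, native_language):
    # Phone user speaks their native language; the Glass wearer reads English.
    text = transcribe(audio, language=native_language)
    send_to_glass(translate(text, source=native_language, target="en"))

def relay_glass_to_phone(audio, native_language):
    # The Glass wearer replies in English; the phone user reads their own language.
    text = transcribe(audio, language="en")
    show_on_phone(translate(text, source="en", target=native_language))

relay_phone_to_glass(b"...", native_language="es")
relay_glass_to_phone(b"...", native_language="es")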
 
Tongue Magnet Interface (TMI)
 
Faculty: Thad Starner, Maysam Ghovanloo
Students: Himanshu Sahni, Pavleen Thukral
We seek to address the problem of performing continuous speech recognition where direct audio data is not available or is highly noisy.  Such a system is useful for individuals with disabilities that render them incapable of audible or intelligible speech (e.g., due to stroke, head injury, cerebral palsy, physical trauma or surgical removal of the larynx due to cancer). It is also potentially of interest to able-bodied individuals in situations where privacy is a concern or the environment is too noisy (e.g., firefighting or combat).
 
Our Tongue Magnet Interface (TMI) uses 3-axis magnetometers on Glass to measure the movement of a small magnet glued to the user's tongue. Such a system allows us to perform speech recognition without the user having to produce any sound, i.e., silent speech recognition.
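As an illustration of the sensing side (not the TMI recognizer; the sampling rate, window size, and data are assumptions), the sketch below turns a stream of 3-axis magnetometer samples into fixed-length windows of simple features that a silent-speech classifier could consume.

import numpy as np

def window_features(samples, window=50):
    """Split an (N, 3) sample stream into windows and return mean/std per axis."""
    n_windows = len(samples) // window
    trimmed = samples[: n_windows * window].reshape(n_windows, window, 3)
    return np.concatenate([trimmed.mean(axis=1), trimmed.std(axis=1)], axis=1)

# Synthetic 2-second recording at an assumed 100 Hz sampling rate.
rng = np.random.default_rng(2)
stream = rng.normal(0.0, 1.0, size=(200, 3))       # x, y, z field strength
features = window_features(stream)                  # shape: (4, 6)
print(features.shape)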
 

© Copyright 2013 Georgia Institute of Technology

Institute for People and Technology (IPaT) at Georgia Tech
75 5th St NW, 6th Floor, Suite 600
Atlanta, GA, 30308
404-894-IPAT (4728)
ipat@gatech.edu