Abstract
Acoustic frequencies, commonly approximated as spanning 20Hz to 20kHz, hold great potential for wireless sensing in the mobile Internet-of-Things (IoT). However, modern mobile IoT has underutilized this spectrum relative to higher frequency bands (i.e., MHz and above), leaving much of this potential untapped. Inaudible acoustic sensing is both possible and practical on mobile devices (e.g., smartphones, smartwatches, tablets) for myriad essential daily functions, including facilitating and securing telecommunications through user verification. Such feats are possible due to the propagation behaviors of acoustic frequencies near the thresholds of human hearing (i.e., under 500Hz or over 16kHz) when travelling through air and solids. Acoustic signals are attenuated by the materials they propagate through. Ordinarily regarded as interference, this attenuation can also reveal information about the propagation medium: not only whether it is a human body, but which specific body it is. Thus, we enable mobile devices to identify users from mere
physical contact and respond accordingly, such as by locking or unlocking access to data. This dissertation demonstrates these ideas by studying acoustic behavior on mobile devices and acoustic responsiveness to different user hands and bodies. We first investigate the versatility of inaudible acoustic frequencies and their aptitude for transferring information between transmitter and receiver sensors. We theoretically model speaker non-linearity and transmission power, designing communication schemes that utilize two speakers to achieve inaudibility. At the receiver side, we double the coefficient of the received signal strength by leveraging microphone non-linearity. Experimental results suggest that our system can achieve over 2m range and over 17kbps throughput, attaining longer range and/or higher throughput than similar works while remaining inaudible. We then study the ability of acoustic signals to capture user-specific information when travelling through the hand that holds the device. We propose a non-intrusive hand sensing technique that derives unique acoustic features in both the time and frequency domains, effectively capturing the physiological and behavioral traits of a user’s hand (e.g., hand contours, finger sizes, holding strengths, and holding styles). Learning-based algorithms are developed to robustly identify the user under various environments and conditions. We conduct extensive experiments with 20 participants, gathering 80,000 hand geometry samples using different smartphone and tablet models across 160 key use case scenarios. Our system identifies users with over 94% accuracy, without requiring any
active user input. Having verified the concept on smartphones, we then extend the study to smartwatches, which possess considerably less powerful sensors and new design constraints. Our redesigned system employs a challenge-response process to passively capture behavioral and physiological biometrics from an unobtrusive touch gesture using low-fidelity acoustic and vibration smartwatch sensors. We develop a cross-domain sensing technique (i.e., measuring acoustic signals in the vibration domain) to capture robust and effective features specific to user fingers. A low-cost, profile-matching-based classifier is designed to enable stand-alone user authentication on smartwatches. Experiments with 54 participants under varied hardware, environments, noise levels, user motions, and other impact factors achieve around a 97% true positive rate and a 2% false positive rate in user authentication. Finally, we explore how structural characteristics of the mobile device can heighten the sensitivity of acoustic sensing. We propose an acoustic sensing system for smartphones that leverages smartphone cases modified with internal mini-structures to capture fingertip biometric information. The design of the mini-structures allows developers to control the behavior of structure-borne sound such that unique responses are produced when different users and fingers touch the smartphone case at different locations. Experiments with 46 users over 10 weeks show that we can differentiate users with over 94% accuracy at a 5% false positive rate.