Description
Title: Design of inertial and camera sensing support for smart intersections
Date Created: 2017
Other Date: 2017-10 (degree)
Extent: 1 online resource (xviii, 127 p. : ill.)
Description: Modern cities are alive with sensors, including but not limited to smartphones, cameras, vehicles, and wearable devices. Contrary to the popular belief that the evolution of smart cities requires an overhaul of advanced sensors across our cities, this dissertation presents techniques that enable existing sensing devices to expand their role and support novel smart city contexts. We undertake the task of supporting a diverse set of applications, ranging from large-scale video analytics to pedestrian safety, on a heterogeneous assembly of sensors. Motivated by rising pedestrian fatalities in our cities, we investigate the performance of GPS-based approaches for determining pedestrian risk in dense urban environments. To address their inadequacy, we introduce a novel outdoor surface profiling technique that uses shoe-mounted inertial sensors to classify location based on surface gradient profiles and step patterns. We seek to detect transitions from sidewalk locations to in-street locations, enabling applications such as alerting texting pedestrians when they step into the street. We achieve transition detection rates higher than 95%, even in the intricate midtown Manhattan pedestrian environment. Further, we extend this capability to mobile cameras and explore how well commercial off-the-shelf smartphone cameras can learn texture to distinguish among paving materials in uncontrolled outdoor urban settings. We devise an approach that performs material recognition on the pedestrian's walking surface with more than 85% accuracy, identifying safe and unsafe walking locations. Finally, to advance the state of video analytics in smart cities, we build a virtualization system that allows public cameras to support multiple applications simultaneously. We introduce the concept of mobility awareness, which enables these otherwise static cameras to pan, tilt, and zoom to capture events of interest. This improves immensely upon the current state of the art, in which traffic operators manually examine live video streams. Experiments with a live camera setup demonstrate that we can support multiple applications simultaneously, capturing up to 80% more events of interest in a wide scene compared to a fixed-view camera. This work is based on the insight that relative positions and motion patterns are crucial for generating safety context and meaningful analytics at traffic intersections. Furthermore, we demonstrate the efficacy of our approaches by building end-to-end systems, which required exhaustive real-world data collection in complex metropolitan environments such as New York, London, and Paris, supported by rigorous testing of the solutions' scalability.
Note: Ph.D.
Note: Includes bibliographical references
Note: by Shubham Jain
Genre: theses, ETD doctoral
Language: eng
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.