Description
Title: Enabling high-quality mobile immersive computing through edge support
Date Created: 2019
Other Date: 2019-10 (degree)
Extent: 1 online resource (xiv, 133 pages) : illustrations
Description: Emerging mobile immersive computing applications, such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), are changing the way human beings interact with the world. Such systems promise to provide unprecedented immersive experiences in the fields of video gaming, education, and healthcare. However, several key processes, such as rendering and object detection, are highly computationally intensive, which makes them extremely hard to run on mobile devices. Offloading these bottleneck processes to the edge or cloud is also very challenging due to the stringent requirements on high quality and low latency.
To achieve high quality and low latency for mobile immersive computing applications on thin mobile clients, the system must complete the entire offloading pipeline within a very short end-to-end latency. Offloading vision tasks to the edge cloud typically involves four main processes: sensing, uplink transmission, processing, and downlink transmission. These four processes form a round trip from the mobile device to the edge cloud and back. Compared to traditional offloading approaches that execute these processes sequentially, our key contribution is to design new video streaming and processing pipelines that significantly reduce the offloading latency and improve the vision quality of VR and AR systems.
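To make the pipelining idea concrete, the following is a minimal sketch (with made-up stage costs, not the dissertation's implementation) that splits a frame into slices so the four stages overlap across slices instead of running back-to-back over the whole frame:

    # Illustrative sketch: overlapping offloading stages across frame slices.
    # Stage costs are placeholder values, not measurements from this work.
    import threading, queue, time

    N_SLICES = 4
    SLICE_COST = {"encode": 0.004, "uplink": 0.003,
                  "process": 0.005, "downlink": 0.002}

    def stage(name, inbox, outbox):
        while True:
            item = inbox.get()
            if item is None:              # shutdown sentinel: forward and exit
                outbox.put(None)
                return
            time.sleep(SLICE_COST[name])  # simulate per-slice stage work
            outbox.put(item)

    def offload_frame_pipelined():
        qs = [queue.Queue() for _ in range(4)]
        done = queue.Queue()
        names = ["encode", "uplink", "process", "downlink"]
        outs = qs[1:] + [done]
        workers = [threading.Thread(target=stage, args=(n, i, o))
                   for n, i, o in zip(names, qs, outs)]
        for w in workers:
            w.start()
        t0 = time.perf_counter()
        for s in range(N_SLICES):         # feed slices; stages now overlap
            qs[0].put(s)
        qs[0].put(None)
        while done.get() is not None:     # drain until the sentinel arrives
            pass
        for w in workers:
            w.join()
        return time.perf_counter() - t0

    def offload_frame_sequential():
        t0 = time.perf_counter()
        for cost in SLICE_COST.values():  # whole frame through each stage in turn
            time.sleep(cost * N_SLICES)
        return time.perf_counter() - t0

    if __name__ == "__main__":
        print(f"sequential: {offload_frame_sequential()*1000:.1f} ms")
        print(f"pipelined:  {offload_frame_pipelined()*1000:.1f} ms")

With these placeholder costs, the sequential round trip takes roughly the sum of all stage times over the whole frame, while the sliced pipeline approaches the first slice's full traversal plus the slowest stage for each remaining slice.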
High-quality VR systems generate graphics data at rates much higher than those supported by existing wireless-communication products such as Wi-Fi and 60 GHz wireless links. The image encoding this necessitates makes it challenging to meet the stringent VR latency requirements. To address this issue, we introduce an end-to-end untethered VR system design and open platform that can meet virtual reality latency and quality requirements at 4K resolution over a wireless link.
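As a rough illustration of the bandwidth gap, the back-of-envelope calculation below compares the raw data rate of uncompressed 4K VR frames against nominal peak link rates; the refresh rate and link capacities are illustrative assumptions, not measurements from this work:

    # Why raw 4K VR frames cannot be sent uncompressed. Link rates are
    # nominal peak PHY rates; real achievable throughput is lower.
    W, H, BPP, FPS = 3840, 2160, 3, 90      # 4K RGB at a typical VR refresh rate
    raw_gbps = W * H * BPP * 8 * FPS / 1e9
    print(f"raw 4K@{FPS}fps: {raw_gbps:.1f} Gbps")

    for link, gbps in {"802.11ac Wi-Fi": 1.3, "802.11ad 60GHz": 6.8}.items():
        print(f"{link}: needs >= {raw_gbps / gbps:.0f}x compression "
              f"to fit nominal {gbps} Gbps")

Uncompressed 4K at 90 fps works out to roughly 18 Gbps, which is why encoding is unavoidable even on 60 GHz links, and why the encoding latency itself becomes the problem to engineer around.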
Most existing Augmented Reality (AR)/Mixed Reality (MR) systems are able to understand the 3D geometry of their surroundings but lack the ability to detect and classify complex objects in the real world. Such capabilities can be enabled with deep Convolutional Neural Networks (CNNs), but it remains difficult to execute large networks on mobile devices. Offloading object detection to the edge or cloud is also very challenging due to the stringent requirements on high detection accuracy and low end-to-end latency. The long latency of existing offloading techniques can significantly reduce detection accuracy, because the user's view changes while results are in flight. To address this problem, we design a system that enables high-accuracy object detection for commodity AR/MR systems running at 60 fps.
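One common way to mask this latency, sketched below under simplifying assumptions (a pure 2D-translation motion model and hypothetical tracked feature points; not necessarily the method used in this work), is to keep rendering at the full frame rate locally and warp the most recent detection results by the camera motion observed since the offloaded frame was captured:

    # Illustrative sketch: compensating stale detections for camera motion.
    import numpy as np

    def estimate_shift(pts_then, pts_now):
        """Median 2D displacement of feature points tracked across frames."""
        return np.median(pts_now - pts_then, axis=0)

    def warp_boxes(boxes, shift):
        """Translate stale [x1, y1, x2, y2] boxes by the estimated shift."""
        return boxes + np.tile(shift, 2)

    # Boxes detected on a frame sent to the edge ~100 ms ago ...
    stale_boxes = np.array([[100.0, 120.0, 220.0, 260.0]])
    # ... while tracked features say the view has since shifted right and down.
    then_pts = np.array([[50.0, 60.0], [300.0, 200.0], [180.0, 400.0]])
    now_pts = then_pts + np.array([12.0, 5.0])

    print(warp_boxes(stale_boxes, estimate_shift(then_pts, now_pts)))

A real system would use a richer motion model than a single translation, but the principle is the same: cheap local tracking bridges the gap until fresh detections return from the edge.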
Furthermore, we build EdgeSharing, an object-sharing system that leverages the large computational resources of the edge cloud. Beyond providing an object detection service to nearby mobile clients, EdgeSharing maintains a real-time 3D feature map of its coverage region on the edge cloud and uses it to provide accurate localization and object-sharing services to client devices passing through the region. By sharing a moving object's location between different camera-equipped devices, it effectively extends each participant's vision beyond their own field of view. We further propose several optimization techniques to increase localization accuracy, reduce bandwidth consumption, and decrease the offloading latency of the system. Our results show that EdgeSharing achieves high localization and object-sharing accuracy with low bandwidth and latency requirements.
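The localization step such a map enables can be sketched as standard 3D-2D pose estimation: given client keypoints matched against the edge-hosted 3D feature map, recover the client pose with RANSAC PnP. The matching procedure, map format, and camera intrinsics here are illustrative assumptions rather than EdgeSharing's actual design:

    # Illustrative sketch of map-based localization and object sharing.
    import numpy as np
    import cv2

    def localize_client(pts3d, pts2d, K):
        """Estimate a client's camera pose from 3D-2D matches via RANSAC PnP."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float32), pts2d.astype(np.float32), K, None)
        if not ok:
            raise RuntimeError("pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)      # rotation matrix from rotation vector
        return R, tvec                  # client pose in the map frame

    def project_object(obj3d, R, tvec, K):
        """Project a shared object's 3D map location into a client's view."""
        pix, _ = cv2.projectPoints(obj3d.reshape(1, 3).astype(np.float32),
                                   cv2.Rodrigues(R)[0], tvec, K, None)
        return pix.ravel()              # pixel location in that client's frame

Once every client is localized in the same map frame, an object observed by one device has 3D coordinates that project directly into any other localized client's view, which is the essence of sharing objects beyond a single field of view.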
Note: Ph.D.
Note: Includes bibliographical references
Genre: theses, ETD doctoral
Language: English
Collection: School of Graduate Studies Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work.