In this work, we explore the fundamental problems associated with Photographic Steganography, the process of discreetly sending information camouflaged in natural images from an electronic display to a camera. Broadly stated, the goals are to minimize the perceived visual impact of adding a new message to an image while simultaneously maximizing the ability to accurately recover this message camera-side. This process is complicated by the photometric and radiometric effects of cameras and electronic displays, as well as their relative geometry and the illumination conditions. In Chapter 2, we model these effects jointly as a Camera-Display Transfer Function (CDTF) and introduce two online radiometric calibration techniques to mitigate its effects. In Chapter 3, we extend photographic steganography by modeling and predicting color shifts that minimize perceptual impact while maximizing accurate camera-side recovery. In Chapter 4, we use deep convolutional neural networks to jointly learn a steganographic embedding and recovery algorithm that requires no multi-frame synchronization, one of the most significant practical barriers to success for photographic steganography. All of the proposed techniques have been implemented in real-time demonstrations using consumer-grade displays and smartphone cameras. This body of work represents a fundamental contribution to the field of camera-display communication and photographic steganography. Finally, Chapter 5 explores how computer vision techniques can be extended to monostatic radar for shape recognition.