From the gestures that accompany speech to images in social media posts, humans effortlessly combine words with visual presentations. Communication succeeds even though visual and spatial representations are not necessarily bound to syntax and conventions, and do not always replicate appearance. Machines, however, are not equipped to understand and generate such presentations, because people pervasively rely on commonsense and world knowledge when relating words to external presentations. I show the potential of discourse modeling for solving the problem of multimodal communication. I start by presenting a computational model for diagram understanding that extends linguistic accounts to learn the interpretation of schematic elements such as arrows. I then present a novel framework for modeling and learning a deeper combined understanding of text and images by classifying inferential relations to predict temporal, causal, and logical entailments in context. This enables systems to make inferences with high accuracy while revealing author expectations and social-context preferences. I proceed to design methods for generating text from visual input that use these inferences to provide users with key requested information. The results show a dramatic improvement in the consistency and quality of the generated text, cutting spurious information by half. Finally, I describe the design of two multimodal interactive systems that reason about the context of interactions in human-robot collaboration and conversational artificial intelligence, and I present my research vision: to build human-level communicative systems and grounded artificial intelligence by leveraging the cognitive science of language use.