Bhowmik, Rajarshi. Neural methods for entity-centric knowledge extraction and reasoning in natural language. Retrieved from https://doi.org/doi:10.7282/t3-gk5z-3g02
Entities are the cornerstone of factual knowledge in natural language. Human verbal and written communication invariably refers to entities, their properties, and their relationships, and human reasoning often relies on a proper understanding of the relationships among entities. Representing entity-centric knowledge in a structured form is widely regarded as the most suitable way for machines to consume and reason about it. Over the past few decades, numerous methodological advances have been made in extracting entity-centric knowledge from unstructured and semi-structured sources, representing it as graph-structured data known as Knowledge Graphs, and applying these knowledge graphs to knowledge-intensive natural language processing tasks. Despite these advances, machines have yet to achieve human-level ability to extract factual knowledge, reason with it, and use it in such tasks. This dissertation proposes novel neural methods to narrow this gap. For factual knowledge extraction, it proposes efficient and effective methods for the tasks of Entity Linking and Relation Extraction. For knowledge-based logical reasoning, it proposes an explainable link prediction method for emerging entities in knowledge graphs. Finally, as a representative knowledge-intensive natural language processing task, it studies entity summarization: retrieving relevant facts and generating factually faithful textual descriptions of entities.