Researchers have long sought to develop intelligent systems that behave like humans by autonomously making accurate and reasonable decisions for real-world tasks. This goal has become achievable with the help of advanced artificial intelligence (AI), especially deep learning, which is known for its superior representation and predictive power. Deep learning based decision-making systems have proven surprisingly effective at delivering accurate predictions, but at the price of explainability, owing to the “black-box” nature of deep neural networks. Explainability, however, plays a pivotal role in practical human-involved applications such as user modeling, digital marketing, and e-commerce platforms. Explanations not only help model developers understand and debug the decision-making process, but also foster engagement and trust among the end users who consume the system’s results. In this thesis, we concentrate on one category of explainable decision-making system that relies on external heterogeneous graphs to generate accurate predictions accompanied by faithful and comprehensible explanations, an approach known as neural graph reasoning for explainable decision-making. Unlike existing work on explainable machine learning, which mainly yields model-agnostic explanations for deep neural networks, we develop intrinsically interpretable models based on graphs that guarantee both accuracy and explainability. Meaningful and versatile graph structures (e.g., knowledge graphs) have been shown to improve model performance and, more importantly, make it possible for an intelligent decision-making system to conduct explicit reasoning over graphs to generate predictions. 
The benefit is that the resulting graph paths can be directly regarded as explanations for the predictions, because the traceable facts along each path reflect the decision-making process and are easily understood by humans. To this end, our goal is to develop neural graph reasoning approaches that generate such path-based explainable results by marrying the predictive power of deep neural models with the interpretability of graph structures. Specifically, we propose four methods from different perspectives: (i) a fundamental graph reasoning framework based on reinforcement learning, (ii) a neural-symbolic model featuring self-explaining, compositional neural symbolic modules, (iii) a neural logic model that explicitly learns personalized and explainable reasoning rules, and (iv) an imitation learning based method that learns to distinguish the quality of explainable paths from demonstrations. These approaches are extensively evaluated on real-world benchmarks across applications such as e-commerce recommendation and column annotation in digital marketing. The experimental results demonstrate that the proposed methods achieve satisfactory prediction accuracy while providing users with faithful and understandable path-based explanations.
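To make the central idea concrete, the following is a minimal sketch of how a knowledge-graph path can serve as an explanation for a recommendation. The entities, relations, and triples here are hypothetical toy data, and a plain breadth-first search stands in for the learned reasoning models described above; the thesis methods learn which paths to follow, whereas this sketch only illustrates why a path is human-readable evidence for a prediction.

```python
# Minimal sketch (toy data, hypothetical names): a knowledge-graph path as
# an explanation for recommending item_2 to user_A.
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples -- illustrative only.
triples = [
    ("user_A", "purchased", "item_1"),
    ("item_1", "produced_by", "brand_X"),
    ("brand_X", "produces", "item_2"),
]

# Build an adjacency list over the triples.
graph = {}
for h, r, t in triples:
    graph.setdefault(h, []).append((r, t))

def explain(source, target, max_hops=3):
    """Return one path of (entity, relation, entity) hops from source to
    target, usable as a human-readable explanation, or None if no path
    exists within max_hops."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        if len(path) >= max_hops:
            continue  # do not expand beyond the hop budget
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

path = explain("user_A", "item_2")
# Each hop is a traceable fact; joined together they read as an explanation:
# user_A -purchased-> item_1 -produced_by-> brand_X -produces-> item_2
print(" -> ".join(f"{h} -{r}-" for h, r, _ in path) + f"> {path[-1][2]}")
```

In the actual models, a policy (e.g., learned via reinforcement learning) replaces the exhaustive search and scores candidate hops, but the final output has the same shape: a sequence of graph facts presented to the user as the rationale for the prediction.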