Recently, there has been extensive interest in developing intelligent human-centered AI (artificial intelligence) systems that support human participation so as to facilitate cooperation between humans and machines. As a typical decision-making paradigm in AI, recommender systems, which have become an integral part of our lives, are a particularly pervasive form of AI system that can aid decision-making in the face of ever-growing amounts of information. With the help of advanced artificial intelligence, modern deep learning based recommender systems, known for their superior representation and predictive power, have made great strides in accuracy and effectiveness. Meanwhile, this progress also raises a number of important challenges: 1) How can we actively incorporate human participation into the decision-making procedure of recommender systems? The aim is to integrate human participation as guidance, keeping the decision-making process consistent with human feedback and maintaining trustworthiness to human beings. 2) How can we ensure that explanations are provided so that users can better understand why particular items are being recommended? In this respect, explainable recommendation can be leveraged not only to help the agent provide high-quality recommendation results but also to offer personalized and intuitive explanations with better user engagement, which is important for many modern recommender systems such as e-commerce and social media platforms. 3) How can we alleviate biases in recommender systems? Little progress has been made toward mitigating the biases that arise in human-centered recommender systems and hurt user satisfaction and trust in the recommendation service. In this thesis, we propose several novel methods to fill these gaps.
In particular, for improved human understanding, we introduce an adversarial semantic learning framework for understanding in cross-lingual settings. For human integration, we introduce a human-in-the-loop conversational recommender system with an external graph structure. To ensure fair explanations, we mitigate the unfairness within graph-based explainable reasoning in the recommender system. Finally, for human-system cooperation, we present a popularity debiasing framework that integrates user interaction and debiased dialogue state management in a conversational recommender system. We not only extensively evaluate our proposed approaches on multiple real-world recommendation datasets, but also contribute open public datasets to the community. The experimental results demonstrate the effectiveness of the proposed methods in achieving satisfactory prediction accuracy, mitigating bias, and providing users with understandable explanations.