Description

This work analyzes an optimal control problem in which performance is measured by a dynamic risk measure. While dynamic risk measures in discrete time and the associated control problems are well understood, the continuous-time framework brings great challenges in both theory and practice. This study addresses modeling, numerical schemes, and applications.

In the first part, we focus on the formulation of a risk-averse control problem. Specifically, we use a decoupled forward-backward system of stochastic differential equations to evaluate a fixed policy: the forward stochastic differential equation (SDE) characterizes the evolution of the state, and the backward stochastic differential equation (BSDE) provides the risk evaluation at every instant of time. Relying on the Markovian structure of the system, we obtain the corresponding dynamic programming equation via both the weak and the strong formulation; in addition, the risk-averse Hamilton-Jacobi-Bellman (HJB) equation and the corresponding verification result are derived under suitable assumptions.

In the second part, the main thrust is to find a convergent numerical method for solving the system in a discrete-time setting. Specifically, we construct a piecewise-constant Markovian control and show that it is arbitrarily close to the optimal control. The results rely heavily on the regularity of the solution to the generalized Hamilton-Jacobi-Bellman PDE.

In the third part, we propose a numerical method for the risk evaluation defined by the BSDE. Using the dual representation of the risk measure, we convert the risk evaluation into a stochastic control problem in which the control is the Radon-Nikodym derivative process. The optimality conditions of this control problem enable us to use a piecewise-constant density (control) to obtain a close approximation on a short interval; the Bellman principle then extends the approximation to problems on any finite time horizon. Lastly, we present a financial application to risk management in conjunction with nested simulation.
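
For concreteness, the decoupled forward-backward system used to evaluate a fixed policy can be sketched in a generic form as follows; the coefficients $b$, $\sigma$, the driver $g$, and the terminal cost $\Phi$ are illustrative placeholders rather than the notation of this work:

$$
\begin{aligned}
dX_t &= b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t, & X_0 &= x_0,\\
-\,dY_t &= g(t, X_t, Y_t, Z_t)\,dt - Z_t\,dW_t, & Y_T &= \Phi(X_T).
\end{aligned}
$$

Here $Y_t$ is read as the time-$t$ risk evaluation of the terminal cost $\Phi(X_T)$, and $Z_t$ is the martingale-representation term delivered by the BSDE.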
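
A generalized Hamilton-Jacobi-Bellman equation of the following form is commonly associated with such Markovian forward-backward systems; it is written here purely as an assumed sketch for a minimization problem with value function $v$, and the precise assumptions and sign conventions of the thesis may differ:

$$
\partial_t v(t,x) + \inf_{u\in U}\Big\{ b(t,x,u)^{\top}\nabla_x v(t,x) + \tfrac12\operatorname{tr}\!\big[\sigma\sigma^{\top}(t,x,u)\,\nabla_x^2 v(t,x)\big] + g\big(t,x,v(t,x),\,\sigma(t,x,u)^{\top}\nabla_x v(t,x),\,u\big)\Big\} = 0,
\qquad v(T,x) = \Phi(x).
$$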
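
The piecewise-constant Markovian control of the second part can be illustrated by a minimal simulation sketch: the feedback policy is evaluated only at grid points and frozen on each subinterval. The dynamics and the policy below are hypothetical stand-ins, not those studied in the thesis.

    import numpy as np

    # Sketch: Euler-Maruyama simulation of a controlled SDE under a
    # piecewise-constant Markovian control.  All coefficients are illustrative.
    rng = np.random.default_rng(1)

    T, n_steps, n_paths = 1.0, 50, 10_000
    dt = T / n_steps

    def policy(t, x):
        """Hypothetical feedback rule standing in for a (near-)optimal policy."""
        return np.clip(1.0 - x, -1.0, 1.0)

    def b(x, u):      # drift, illustrative
        return u - 0.5 * x

    def sig(x, u):    # diffusion, illustrative
        return 0.3 * (1.0 + 0.1 * np.abs(u))

    x = np.zeros(n_paths)
    for k in range(n_steps):
        t_k = k * dt
        u = policy(t_k, x)                      # frozen on [t_k, t_{k+1})
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + b(x, u) * dt + sig(x, u) * dw   # Euler-Maruyama step

    print("terminal state mean/std:", x.mean(), x.std())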
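
The dual representation invoked in the third part can be sketched, for a coherent risk measure generated by a driver that is convex and positively homogeneous in $z$, roughly as follows (again with assumed, illustrative notation):

$$
Y_t = \operatorname*{ess\,sup}_{\theta \in \mathcal{A}} \; \mathbb{E}^{Q^{\theta}}\!\left[\Phi(X_T) \,\middle|\, \mathcal{F}_t\right],
\qquad
\frac{dQ^{\theta}}{dP} = \exp\!\left(\int_0^T \theta_s\, dW_s - \tfrac12 \int_0^T |\theta_s|^2\, ds\right),
$$

where the admissible set $\mathcal{A}$ of Girsanov kernels is determined by the driver of the BSDE. Restricting $\theta$ to be piecewise constant on a short interval and optimizing over its constant values yields the kind of computable approximation described above.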
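
Finally, the nested-simulation idea mentioned in the financial application can be sketched as follows. The geometric Brownian motion dynamics, the put-style loss, and the use of CVaR as the inner risk measure are assumptions made only for this illustration.

    import numpy as np

    # Sketch: nested simulation of a time-t risk evaluation.
    # Outer stage simulates the state at time t; inner stage simulates the
    # terminal loss conditionally and applies an inner risk measure (CVaR).
    rng = np.random.default_rng(0)

    mu, sigma = 0.05, 0.2          # hypothetical drift / volatility
    x0, t, T = 100.0, 0.5, 1.0     # initial state, evaluation time, horizon
    alpha = 0.95                   # CVaR confidence level
    n_outer, n_inner = 1_000, 5_000
    strike = 100.0                 # hypothetical strike for the loss profile

    def cvar(losses, alpha):
        """Empirical CVaR: average of losses at or beyond the alpha-quantile."""
        var = np.quantile(losses, alpha)
        return losses[losses >= var].mean()

    # Outer stage: state at the evaluation time t.
    z_outer = rng.standard_normal(n_outer)
    x_t = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z_outer)

    # Inner stage: conditional terminal losses and their risk, per outer scenario.
    y_t = np.empty(n_outer)
    for i, x in enumerate(x_t):
        z_inner = rng.standard_normal(n_inner)
        x_T = x * np.exp((mu - 0.5 * sigma**2) * (T - t)
                         + sigma * np.sqrt(T - t) * z_inner)
        losses = np.maximum(strike - x_T, 0.0)   # put-style loss, illustrative
        y_t[i] = cvar(losses, alpha)

    print("mean of time-t risk:", y_t.mean())
    print("95th percentile of time-t risk:", np.quantile(y_t, 0.95))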