TY - JOUR
TI - Optimal data utilization for goal-oriented learning
DO - 10.7282/T3XP772T
PY - 2016
AB - We are interested in the problem of utilizing collected data to inform and direct learning towards a stated goal. In this work, a controller is presented with a finite set of actions that may be taken sequentially (and repeatedly) towards the achievement of some goal. While the outcome of any action is stochastic, the result provides information about future results of that action, and potentially of others. By following a rule or control policy, the controller wishes to sequentially take actions, collect information, and utilize it in future action decisions, in such a way as to approach the stated goal. In the first model, at least one action is 'best', and the goal is to identify and take such an action as frequently as possible. This requires learning the actions' underlying dynamics from repeated observations of their stochastic results; it encapsulates the classic 'exploration vs. exploitation' trade-off: whether to test many actions, or to take only the action currently believed to be best. We derive asymptotic lower bounds on how effective any universally good policy can be, as a function of initial knowledge. Additionally, we define a generic control policy and conditions under which it is provably asymptotically optimal, and give a number of examples to illustrate the scope and application of the model. In the second model, the goal is to maximize some utility of all actions taken, e.g., the total expected reward collected. Additionally, each action has an associated breaking or halting time which, if reached, ends the control process. This again captures the 'exploration vs. exploitation' trade-off, as the controller must balance the reward of any one action against the risk of halting and the loss of opportunity for future rewards. As the goal depends on the actual results achieved, there is generally no single 'best' action as in the previous model. In many contexts, we derive a dynamic 'action valuation' scheme that gives rise to an optimal control policy.
KW - Mathematics
LA - eng
ER -
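
The first model described in the abstract is a multi-armed bandit problem. The work's own lower bounds and generic policy are not reproduced in this record; as a minimal sketch of the exploration-vs.-exploitation trade-off it refers to, the Python snippet below runs the classical UCB1 index rule on Bernoulli actions. The arm means, horizon, and seed are illustrative assumptions, not values from the work.

import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Play Bernoulli arms with the classical UCB1 index rule.

    This is a generic illustration of an index policy, not the
    (unspecified here) policy defined in the dissertation.
    """
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms    # times each action has been taken
    totals = [0.0] * n_arms  # sum of observed rewards per action
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # take each action once to initialize
        else:
            # Index = empirical mean + confidence bonus. The bonus shrinks
            # as an action is sampled more, shifting play from exploration
            # toward exploitation of the empirically best action.
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Example: play should concentrate on the 0.7 arm over time.
print(ucb1([0.3, 0.5, 0.7], horizon=10_000))

The confidence bonus forces every action to be tested infinitely often while letting play concentrate on the action currently believed to be best, which is the balance the abstract describes.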
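
For the second model, a known-parameters simplification can make the 'action valuation' idea concrete. Assume, purely for illustration, that each play of action a yields expected reward r[a] and independently halts the entire process with probability p[a]; under that geometric-halting assumption, a standard exchange argument values each action at r[a] / p[a], and repeatedly taking the highest-valued action is optimal. The dissertation's valuation scheme is more general and is not reproduced here.

def action_values(rewards, halt_probs):
    """Value each action at expected reward per unit of halting risk.

    Valid only under the illustrative assumption that every play of
    action a earns rewards[a] in expectation and halts the whole
    process with probability halt_probs[a], independently of history.
    """
    return [r / p for r, p in zip(rewards, halt_probs)]

def best_action(rewards, halt_probs):
    # With known parameters and memoryless halting, the state never
    # changes, so the optimal policy repeats the highest-valued action.
    vals = action_values(rewards, halt_probs)
    return max(range(len(vals)), key=vals.__getitem__)

# Example: action 1 earns less per play but carries much lower halting
# risk, so its valuation (~30) exceeds action 0's (~10).
print(action_values([1.0, 0.6], [0.10, 0.02]))
print(best_action([1.0, 0.6], [0.10, 0.02]))  # 1

This toy case has a single 'best' action only because the parameters are known; the abstract's point is that once results must be learned from observation, the valuation becomes dynamic, updating as outcomes are observed.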