Deep Learning and Reward Design for Reinforcement Learning
Monday, December 19, 2016
10:00am - 12:00pm
3725 Beyster Bldg.
About the Event
One of the fundamental problems in Artificial Intelligence is that of sequential decision making in a stochastic environment. Reinforcement Learning (RL) provides a set of tools for solving sequential decision problems. Although the theory of RL addresses a general class of learning problems with a constructive mathematical formulation, the challenges posed by the interaction of rich perception and delayed rewards in many domains remain a significant barrier to the widespread applicability of RL methods. The rich perception problem itself has two components: 1) the sensors at any time step do not capture all the information in the history of observations, leading to partial observability, and 2) the sensors provide very high-dimensional observations, such as images and natural language, that introduce computational and sample-complexity challenges for the representation and generalization problems in policy selection. The delayed reward problem, in which the effect of actions on future rewards is delayed in time, makes it hard to determine how to credit action sequences for reward outcomes. This dissertation offers a set of contributions that adapt the hierarchical representation learning power of deep learning to address rich perception in vision and text domains, and develops new reward design algorithms to address delayed rewards.

The first contribution is a new learning method for deep neural networks in vision-based real-time control. The method distills slow policies produced by Monte Carlo Tree Search (MCTS) into fast convolutional neural networks, which outperform the conventional Deep Q-Network. The second contribution is a new end-to-end reward design algorithm that mitigates delayed rewards for the state-of-the-art MCTS method. The algorithm converts visual perceptions into reward bonuses via deep neural networks, and optimizes the network weights end-to-end via policy gradient to improve the performance of MCTS.
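The distillation idea in the first contribution can be sketched in miniature: a fast "student" policy is trained by cross-entropy against the action distributions produced by a slow search procedure. This is only an illustrative toy, not the dissertation's implementation; a linear softmax model stands in for the convolutional network, and random visit counts stand in for real MCTS output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, d = 4, 8

# Synthetic stand-ins: feature vectors for "states" and MCTS visit counts.
states = rng.normal(size=(64, d))
visits = rng.integers(1, 50, size=(64, n_actions)).astype(float)
targets = visits / visits.sum(axis=1, keepdims=True)  # visit-count distributions

W = np.zeros((d, n_actions))  # toy linear policy standing in for the CNN

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, q):
    # Mean cross-entropy between target distributions p and model distributions q.
    return -np.mean(np.sum(p * np.log(q + 1e-12), axis=1))

lr = 0.5
losses = []
for _ in range(200):
    probs = softmax(states @ W)
    losses.append(cross_entropy(targets, probs))
    # Gradient of mean cross-entropy w.r.t. the logits is (probs - targets) / batch.
    grad = states.T @ (probs - targets) / len(states)
    W -= lr * grad
```

Once trained, the student produces an action distribution with a single forward pass, instead of the many simulations MCTS needs per decision.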
The third contribution extends the existing policy-gradient reward design method from single tasks to multiple tasks: reward bonuses learned from old tasks are transferred to new tasks to facilitate learning. The final contribution is an application of deep reinforcement learning to another type of rich perception, ambiguous text. A synthetic data set is proposed to evaluate querying, reasoning, and question-answering abilities, and a deep memory network architecture is applied to solve these challenging problems to a substantial degree.
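The reward-design idea behind the second and third contributions can be illustrated with a deliberately tiny example: a myopic agent sees only immediate shaped rewards, and a learned bonus (a deep network in the dissertation, a two-element vector here) is tuned by REINFORCE so that maximizing the shaped reward also maximizes the true, delayed return. The environment, learning rate, and iteration count below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: action 0 pays 1.0 immediately; action 1 pays 5.0 only later.
immediate = np.array([1.0, 0.0])
delayed = np.array([0.0, 5.0])

bonus = np.zeros(2)  # learned reward bonus (a deep network in the dissertation)
lr = 0.1

def policy(b):
    # Myopic softmax agent: acts on shaped reward (immediate + bonus) only.
    z = immediate + b
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):
    p = policy(bonus)
    a = rng.choice(2, p=p)
    ret = immediate[a] + delayed[a]   # true return, including the delayed payoff
    grad_logp = -p
    grad_logp[a] += 1.0               # gradient of log pi(a) for a softmax policy
    bonus += lr * ret * grad_logp     # REINFORCE update on the bonus parameters
```

After training, the shaped reward ranks the delayed-payoff action first, so even a short-sighted planner is steered toward the better long-term outcome; in the multi-task setting, such a learned bonus is what gets transferred to new tasks.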
Sponsor(s): Professors Satinder Singh Baveja and Richard L. Lewis
Open to: Public