Adaptive Step-Size for Policy Gradient Methods

In the last decade, policy gradient methods have grown significantly in popularity in the reinforcement-learning field. In particular, they have been widely employed in motor control and robotic applications, thanks to their ability to cope with continuous state and action domains and partially observable problems. Policy gradient research has mainly focused on identifying effective gradient directions and proposing efficient estimation algorithms. Nonetheless, the performance of policy gradient methods is determined not only by the gradient direction: convergence properties are strongly influenced by the choice of the step size, where small values imply a slow convergence rate, while large values may lead to oscillations or even divergence of the policy parameters. The step-size value is usually chosen by hand tuning, and little attention has been paid to its automatic selection. In this paper, we propose to determine the learning rate by maximizing a lower bound to the expected performance gain. Focusing on Gaussian policies, we derive a lower bound that is a second-order polynomial of the step size, and we show how a simplified version of this lower bound can be maximized when the gradient is estimated from trajectory samples. The properties of the proposed approach are empirically evaluated on a linear-quadratic regulator problem.
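As a rough, hypothetical illustration of the idea (the paper's actual bound and constants are not reproduced here): suppose the guaranteed performance gain for a gradient step of size alpha admits a concave quadratic lower bound B(alpha) = b1*alpha - b2*alpha^2 with b1, b2 > 0. Such a bound is maximized in closed form at alpha* = b1 / (2*b2). The Python sketch below, with made-up coefficients (b1 taken as the squared gradient norm, b2 as a caller-supplied constant), shows how that step size would drive a vanilla policy-gradient update.

import numpy as np

def optimal_step_size(b1, b2):
    # Maximizer of the concave quadratic B(a) = b1*a - b2*a**2:
    # dB/da = b1 - 2*b2*a = 0  =>  a* = b1 / (2*b2),
    # with guaranteed gain B(a*) = b1**2 / (4*b2).
    assert b2 > 0.0, "quadratic term must be strictly concave"
    return b1 / (2.0 * b2)

def policy_gradient_step(theta, grad_estimate, bound_constant):
    # Hypothetical simplification: b1 = ||grad||^2, while b2 is a
    # problem-dependent constant supplied by the caller (in the paper
    # it is derived from properties of the Gaussian policy and the
    # reward; that derivation is not reproduced here).
    b1 = float(np.dot(grad_estimate, grad_estimate))
    alpha = optimal_step_size(b1, bound_constant)
    return theta + alpha * grad_estimate, alpha

# Toy usage: a 2-parameter policy with a made-up gradient estimate.
theta = np.zeros(2)
grad = np.array([0.5, -0.2])
theta, alpha = policy_gradient_step(theta, grad, bound_constant=4.0)
print(alpha, theta)  # alpha = ||grad||^2 / (2*b2) = 0.29 / 8

The closed-form maximizer is what makes such an approach practical: the step size adapts at each iteration to the current gradient estimate instead of being hand-tuned once for the whole run.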


Sunday December 8, 2013 2:00pm - 6:00pm PST
Harrah's Special Events Center, 2nd Floor
  Posters
  • Poster # Sun56