NIPS 2013 has ended
Symbolic Opportunistic Policy Iteration for Factored-Action MDPs


We address the scalability of symbolic planning under uncertainty with factored states and actions. Prior work has focused almost exclusively on factored states rather than factored actions, and on value iteration (VI) rather than policy iteration (PI). Our first contribution is a novel method for symbolic policy backups via the application of constraints, which is used to yield a new, efficient symbolic implementation of modified PI (MPI) for factored action spaces. While this approach improves scalability in some cases, naive handling of policy constraints introduces its own scalability issues. This leads to our second and main contribution, symbolic Opportunistic Policy Iteration (OPI), a novel convergent algorithm lying between VI and MPI. The core idea is a symbolic procedure that applies policy constraints only when they reduce the space and time complexity of the update, and otherwise performs full Bellman backups, thus automatically adjusting the backup per state. We also give a memory-bounded version of this algorithm, allowing a space-time tradeoff. Empirical results show significantly improved scalability over the state of the art.
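The opportunistic idea can be illustrated with a toy tabular sketch. This is not the paper's method as published: the actual OPI algorithm operates on symbolic (decision-diagram) representations, where constraining the backup to the policy's action can genuinely shrink the update. Here the hypothetical per-state `use_policy` flag simply stands in for OPI's test of whether the policy constraint pays off; all names are illustrative.

```python
import numpy as np

def opportunistic_backup(V, policy, P, R, gamma, use_policy):
    """One sweep of a toy tabular analogue of an opportunistic backup.

    V         : length-|S| value vector
    policy    : length-|S| array of action indices
    P         : list of |S|x|S| transition matrices, one per action
    R         : list of length-|S| reward vectors, one per action
    use_policy: per-state flag standing in for OPI's complexity test --
                True means apply the policy-constrained (MPI-style) backup,
                False means do a full Bellman (VI-style) backup.
    """
    n_states, n_actions = len(V), len(P)
    V_new = np.empty_like(V)
    for s in range(n_states):
        if use_policy[s]:
            a = policy[s]  # cheap policy-constrained backup
            V_new[s] = R[a][s] + gamma * P[a][s] @ V
        else:
            # full Bellman backup: maximize over all actions
            V_new[s] = max(R[a][s] + gamma * P[a][s] @ V
                           for a in range(n_actions))
    return V_new
```

Setting every flag to False recovers a value-iteration sweep, and setting every flag to True recovers a modified-PI policy-evaluation sweep; OPI's contribution is choosing between the two per state, automatically and symbolically.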


Sunday December 8, 2013 2:00pm - 6:00pm PST
Harrah's Special Events Center, 2nd Floor
  Posters
  • Poster# Sun05