Dynamically Constructed (PO)MDPs for Adaptive Robot Planning (2016)
To operate in environments where humans and robots coexist, intelligent robots need to simultaneously reason with commonsense knowledge and plan under uncertainty. Markov decision processes (MDPs) and partially observable MDPs (POMDPs) are good at planning under uncertainty toward maximizing long-term rewards; P-log, a declarative programming language under answer set semantics, is strong in commonsense reasoning. In this paper, we present a novel algorithm called DCPARP to dynamically represent, reason about, and construct (PO)MDPs using P-log. DCPARP shields exogenous domain attributes from the (PO)MDPs to limit computational complexity, while still enabling the (PO)MDPs to adapt to changes in the values of these attributes. We conduct a large number of experimental trials using two example problems in simulation and demonstrate DCPARP on a real robot. Results show significant improvements over competitive baselines.
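The core idea, as described above, is to keep exogenous attributes out of the planner's state space while letting their current values (obtained from a declarative reasoner) parameterize the (PO)MDP's transitions and rewards. The sketch below illustrates this in a minimal way: all names (`reason_exogenous`, `build_mdp`) are hypothetical stand-ins, not the paper's DCPARP implementation or the P-log API, and the toy grid domain is invented for illustration.

```python
import itertools

def reason_exogenous():
    """Stand-in for declarative (P-log-style) commonsense reasoning:
    returns current values of exogenous attributes."""
    return {"floor_wet": True}  # hypothetical attribute, for illustration

def build_mdp(exo):
    """Construct a small 3x3 grid MDP. The exogenous attribute sets the
    slip probability but is NOT part of the planner's state space, so the
    state space stays small while the resulting policy adapts."""
    slip = 0.3 if exo["floor_wet"] else 0.05
    states = list(itertools.product(range(3), range(3)))
    actions = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    goal = (2, 2)

    def transitions(s, a):
        if s == goal:
            return [(s, 1.0)]  # absorbing goal state
        dx, dy = actions[a]
        nxt = (min(2, max(0, s[0] + dx)), min(2, max(0, s[1] + dy)))
        return [(nxt, 1.0 - slip), (s, slip)]  # slipping keeps the robot in place

    def reward(s):
        return 10.0 if s == goal else -1.0  # step cost encourages short paths

    return states, actions, transitions, reward, goal

def value_iteration(states, actions, transitions, reward, gamma=0.95, eps=1e-6):
    """Standard value iteration over the constructed MDP."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = max(sum(p * (reward(s2) + gamma * V[s2])
                        for s2, p in transitions(s, a)) for a in actions)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < eps:
            return V

# Reason about exogenous attributes, then construct and solve the (PO)MDP.
exo = reason_exogenous()
states, actions, transitions, reward, goal = build_mdp(exo)
V = value_iteration(states, actions, transitions, reward)
```

Because the exogenous attribute only reshapes the transition function, the planner solves the same nine-state problem whether the floor is wet or dry; when the reasoner reports a changed value, the (PO)MDP is simply reconstructed with new parameters.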
In IJCAI'16 Workshop on Autonomous Mobile Service Robots, New York City, USA, July 2016.

Piyush Khandelwal Ph.D. Alumni piyushk [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu
Shiqi Zhang Postdoctoral Alumni szhang [at] cs utexas edu