Christian R. Shelton, Professor

Policy Improvement for POMDPs Using Normalized Importance Sampling (2001)

by Christian R. Shelton


Abstract: We present a new method for estimating the expected return of a POMDP from experience. The estimator does not assume any knowledge of the POMDP, can estimate the returns for finite state controllers, allows experience to be gathered from arbitrary sequences of policies, and estimates the return for any new policy. We motivate the estimator from function-approximation and importance sampling points of view and derive its bias and variance. Although the estimator is biased, it has low variance, and the bias is often irrelevant when the estimator is used for pairwise comparisons. We conclude by extending the estimator to policies with memory and compare its performance in a greedy search algorithm to the REINFORCE algorithm, showing an order of magnitude reduction in the number of trials required.
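The core idea behind the estimator can be illustrated with a generic sketch of normalized (weighted) importance sampling: reweight each observed return by the ratio of the new policy's probability of the trajectory's actions to the behavior policy's, then normalize by the sum of the weights rather than the trajectory count. This is a simplified illustration for memoryless policies, not the paper's full estimator (which also covers finite state controllers and experience from arbitrary policy sequences); the data layout and names below are assumptions for the sketch.

```python
def normalized_is_estimate(trajectories, target_policy):
    """Normalized importance sampling estimate of a policy's expected return.

    trajectories: list of (steps, ret) pairs, where `steps` is a list of
        (obs, action, behavior_prob) tuples recording each observation, the
        action taken, and the probability the behavior policy assigned to
        that action, and `ret` is the return observed on that trajectory.
    target_policy: dict mapping (obs, action) -> probability under the
        policy being evaluated.  (Illustrative representation, not the
        paper's finite-state-controller parameterization.)
    """
    num, den = 0.0, 0.0
    for steps, ret in trajectories:
        # Importance weight: product over steps of target/behavior action
        # probabilities.  Transition and observation probabilities cancel,
        # so no model of the POMDP is needed.
        w = 1.0
        for obs, action, b_prob in steps:
            w *= target_policy[(obs, action)] / b_prob
        num += w * ret
        den += w
    # Normalizing by the weight sum (instead of len(trajectories)) biases
    # the estimate but typically lowers its variance.
    return num / den if den > 0 else 0.0
```

For example, with one-step trajectories gathered under a uniform behavior policy (each action probability 0.5), where action 0 yielded return 1 and action 1 yielded return 0, evaluating a policy that picks action 0 with probability 0.9 gives weights 1.8 and 0.2 and an estimate of 1.8 / 2.0 = 0.9, the target policy's true expected return.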

Download Information

Christian R. Shelton (2001). "Policy Improvement for POMDPs Using Normalized Importance Sampling." Proceedings of the Seventeenth International Conference on Uncertainty in Artificial Intelligence (pp. 496-503).

Bibtex citation

@inproceedings{She01,
   author = "Christian R. Shelton",
   title = "Policy Improvement for {POMDPs} Using Normalized Importance Sampling",
   year = 2001,
   booktitle = "Proceedings of the Seventeenth International Conference on Uncertainty in Artificial Intelligence",
   booktitleabbr = "{UAI}",
   pages = "496--503",
}

Address

University of California, Riverside
Chung Hall, room 327
Riverside, CA 92521
Tel: (951) 827-2554
E-mail: cshelton@cs.ucr.edu
