Computer Science and Engineering

Christian R. Shelton, Professor

Learning from Scarce Experience (2002)

by Leonid Peshkin and Christian R. Shelton

Abstract: Searching the space of policies directly for the optimal policy has been one popular method for solving partially observable reinforcement learning problems. Typically, with each change of the target policy, its value is estimated from the results of following that very policy. This requires a large number of interactions with the environment as different policies are considered. We present a family of algorithms based on likelihood ratio estimation that use data gathered when executing one policy (or collection of policies) to estimate the value of a different policy. The algorithms combine estimation and optimization stages. The former utilizes experience to build a non-parametric representation of the function being optimized. The latter performs optimization on this estimate. We show positive empirical results and provide a sample complexity bound.
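The core idea of likelihood ratio (importance sampling) estimation can be illustrated with a minimal sketch: each trajectory collected under a behavior policy is reweighted by the ratio of the probability the target policy would assign to its actions over the probability the behavior policy assigned to them. The policies, state space, and reward structure below are toy assumptions for illustration, not the paper's experimental setup.

```python
import random

def trajectory_weight(traj, target, behavior):
    """Likelihood ratio: product over steps of target(a|s) / behavior(a|s).

    traj is a list of (state, action) pairs; target and behavior map
    state -> {action: probability}.
    """
    w = 1.0
    for s, a in traj:
        w *= target[s][a] / behavior[s][a]
    return w

def is_value_estimate(episodes, target, behavior):
    """Importance-sampling estimate of the target policy's value from
    episodes gathered under the behavior policy.

    episodes is a list of (trajectory, return) pairs.
    """
    total = sum(trajectory_weight(traj, target, behavior) * ret
                for traj, ret in episodes)
    return total / len(episodes)

# Toy one-state, one-step problem: action 0 yields reward 1, action 1 yields 0.
behavior = {'s': {0: 0.5, 1: 0.5}}   # data-gathering policy (uniform)
target = {'s': {0: 0.9, 1: 0.1}}     # policy whose value we want; true value 0.9

random.seed(0)
episodes = []
for _ in range(20000):
    a = 0 if random.random() < behavior['s'][0] else 1
    reward = 1.0 if a == 0 else 0.0
    episodes.append(([('s', a)], reward))

estimate = is_value_estimate(episodes, target, behavior)
```

Note that the same batch of episodes can be reused to estimate the value of many candidate policies, which is what lets the optimization stage proceed without fresh environment interaction; the price is that the likelihood ratios can have high variance when the policies differ greatly, which is what the paper's sample complexity bound quantifies.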

Download Information

Leonid Peshkin and Christian R. Shelton (2002). "Learning from Scarce Experience." Proceedings of the Nineteenth International Conference on Machine Learning (pp. 498-505).

BibTeX citation

@inproceedings{Peshkin-Shelton-2002,
   author = "Leonid Peshkin and Christian R. Shelton",
   title = "Learning from Scarce Experience",
   booktitle = "Proceedings of the Nineteenth International Conference on Machine Learning",
   booktitleabbr = "{ICML}",
   year = 2002,
   pages = "498--505",
}

More Information


University of California, Riverside
Chung Hall, room 327
Riverside, CA 92521
Tel: (951) 827-2554
E-mail: cshelton@cs.ucr.edu
