
Self-Optimizing and Pareto-Optimal Policies in General Environments based on Bayes-Mixtures


Author: Marcus Hutter (2002)
Comments: 15 LaTeX pages
Subj-class: Artificial Intelligence; Learning

ACM-class: I.2; I.2.6; I.2.8; F.1.3; F.2
Reference: Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002) ??-??, Springer
Report-no: IDSIA-04-02 and cs.AI/0204040
Paper: LaTeX  -  PostScript  -  PDF  -  Html/Gif 
Slides: PostScript - PDF

Keywords: Rational agents, sequential decision theory, reinforcement learning, value function, Bayes mixtures, self-optimizing policies, Pareto-optimality, unbounded effective horizon, (non) Markov decision processes.

Abstract: The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle t, action y_t results in perception x_t and reward r_t, where all quantities in general may depend on the complete history. The perception x_t and reward r_t are sampled from the (reactive) environmental probability distribution μ. This very general setting includes, but is not limited to, (partially observable, k-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if μ is known. Reinforcement learning is usually used if μ is unknown. In the Bayesian approach one defines a mixture distribution ξ as a weighted sum of distributions ν∈M, where M is any class of distributions including the true environment μ. We show that the Bayes-optimal policy p^ξ based on the mixture ξ is self-optimizing in the sense that the average value converges asymptotically for all μ∈M to the optimal value achieved by the (infeasible) Bayes-optimal policy p^μ which knows μ in advance. We show that the necessary condition that M admits self-optimizing policies at all is also sufficient. No other structural assumptions are made on M. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies. Furthermore, we show that p^ξ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in all environments ν∈M and a strictly higher value in at least one.
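
As a rough guide to the notation in the abstract, the following minimal LaTeX sketch spells out the central objects. The weight normalization and the horizon/averaging convention shown here are simplifying assumptions made only for illustration; the precise definitions of the value and of the convergence statement are those of the paper.

    % Bayes mixture over the class M, with prior weights w_nu > 0
    % (illustrative assumption: weights sum to one):
    \xi(x_{1:m} \mid y_{1:m}) \;=\; \sum_{\nu\in\mathcal{M}} w_\nu \, \nu(x_{1:m} \mid y_{1:m})

    % Value of policy p in environment nu up to horizon m:
    % the nu-expected reward sum when actions are generated by p.
    V^{p}_{\nu m} \;=\; \mathbf{E}^{p}_{\nu}\bigl[\, r_1 + r_2 + \cdots + r_m \,\bigr]

    % Self-optimization: the xi-optimal policy p^xi asymptotically attains
    % the optimal average value in every environment mu in M.
    \tfrac{1}{m}\bigl( V^{p^\mu}_{\mu m} - V^{p^\xi}_{\mu m} \bigr) \;\to\; 0
    \qquad (m \to \infty), \quad \text{for all } \mu \in \mathcal{M}

    % Pareto-optimality of p^xi: there is no policy p with
    % V^{p}_{\nu m} >= V^{p^\xi}_{\nu m} for all nu in M and
    % strict inequality for at least one nu.

Read this way, self-optimization is an asymptotic statement about the average value in the (unknown) true environment μ, while Pareto-optimality compares policies simultaneously across the whole class M.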



BibTeX Entry

@InProceedings{Hutter:02selfopt,
  author =       "Marcus Hutter",
  title =        "Self-Optimizing and {P}areto-Optimal Policies in
                  General Environments based on {B}ayes-Mixtures",
  series =       "Lecture Notes in Artificial Intelligence",
  volume =       "",
  year =         "2002",
  pages =        "",
  booktitle =    "Proceedings of the 15th Annual Conference on Computational
                 Learning Theory (COLT 2002)",
  publisher =    "Springer",
  url =          "http://www.hutter1.net/ai/selfopt.htm",
  url2 =         "http://arxiv.org/abs/cs.AI/0204040",
  ftp =          "ftp://ftp.idsia.ch/pub/techrep/IDSIA-04-02.ps.gz",
  keywords =     "Rational agents, sequential decision theory,
                  reinforcement learning, value function, Bayes mixtures,
                  self-optimizing policies, Pareto-optimality,
                  unbounded effective horizon, (non) Markov decision
                  processes.",
  abstract =     "The problem of making sequential decisions in unknown
                  probabilistic environments is studied. In cycle $t$ action $y_t$
                  results in perception $x_t$ and reward $r_t$, where all quantities
                  in general may depend on the complete history. The perception
                  $x_t$ and reward $r_t$ are sampled from the (reactive)
                  environmental probability distribution $\mu$. This very general
                  setting includes, but is not limited to, (partially observable, $k$-th
                  order) Markov decision processes. Sequential decision theory tells
                  us how to act in order to maximize the total expected reward,
                  called value, if $\mu$ is known. Reinforcement learning is usually
                  used if $\mu$ is unknown. In the Bayesian approach one defines a
                  mixture distribution $\xi$ as a weighted sum of distributions
                  $\nu\in\M$, where $\M$ is any class of distributions including the
                  true environment $\mu$. We show that the Bayes-optimal policy
                  $p^\xi$ based on the mixture $\xi$ is self-optimizing in the sense
                  that the average value converges asymptotically for all $\mu\in\M$
                  to the optimal value achieved by the (infeasible) Bayes-optimal
                  policy $p^\mu$ which knows $\mu$ in advance. We show that the
                  necessary condition that $\M$ admits self-optimizing policies at
                  all is also sufficient. No other structural assumptions are made
                  on $\M$. As an example application, we discuss ergodic Markov
                  decision processes, which allow for self-optimizing policies.
                  Furthermore, we show that $p^\xi$ is Pareto-optimal in the sense
                  that there is no other policy yielding higher or equal value in
                  {\em all} environments $\nu\in\M$ and a strictly higher value in
                  at least one.",
}
      