NIPS 2002 Workshop
Universal Learning Algorithms and Optimal Search
Whistler/Blackcomb Resort, B.C., Canada, Saturday, December 14, 2002
Organizers: Jürgen Schmidhuber & Marcus Hutter, IDSIA, Switzerland
"The theory of probability ... can and should be developed from axioms ..." (Andrei Kolmogorov)
"Entities should not be multiplied unnecessarily." (William of Occam)
Recent theoretical and practical advances are driving a
renaissance in the fields of Universal Learners (rooted in Solomonoff's
universal induction scheme, 1964) and Optimal Search (rooted in Levin's
universal search algorithm, 1973). Both are closely related to the theory
of Kolmogorov complexity. The new millennium has brought several significant
developments including:
- Sharp expected loss bounds for universal predictors
(no i.i.d. assumptions necessary)
- Theoretically optimal reinforcement learners
for general computable environments
- Computable optimal predictions based on natural priors
that take algorithm runtime into account
- Practical, bias-optimal, incremental, universal search algorithms
that extend previous non-incremental ones.
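To make the search side concrete: Levin's universal search runs all candidate programs, giving a program p of length l(p) a share of runtime proportional to 2^(-l(p)). A minimal toy sketch of the phase-based scheduling in Python follows; the bitstring "programs" and the step-counting interpreter are invented purely for illustration, not taken from any of the systems above.

```python
def levin_search(run, is_solution, max_phase=20):
    """Toy Levin search: in phase i, every bitstring program p with
    l(p) <= i gets a time budget of 2**(i - l(p)) steps, so each
    extra bit of program length halves the time allotted to it."""
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            budget = 2 ** (phase - length)
            for bits in range(2 ** length):
                prog = format(bits, "0{}b".format(length))
                result = run(prog, budget)   # None means budget exceeded
                if result is not None and is_solution(result):
                    return prog
    return None

# Invented toy interpreter: program p denotes the integer n = int(p, 2),
# and "computing" n is assumed to cost n steps.
def run(prog, budget):
    n = int(prog, 2)
    return n if n <= budget else None

found = levin_search(run, lambda n: n == 5)  # -> '101'
```

With these toy ingredients the search returns '101', the shortest bitstring denoting 5, in the first phase whose budget grants length-3 programs at least 5 steps; shorter candidates are always tried first with larger budgets.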
Kolmogorov's 100th birthday is a welcome occasion to discuss recent
advances in this workshop (which follows the very successful NIPS 2001
Occam and MDL workshops). Topics will also include:
- Practical but general MML/MDL/SRM approaches
with theoretical foundation
- Weighted majority approaches (with worst case loss bounds,
as opposed to expected loss bounds for Bayesmix approaches)
- No free lunch theorems for the special case where
the data is sampled from a uniform distribution.
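The weighted majority idea can be illustrated with the classic deterministic algorithm of Littlestone and Warmuth: predict the weighted vote of the experts and shrink the weight of every expert that errs, which gives a worst-case mistake bound of order log N plus a multiple of the best expert's mistakes, with no distributional assumptions. A minimal Python sketch; the expert prediction table and the factor beta = 1/2 are illustrative choices.

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Deterministic weighted majority for binary prediction:
    predict the weighted vote, then multiply the weight of every
    erring expert by beta (0 < beta < 1)."""
    weights = [1.0] * len(expert_preds[0])
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        vote1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote1 >= vote0 else 0
        if guess != y:
            mistakes += 1
        weights = [w * beta if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes, weights

# Round-by-round predictions of three experts; expert 0 is always right.
expert_preds = [[1, 0, 1], [0, 1, 0], [1, 0, 0], [1, 0, 1]]
outcomes = [1, 0, 1, 1]
mistakes, weights = weighted_majority(expert_preds, outcomes)
```

On this toy run the algorithm makes one mistake (when the two erring experts briefly outweigh the reliable one), and the reliable expert's weight is the only one left untouched; this worst-case guarantee is what distinguishes these methods from the expected-loss bounds of Bayesmix approaches.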
The optimists claim that the future will belong to universal
or near-universal learners. They point out that the latter are much
more general than traditional reinforcement learners and decision makers,
which depend on strong Markovian assumptions, or than learners based on
traditional statistical learning theory, which often require unrealistic
i.i.d. or Gaussian assumptions. They also claim that, thanks to hardware
advances, the time is now ripe for optimal search in algorithm space,
as opposed to the limited space of reactive mappings embodied by
traditional methods such as SVMs and feedforward networks.
The pessimists claim otherwise.
We expect a lively discussion between optimists and pessimists.
Researchers interested in the foundations of
machine learning, including reinforcement learning, inductive inference,
time-series forecasting, and optimization and search in complex domains,
should find this workshop relevant.
Speakers will include Ray Solomonoff,
Paul Vitanyi, Marcus Hutter, Juergen Schmidhuber, and other experts on
Kolmogorov complexity and Solomonoff induction, as well as Manfred
Warmuth and Nicolo Cesa-Bianchi as representatives of weighted majority
approaches. We also expect contrarian speakers from the no free lunch camp.
The workshop will provide a forum for discussing results and
problems in any of the above mentioned areas.
We hope to examine the future directions and new
perspectives that will keep the field lively and growing.
References
- P. Vitanyi and M. Li. Simplicity, Information, Kolmogorov Complexity, and Prediction. In: Simplicity, Inference and Modelling (A. Zellner, H. A. Keuzenkamp, and M. McAleer, eds.), Cambridge University Press, Cambridge, UK, pp. 135-155, 2001.
- D. Wolpert and W. Macready. No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, Vol. 1, No. 1, pp. 67-82, 1997.
- R. Solomonoff. A Formal Theory of Inductive Inference, Part I. Information and Control, Vol. 7, No. 1, pp. 1-22, March 1964.
- N. Cesa-Bianchi, Y. Freund, D. P. Helmbold, D. Haussler, R. E. Schapire, and M. K. Warmuth. How to Use Expert Advice. Journal of the ACM, Vol. 44, No. 3, pp. 427-485, May 1997.
- N. Merhav and M. Feder. Universal Prediction. IEEE Transactions on Information Theory, Vol. 44, No. 6, pp. 2124-2147, 1998.
- J. Schmidhuber. Optimal Ordered Problem Solver. Technical Report IDSIA-12-02, 31 July 2002.
- M. Hutter. Towards a Universal Theory of Artificial Intelligence based on Algorithmic Probability and Sequential Decisions. Proceedings of the 12th European Conference on Machine Learning (ECML-2001), pp. 226-238, 2001.
Invited Speakers
Paul Vitanyi,
Centrum voor Wiskunde en Informatica, Amsterdam, The Netherlands
Nicolo Cesa-Bianchi,
Dipartimento di Tecnologie dell'Informazione, Universita degli Studi di Milano, Crema, Italy
Ray Solomonoff,
Oxbridge Research, Cambridge, MA, USA
Ilya Nemenman,
Kavli Institute for Theoretical Physics, University of California, Santa Barbara, USA
Session 1, 07:30 - 10:30
07:30 - 07:45  Juergen Schmidhuber  Welcome
07:45 - 08:15  Ray Solomonoff  A General System For Incremental Machine Learning
08:15 - 08:45  Coffee break
08:45 - 09:15  Nicolo Cesa-Bianchi  Universal Prediction, Game Theory, and Pattern Recognition
09:15 - 09:45  Juergen Schmidhuber  Optimal Ordered Problem Solver
09:45 - 10:30  Panel Discussion (Solomonoff, Schmidhuber, Cesa-Bianchi)
Session 2, 16:15 - 19:00
16:15 - 16:45  Ilya Nemenman  Universal Learning: A View of a Bayesian
16:45 - 17:15  Paul Vitanyi  The Similarity Metric
17:15 - 17:45  Coffee break
17:45 - 18:15  Marcus Hutter  Universal Sequential Decisions based on Algorithmic Probability
18:15 - 19:00  Panel Discussion (Nemenman, Vitanyi, Hutter)
NIPS 2002 Conference
Occam Workshop (NIPS 2001)
Algorithmic Information Theory Resource Page
Job openings at IDSIA on related topics
"... Algorithmic Probability can serve as a kind of 'Gold Standard' for induction systems ..." (Ray Solomonoff)
"Only math nerds would call 2^500 finite." (Leonid Levin)
© 2002 by Marcus Hutter and
Juergen Schmidhuber