Bayesian Reinforcement Learning via Deep, Sparse Sampling

Author(s)
Divya Grover
Debabrota Basu
Christos Dimitrakakis
Institut d'informatique
Publication date
2020
In
AISTATS
Vol.
2020
Keywords
  • Machine Learning (cs.LG)
  • Artificial Intelligence (cs.AI)
  • Machine Learning (stat.ML)

Abstract
We address the problem of Bayesian reinforcement learning using efficient model-based online planning. We propose an optimism-free Bayes-adaptive algorithm that induces deeper and sparser exploration, with a theoretical bound on its performance relative to the Bayes-optimal policy at lower computational complexity. The main novelty is the use of a candidate policy generator to generate long-term options in the planning tree (over beliefs), which allows us to build much sparser and deeper trees. Experimental results on different environments show that, compared to the state of the art, our algorithm is both computationally more efficient and obtains significantly higher reward in discrete environments.
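The core idea (evaluating a few long-horizon candidate policies from a posterior belief instead of branching on every primitive action at every depth) can be illustrated with a toy sketch. This is not the paper's algorithm: the two-armed Bernoulli bandit, the Beta beliefs, the "always pull arm i" candidate policies, and the helper names `sparse_plan` and `rollout_value` are all illustrative assumptions.

```python
import random

# Toy sketch: option-based sparse planning over a belief state.
# Belief = one (alpha, beta) Beta posterior per arm of a Bernoulli bandit.
# Each candidate policy is the long-term option "always pull arm i";
# evaluating a handful of such options over a multi-step horizon keeps
# the planning tree sparse (few branches) and deep (long horizon).

def rollout_value(belief, arm, horizon, rng):
    # Simulate following the candidate policy "always pull `arm`" for
    # `horizon` steps, resampling the arm's success probability from the
    # current (local copy of the) posterior and updating it as we observe
    # simulated rewards. Returns the accumulated reward of one rollout.
    a, b = belief[arm]  # local copy; the caller's belief is not mutated
    total = 0.0
    for _ in range(horizon):
        p = rng.betavariate(a, b)          # sample a plausible success rate
        reward = 1.0 if rng.random() < p else 0.0
        total += reward
        if reward:                          # Bayesian update of the copy
            a += 1
        else:
            b += 1
    return total

def sparse_plan(belief, horizon=10, n_samples=50, rng=None):
    # Estimate each candidate policy's posterior-expected return by Monte
    # Carlo, and return the index of the best one.
    rng = rng or random.Random(0)
    values = []
    for arm in range(len(belief)):
        est = sum(rollout_value(belief, arm, horizon, rng)
                  for _ in range(n_samples)) / n_samples
        values.append(est)
    return max(range(len(belief)), key=lambda i: values[i])

# A belief that strongly favours arm 1 should make the planner pick arm 1.
belief = [(1, 5), (5, 1)]   # Beta params: arm 0 looks poor, arm 1 looks good
best = sparse_plan(belief)
```

The design point the sketch tries to convey is the trade-off named in the abstract: branching over a small set of long-term options instead of all primitive actions reduces the tree's width, so the same computational budget buys greater depth.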
Identifiers
https://libra.unine.ch/handle/123456789/30953
1902.02661v4
Publication type
journal article
File(s) to download
main article: 1902.02661.pdf (653.42 KB)