Conference paper, Year: 2024

The Value of Reward Lookahead in Reinforcement Learning

Abstract

In reinforcement learning (RL), agents sequentially interact with changing environments while aiming to maximize the obtained rewards. Usually, rewards are observed only after acting, and so the goal is to maximize the expected cumulative reward. Yet, in many practical settings, reward information is observed in advance -- prices are observed before performing transactions; nearby traffic information is partially known; and goals are oftentimes given to agents prior to the interaction. In this work, we aim to quantifiably analyze the value of such future reward information through the lens of competitive analysis. In particular, we measure the ratio between the value of standard RL agents and that of agents with partial future-reward lookahead. We characterize the worst-case reward distribution and derive exact ratios for the worst-case reward expectations. Surprisingly, the resulting ratios relate to known quantities in offline RL and reward-free exploration. We further provide tight bounds for the ratio given the worst-case dynamics. Our results cover the full spectrum between observing the immediate rewards before acting to observing all the rewards before the interaction starts.
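
To make the central quantity of the abstract concrete, one plausible formalization (the notation below is illustrative and assumed here, not taken from the paper) is the competitive ratio between the best expected return achievable without lookahead and the best expected return achievable with a lookahead of L steps of reward information:

\[
\rho(L) \;=\; \inf_{\text{reward distributions}} \;
\frac{\max_{\pi \in \Pi_{0}} \; \mathbb{E}\big[\sum_{t=1}^{H} r_t^{\pi}\big]}
     {\max_{\pi \in \Pi_{L}} \; \mathbb{E}\big[\sum_{t=1}^{H} r_t^{\pi}\big]},
\]

where \(\Pi_{L}\) denotes policies that observe the rewards of the next \(L\) steps before acting, so \(\Pi_{0}\) corresponds to standard RL agents and \(\Pi_{H}\) to agents that see all rewards before the interaction starts, matching the spectrum described in the abstract.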
Main file
Competitive_ratio_for_MDPs.pdf (613.53 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04893988, version 1 (17-01-2025)

Licence

Identifiers

Cite

Nadav Merlis, Dorian Baudry, Vianney Perchet. The Value of Reward Lookahead in Reinforcement Learning. NeurIPS 2024 - 38th Conference on Neural Information Processing Systems, Dec 2024, Vancouver, Canada. ⟨hal-04893988⟩