Similarities between policy gradient methods (PGM) in reinforcement learning (RL) and supervised learning (SL)
Benhamou, Éric (2019), Similarities between policy gradient methods (PGM) in reinforcement learning (RL) and supervised learning (SL). https://basepub.dauphine.fr/handle/123456789/21202
Type: Document de travail / Working paper
External document link: https://hal.archives-ouvertes.fr/hal-02886505
Series title: Preprint Lamsade
Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
Abstract (EN): Reinforcement learning (RL) is about sequential decision making and is traditionally opposed to supervised learning (SL) and unsupervised learning (USL). In RL, given the current state, the agent makes a decision that may influence the next state, as opposed to SL (and USL), where the next state remains the same regardless of the decisions taken, whether in batch or online learning. Although this difference between SL and RL is fundamental, there are connections that have been overlooked. In particular, we prove in this paper that policy gradient methods can be cast as a supervised learning problem where true labels are replaced with discounted rewards. We provide a new proof of policy gradient methods (PGM) that emphasizes the tight link with cross entropy and supervised learning. We provide a simple experiment where we interchange labels and pseudo rewards. We conclude that other relationships with SL could be made if we modify the reward functions wisely.
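To make the abstract's central claim concrete, the following is a minimal sketch (not the paper's code) for a linear softmax policy: the REINFORCE policy-gradient estimate coincides with the gradient of a cross-entropy loss in which the sampled action plays the role of the true label and the discounted return G acts as the label weight. All names here (theta, x, G) are illustrative assumptions, not symbols taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_actions, n_features = 4, 3
theta = rng.normal(size=(n_features, n_actions))  # linear softmax policy parameters
x = rng.normal(size=n_features)                   # state features (illustrative)
G = 2.5                                           # discounted return (assumed value)

def softmax(z):
    z = z - z.max()                               # numerical stability
    e = np.exp(z)
    return e / e.sum()

pi = softmax(x @ theta)                           # action probabilities pi(a | x; theta)
a = rng.choice(n_actions, p=pi)                   # sampled action
one_hot = np.eye(n_actions)[a]                    # sampled action as a one-hot "label"

# Policy-gradient (REINFORCE) estimate: G * d/dtheta log pi(a | x).
# For a linear softmax model, d/dtheta log pi(a | x) = x (1_a - pi)^T.
pg_grad = G * np.outer(x, one_hot - pi)

# Supervised view: gradient of the return-weighted cross-entropy loss
# L(theta) = -G * log pi(a | x), treating the sampled action as the true label.
ce_grad = -G * np.outer(x, one_hot - pi)

# Ascent on the policy-gradient objective is descent on the weighted cross entropy.
assert np.allclose(pg_grad, -ce_grad)
print("policy-gradient step equals a return-weighted cross-entropy step")

Running the sketch confirms numerically that the two gradients are equal up to sign, which is the sense in which PGM can be read as SL with discounted rewards in place of true labels.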
Subjects / Keywords: Policy gradient; Supervised learning; Cross entropy; Kullback-Leibler divergence; Entropy
Related items (by title and author):
Benhamou, Éric; Saltiel, David; Tabachnik, Serge; Wong, Sui Kai; Chareyron, François (2021), Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models. Document de travail / Working paper