
AAMDRL: Augmented Asset Management with Deep Reinforcement Learning

Benhamou, Éric; Saltiel, David; Ungari, Sandrine; Mukhopadhyay, Abhishek; Atif, Jamal (2020), AAMDRL: Augmented Asset Management with Deep Reinforcement Learning. https://basepub.dauphine.psl.eu/handle/123456789/22301

View/Open
AAMDRL_Augmented.pdf (401.2 KB)
Type
Document de travail / Working paper
Date
2020
Series title
Preprint Lamsade
Published in
Paris
Author(s)
Benhamou, Éric
Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
Saltiel, David
Laboratoire d'Informatique Signal et Image de la Côte d'Opale [LISIC]
Ungari, Sandrine
Mukhopadhyay, Abhishek
Atif, Jamal
Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
Abstract (EN)
Can an agent learn efficiently in a noisy and self-adapting environment with sequential, non-stationary and non-homogeneous observations? Through trading bots, we illustrate how Deep Reinforcement Learning (DRL) can tackle this challenge. Our contributions are threefold: (i) the use of contextual information, also referred to as an augmented state, in DRL; (ii) the impact of a one-period lag between observations and actions, which is more realistic for an asset management environment; (iii) the implementation of a new repetitive train-test method called walk-forward analysis, similar in spirit to cross-validation for time series. Although our experiment is on trading bots, it can easily be translated to other bot environments that operate in sequential settings with regime changes and noisy data. Our experiment for an augmented asset manager interested in finding the best portfolio for hedging strategies shows that AAMDRL achieves superior returns and lower risk.
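To make the three ideas in the abstract more concrete, here is a minimal sketch of a walk-forward split with an augmented state and a one-period lag. It is not the authors' implementation; the function and variable names (walk_forward_splits, train_window, test_window) and the toy data are assumptions for illustration only.

```python
# Minimal sketch of walk-forward analysis: rolling train/test blocks where
# every test block lies strictly after its training block, as opposed to
# shuffled cross-validation. Illustrative only; not the paper's code.
import numpy as np

def walk_forward_splits(n_obs, train_window, test_window):
    """Yield successive (train_idx, test_idx) index arrays that roll forward in time."""
    start = 0
    while start + train_window + test_window <= n_obs:
        train_idx = np.arange(start, start + train_window)
        test_idx = np.arange(start + train_window, start + train_window + test_window)
        yield train_idx, test_idx
        start += test_window  # slide the window forward by one test block

if __name__ == "__main__":
    # Toy daily returns for two assets plus one contextual feature.
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0, 0.01, size=(500, 2))
    context = rng.normal(size=(500, 1))        # e.g. a macro indicator
    state = np.hstack([returns, context])      # "augmented state": returns + context
    # One-period lag convention: an action chosen from state[t] would be
    # rewarded with returns[t + 1], never returns[t].
    for i, (train_idx, test_idx) in enumerate(walk_forward_splits(len(state), 250, 50)):
        print(f"fold {i}: train {train_idx[0]}-{train_idx[-1]}, test {test_idx[0]}-{test_idx[-1]}")
```

Because the splits only ever move forward, each fold mimics deploying the agent on genuinely unseen future data, which is the property the abstract contrasts with standard cross-validation.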
Subjects / Keywords
Deep Reinforcement Learning; Portfolio selection

Related items

Showing items related by title and author.

  • Time your hedge with Deep Reinforcement Learning
    Benhamou, Éric; Saltiel, David; Ungari, Sandrine; Mukhopadhyay, Abhishek (2020) Document de travail / Working paper
  • Bridging the gap between Markowitz planning and deep reinforcement learning
    Benhamou, Éric; Saltiel, David; Ungari, Sandrine; Mukhopadhyay, Abhishek (2020) Document de travail / Working paper
  • Deep Reinforcement Learning (DRL) for portfolio allocation
    Benhamou, Éric; Saltiel, David; Ohana, Jean-Jacques; Atif, Jamal; Laraki, Rida Communication / Conférence
  • Trade Selection with Supervised Learning and Optimal Coordinate Ascent (OCA)
    Saltiel, David; Benhamou, Eric; Laraki, Rida; Atif, Jamal (2021) Communication / Conférence
  • Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models
    Benhamou, Éric; Saltiel, David; Tabachnik, Serge; Wong, Sui Kai; Chareyron, François (2021) Document de travail / Working paper