Algorithmic Stock Trading using Deep Reinforcement Learning

This is a Bachelor's thesis from KTH, School of Electrical Engineering and Computer Science (EECS)

Authors: Alexander Brokking; Michael Wink (2021)


Abstract: Recent breakthroughs in Deep Learning and Reinforcement Learning have enabled the new field of Deep Reinforcement Learning. This study explores some state-of-the-art applications of deep reinforcement learning in finance and algorithmic trading. Building on previous research by Yang et al. at Columbia University, this study aims to validate their findings and to explore ways of improving their proposed trading model by using the Sharpe ratio in the reward function. We show that there is significant variability in the performance of their trading model, and we question their premise of basing results on the best-performing model iteration. Moreover, we explore how the Sharpe ratio, calculated over 21-day and 63-day rolling periods, can be used as a reward function. However, this did not result in any significant change in outcome, which could be attributed to the high performance variability of both the original algorithm and our modified algorithm, a variability that prevents consistent conclusions.
