Reinforcement Learning with Imitation for Cavity Filter Tuning: Solving problems by throwing DIRT at them

This is a Master's thesis from KTH / School of Electrical Engineering and Computer Science (EECS)

Author: Simon Lindståhl; [2019]

Abstract: Cavity filters are vital components of radio base stations and networks. After production, they need tuning, which has proven to be a difficult process to perform manually and even more so to automate. Previously, attempts have been made to automate this process with Reinforcement Learning, but they have failed to reach consistent performance on anything but the simplest filter models. This Master's thesis builds upon those results and aims to improve them. Multiple methods are tested and evaluated, including introducing a pre-processing step, tuning hyperparameters, and dividing the problem into multiple sub-tasks. In particular, by using Imitation Learning as an initial phase, a semi-realistic filter model with 13 tuning screws is tuned, fulfilling both insertion loss and return loss requirements. On this problem, the algorithm achieves greater efficiency than any previously published Reinforcement Learning results for cavity filter tuning.
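
The abstract describes a two-phase approach: an Imitation Learning "warm start" followed by Reinforcement Learning on the tuning task. The sketch below is a minimal, hypothetical Python illustration of that general idea only, not the thesis's actual code or algorithm: the network sizes, the toy_filter_step simulator, the demonstration data, and the simple REINFORCE-style update are all assumptions made to keep the example self-contained and runnable.

import numpy as np
import torch
import torch.nn as nn

N_SCREWS = 13             # the abstract's semi-realistic filter has 13 tuning screws
STATE_DIM = 2 * N_SCREWS  # assumed observation size (placeholder for S-parameter features)

class TuningPolicy(nn.Module):
    """Maps a filter-response observation to screw adjustments in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_SCREWS), nn.Tanh(),
        )
    def forward(self, obs):
        return self.net(obs)

def pretrain_with_imitation(policy, demo_obs, demo_actions, epochs=50):
    """Phase 1 (assumed): behaviour cloning on (observation, screw-adjustment)
    pairs collected from an existing tuner or demonstrator."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        loss = loss_fn(policy(demo_obs), demo_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

def toy_filter_step(obs, action):
    """Toy stand-in for a filter simulator: reward grows as the 'screw state'
    (first N_SCREWS entries of obs) approaches zero detuning."""
    next_obs = obs.clone()
    next_obs[:N_SCREWS] = next_obs[:N_SCREWS] + action.detach()
    reward = float(-torch.sum(next_obs[:N_SCREWS] ** 2))
    done = abs(reward) < 1e-2
    return next_obs, reward, done

def finetune_with_rl(policy, env_step, episodes=100, horizon=25):
    """Phase 2 (assumed): simple REINFORCE-style fine-tuning against a simulated
    filter; a real system would likely use an actor-critic method instead."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    for _ in range(episodes):
        obs = torch.randn(STATE_DIM)  # assumed: observation of a detuned filter
        log_probs, rewards = [], []
        for _ in range(horizon):
            dist = torch.distributions.Normal(policy(obs), 0.1)
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            obs, reward, done = env_step(obs, action)
            rewards.append(reward)
            if done:
                break
        # Returns-to-go for each step, used as the policy-gradient weight.
        returns = torch.tensor(np.cumsum(rewards[::-1])[::-1].copy(), dtype=torch.float32)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy

if __name__ == "__main__":
    policy = TuningPolicy()
    # Assumed demonstration data; in practice this would come from recorded tuning runs.
    demo_obs = torch.randn(256, STATE_DIM)
    demo_actions = torch.tanh(torch.randn(256, N_SCREWS))
    pretrain_with_imitation(policy, demo_obs, demo_actions)
    finetune_with_rl(policy, toy_filter_step)

The point of the two phases is the one stated in the abstract: the imitation phase gives the policy a reasonable starting behaviour before Reinforcement Learning refines it, which is what allows the harder 13-screw model to be tuned consistently.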
