How does the performance of NEAT compare to Reinforcement Learning?

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Author: Marcus Andersson; [2022]


Abstract: This study examined the performance of Deep Reinforcement Learning (RL) relative to a neuroevolution algorithm called NEAT when both are used to train AIs in a discrete game environment. Many AI techniques are available today, among which NEAT and RL have become popular alternatives, and game-related research shows that these methods allow AI development to be automated. With the end of Moore's law, advances in computer hardware have shifted towards parallelism. NEAT and RL have similar, yet distinct, ways of training neural networks, both of which benefit from parallelism through repeated simulations. To evaluate the two approaches, a framework for statistical sampling is introduced, using levels that resemble problems from the game Super Mario Bros. Adjustments were made to simplify the experiments so that they could be performed with limited computational resources. Measurements indicated that NEAT reliably produces better AIs than RL in the studied problem classes. These findings offer guidance to game developers on which approach suits their needs. The results also showed that no human input is necessary for evolving solutions, but that local optima can sometimes slow down learning significantly.
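To make the neuroevolution side of the comparison concrete, the loop below is a minimal sketch of evolutionary training in the spirit of NEAT. It is heavily simplified and not the thesis's method: real NEAT also evolves network topology through structural mutations and protects innovation via speciation, whereas this sketch only mutates the weights of a fixed, tiny "network" on a toy regression task. All names, the task, and the hyperparameters are illustrative assumptions.

```python
import random

def new_genome(n_weights=3):
    # A genome here is just a flat list of weights (real NEAT genomes
    # also encode nodes and connections, i.e. the topology itself).
    return [random.uniform(-1, 1) for _ in range(n_weights)]

def predict(genome, x):
    # Tiny fixed-topology "network": one weight, one bias, one output gain.
    w, b, g = genome
    return g * (w * x + b)

def fitness(genome):
    # Toy objective: approximate f(x) = 2x + 1 on a few sample points.
    # In the thesis's setting, fitness would instead come from simulating
    # the agent in a game level.
    xs = [-1.0, 0.0, 1.0, 2.0]
    err = sum((predict(genome, x) - (2 * x + 1)) ** 2 for x in xs)
    return -err  # higher is better

def mutate(genome, sigma=0.2):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0, sigma) for w in genome]

def evolve(pop_size=50, generations=100, seed=0):
    random.seed(seed)
    population = [new_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 5]  # keep the best 20%
        offspring = [mutate(random.choice(elite))
                     for _ in range(pop_size - len(elite))]
        population = elite + offspring
    return max(population, key=fitness)

best = evolve()
```

Because each genome's fitness evaluation is independent, the inner evaluation loop parallelizes naturally across simulations, which is the property the abstract highlights as a good match for modern parallel hardware.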
