Towards Learning for System Behavior

This is a Master's thesis from KTH / School of Electrical Engineering and Computer Science (EECS)

Author: Liangcheng Yu; [2019]

Abstract: Traditional network management typically relies on clever heuristics to capture the characteristics of environments and workloads in order to derive an accurate model. While such a methodology has served us well in the early days, it is challenged by the growing intricacies of modern network design along various dimensions: the rocketing traffic volume, the proliferation of software applications and varied hardware, higher user-specific Quality of Experience (QoE) requirements with respect to bandwidth and latency, an overwhelming number of knobs and configurations, and so forth. All this surging complexity and dynamism makes it harder to understand the system and to derive management rules that reach a global optimum with heuristics fitted to the dynamic context. Driven by these challenges and encouraged by the success of machine learning techniques, this work elaborates on augmenting adaptive system behaviors with learning approaches. This thesis specifically investigates the use case of packet scheduling. The work explores the opportunity to augment systems to learn existing behaviors and to explore custom behaviors with Deep Reinforcement Learning (DRL). We show that existing canonical behaviors can be approximated with a generic representation, while the agent is also able to explore customized policies that are comparable to state-of-the-art approaches. The results demonstrate the potential of learning-based approaches as an alternative to canonical scheduling approaches.
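
To make the DRL framing concrete, the sketch below trains a small policy network with the REINFORCE policy-gradient method on a toy queue-serving task: the state is the per-queue backlog, the action is which queue to serve next, and the reward favors keeping the total backlog low. The environment dynamics, reward shape, network size, and hyperparameters (N_QUEUES, EPISODE_LEN, GAMMA, the learning rate) are illustrative assumptions only and are not the scheduler, state representation, or training setup used in the thesis.

# Illustrative sketch (not the thesis's implementation): a REINFORCE
# policy-gradient agent that picks which of several queues to serve,
# rewarded for keeping the total backlog small. Environment, reward,
# and hyperparameters are assumptions made for this example.
import torch
import torch.nn as nn

N_QUEUES = 4        # assumed number of flow queues
EPISODE_LEN = 50    # assumed scheduling decisions per episode
GAMMA = 0.99        # discount factor

# Generic policy representation: queue backlogs in, per-queue logits out.
policy = nn.Sequential(
    nn.Linear(N_QUEUES, 32), nn.ReLU(),
    nn.Linear(32, N_QUEUES),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def run_episode():
    """Roll out one episode; return action log-probabilities and rewards."""
    backlog = torch.rand(N_QUEUES) * 10.0          # initial per-queue backlog
    log_probs, rewards = [], []
    for _ in range(EPISODE_LEN):
        dist = torch.distributions.Categorical(logits=policy(backlog))
        action = dist.sample()                     # choose a queue to serve
        log_probs.append(dist.log_prob(action))
        backlog = backlog.clone()
        backlog[action] = torch.clamp(backlog[action] - 1.0, min=0.0)  # serve one unit
        backlog = backlog + torch.rand(N_QUEUES) * 0.3                 # new arrivals
        rewards.append(-backlog.sum().item())      # reward: small total backlog
    return log_probs, rewards

for episode in range(200):
    log_probs, rewards = run_episode()
    # Discounted return-to-go for each step, normalised as a simple baseline.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + GAMMA * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()   # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()

In the thesis's actual setting, the same policy-gradient idea would operate on richer packet- and flow-level state and on rewards derived from the target scheduling objective; this toy version only illustrates the mechanics of learning a scheduling policy by trial and error.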
