Regression with Bayesian Confidence Propagating Neural Networks

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: Bayesian Confidence Propagating Neural Networks (BCPNNs) are biologically inspired artificial neural networks. These networks model brain-like properties such as modular architecture, divisive normalization, sparse connectivity, and Hebbian learning. Recent research shows that, with additions such as structural plasticity, these networks achieve high performance on classification tasks. Research on applying BCPNNs to regression tasks, however, is scarce. In modeling such tasks, we aim to mimic continuous codes in the brain, which have been shown to underlie functions such as motor activity. In this thesis, we conduct a systematic analysis comparing a baseline model against extensions of the BCPNN architecture that use two encoding techniques, interval encoding and Gaussian Mixture Model (GMM) encoding, and we further extend the model with a Ridge regressor. Our analysis shows that the BCPNN model with GMM encoding outperforms the others, and that the encodings preserve sparse activity.
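To make the described pipeline concrete, the following is a minimal sketch, not the thesis's actual implementation: a continuous input is GMM-encoded into posterior responsibilities (a normalized, mostly-sparse population code, loosely analogous to unit activations within a BCPNN hypercolumn), and a Ridge regressor is read out on that encoding. The toy data, component count, and use of scikit-learn's GaussianMixture and Ridge are all assumptions for illustration.

```python
# Hypothetical sketch only; the thesis's BCPNN model, encoding details,
# and hyperparameters are not specified in this abstract.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + noise.
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

# GMM encoding: map each input to the posterior responsibilities of
# K Gaussian components; rows sum to 1 and are typically dominated by
# a few components, preserving sparse activity.
K = 20  # assumed number of components
gmm = GaussianMixture(n_components=K, random_state=0).fit(X)
Phi = gmm.predict_proba(X)  # shape (n_samples, K)

# Ridge readout on the encoded activity.
reg = Ridge(alpha=1.0).fit(Phi, y)
print("train R^2:", reg.score(Phi, y))
```

In this sketch the GMM plays the role of the encoder that turns a continuous variable into a distributed code; the Ridge layer then maps that code back to a continuous target, mirroring the regression extension the abstract describes.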
