Modelling Immediate Serial Recall using a Bayesian Attractor Neural Network

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: In recent decades, computational models have become useful tools for studying biological neural networks. These models are typically constrained either by behavioural data from neuropsychological studies or by biological data from neuroscience. One model of the latter kind is the Bayesian Confidence Propagation Neural Network (BCPNN), an attractor network with a Bayesian learning rule which has been proposed as a model for various types of memory. In this thesis, I have further studied the potential of the BCPNN for short-term sequential memory. More specifically, I have investigated whether the network can qualitatively replicate behaviours of immediate verbal serial recall, and thereby offer insight into the network-level mechanisms that give rise to these behaviours. The simulations showed that the model was able to reproduce various benchmark effects, such as the word-length and irrelevant-speech effects. It could also simulate the bow-shaped positional accuracy curve, as well as some backward recall when the to-be-recalled sequence was short enough. Finally, the model showed some ability to handle sequences with repeated patterns. However, the current model architecture was not sufficient for simulating the effects of rhythm, such as temporally grouping the inputs or stressing a specific element in the sequence. Overall, even though the model is not complete, it showed promising results as a tool for investigating biological memory, and it could explain various benchmark behaviours in immediate serial recall through neuroscientifically inspired learning rules and architecture.
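The abstract does not spell out the Bayesian learning rule itself. As a rough illustration only, a minimal sketch of the classic BCPNN weight form (weights as log-odds of co-activation, biases as log activation probabilities) might look as follows; the function name, the sample-average probability estimates, and the smoothing constant `eps` are my assumptions, not details taken from the thesis:

```python
import numpy as np

def bcpnn_weights(activity, eps=1e-6):
    """Estimate BCPNN-style weights and biases from unit activity.

    activity: (n_samples, n_units) array of activations in [0, 1].
    Uses the classic BCPNN form w_ij = log(p_ij / (p_i * p_j)) and
    b_i = log(p_i), where the probabilities p are estimated here
    simply as sample averages (an illustrative choice; real models
    typically use running exponential traces).
    """
    # Unit activation probabilities, with eps to avoid log(0)
    p_i = activity.mean(axis=0) + eps
    # Pairwise co-activation probabilities
    p_ij = (activity.T @ activity) / len(activity) + eps
    # Bayesian weight rule: positive for correlated units, negative otherwise
    w = np.log(p_ij / np.outer(p_i, p_i))
    # Bias term from marginal activation probability
    b = np.log(p_i)
    return w, b
```

Units that tend to fire together receive positive weights, anti-correlated units negative ones, which is what lets stored patterns act as attractors during recall.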
