
>> However, the mechanisms that connectionists usually propose for implementing memory are not plausible. Existing proposals are mainly variants upon a single idea: a recurrent neural network that allows reverberating activity to travel around a loop (Elman 1990). There are many reasons why the reverberatory loop model is hopeless as a theory of long-term memory. For example, noise in the nervous system ensures that signals would rapidly degrade in a few minutes. Implementationist connectionists have thus far offered no plausible model of read/write memory. [4.3 Systematicity and Productivity]

I wonder: is this information outdated?

A Neural Turing Machine (first described in 2014 by Alex Graves and colleagues) is a recurrent neural network architecture with an external memory store. Reads from and writes to the memory are controlled by an attention mechanism. A newer version is the Differentiable Neural Computer (first described in 2016, also by Graves and colleagues).
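
To make that concrete, here is a minimal sketch of the content-based addressing idea (NumPy, with toy dimensions and a made-up attention sharpness `beta`; not the full NTM, which also has location-based addressing and a learned controller):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Toy external memory: N slots, each a vector of width W.
    N, W = 8, 4
    memory = np.zeros((N, W))

    def read(memory, key, beta=10.0):
        # Content-based attention: weight each slot by its cosine
        # similarity to the query key, then return the weighted sum.
        norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
        w = softmax(beta * (memory @ key) / norms)
        return w @ memory, w

    def write(memory, w, erase, add):
        # NTM-style soft write: erase then add, gated by the same
        # kind of attention weights, so the whole op is differentiable.
        memory = memory * (1 - np.outer(w, erase))
        return memory + np.outer(w, add)

    # Store a pattern in slot 0, then retrieve it by content alone.
    w0 = np.eye(N)[0]
    memory = write(memory, w0, np.ones(W), np.array([1., 0., 1., 0.]))
    recalled, _ = read(memory, np.array([1., 0., 1., 0.]))
    print(recalled)  # ~ [1, 0, 1, 0]

Because every read and write is a soft, weighted operation rather than a discrete lookup, the memory can be trained end to end by gradient descent, which is the whole point of the architecture.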

The setup is not fundamentally different from Elman networks or Long Short-Term Memory networks, except in the mechanism by which "memory" is manipulated and by which storage, retrieval, or discarding of "memories" is decided. Even those mechanisms are closely related: in an LSTM, for instance, training the gates to decide when to "recall" a stored cell value is essentially similar to the "attention" mechanism.
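
For comparison, a bare-bones LSTM step (hypothetical weight matrices, biases omitted for brevity) shows where the memory lives in the "reverberatory" case: the cell state c persists only as activity carried around the recurrent loop, and the gates make the same kind of soft keep/write/recall decisions, just over a fixed-size internal vector rather than an addressable external store:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, Wf, Wi, Wo, Wg):
        # The cell state c is the "memory": it exists only as
        # recurrent activity, not as an external, addressable store.
        z = np.concatenate([x, h])
        f = sigmoid(Wf @ z)   # forget gate: what to keep in c
        i = sigmoid(Wi @ z)   # input gate: what to write into c
        o = sigmoid(Wo @ z)   # output gate: what to "recall" from c
        c = f * c + i * np.tanh(Wg @ z)
        h = o * np.tanh(c)
        return h, c

    # Toy usage with random weights (input size 3, hidden size 4).
    rng = np.random.default_rng(0)
    Wf, Wi, Wo, Wg = (0.1 * rng.normal(size=(4, 7)) for _ in range(4))
    h, c = lstm_step(rng.normal(size=3), np.zeros(4), np.zeros(4),
                     Wf, Wi, Wo, Wg)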

Is there a significant difference between an LSTM-based neural architecture with a "reverberatory" memory and one with an external memory store, both controlled by similar mechanisms?

I would say: yes.


