To save reading all of this lengthy post, my other responses to Jonathan are summarized in #2, #3 and #5. We can’t accurately describe these phenomena, let alone model them. Also, Chris has massively understated the differences between RAM and working memory. It would be cruel, and would severely diminish the capability of your fly. For instance, to make capacity vary you could make some of your RAM unavailable some of the time. Why would you do that?
To assume that the current method of simulating the spreading activation of a neuron – a lookup table – is accurate is to assume that we know all about how this addressing works. I assure you that we don’t. Some of my current work examines the effect of working memory load on inhibition (following on from others’ work, though I should add that I’m not affiliated with them in any way, shape or form). Are you trying to tell me that the amount of RAM available will affect how we traverse a neural-network lookup table? Because then the difference between working memory (which we don’t really understand either) and RAM becomes extremely important.
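To make the point concrete, here is a minimal sketch of what a lookup-table implementation of spreading activation looks like. Everything here is illustrative (the `spread` function, the decay factor, the toy network are my own assumptions, not any particular simulator), but it shows why RAM size is irrelevant to the traversal itself:

```python
# Toy sketch: spreading activation over a lookup table, i.e. a dict of
# node -> list of (neighbour, weight). All names are illustrative.

def spread(table, activations, decay=0.5):
    """One step of spreading activation: each active node passes a
    decayed, weighted share of its activation to its neighbours."""
    new = dict(activations)
    for node, act in activations.items():
        for neighbour, weight in table.get(node, []):
            new[neighbour] = new.get(neighbour, 0.0) + act * weight * decay
    return new

table = {
    "dog": [("cat", 0.8), ("bone", 0.6)],
    "cat": [("dog", 0.8), ("mouse", 0.7)],
}
state = spread(table, {"dog": 1.0})
# "cat" and "bone" now carry activation. Installing more RAM changes
# nothing about this traversal, whereas working-memory load empirically
# changes how humans retrieve and inhibit items.
```

The dict lookup behaves identically whether the machine has one gigabyte of RAM or a hundred, which is exactly why equating RAM with working memory glosses over the interesting part.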
Thus when Jonathan says “implement a neural network”, does he mean a current neural network, in which case it isn’t really very much like the brain, and thus not in conflict with this article at all? Or does he mean implement an accurate model of all functional aspects of the brain? Computers aren’t like that now, and we have no evidence they ever will be.
The simple fact is that arguing that the brain is analogous to a Turing machine is a dangerous thing to do. Theorists have described hypothetical machines (oracle machines, so-called hypercomputers) capable of solving the halting problem (for the uninitiated, that’s a problem no ordinary computer can solve). The brain may be a realisation of some such super-Turing machine. It is true that any parallel arrangement of Turing machines can be modelled by a single machine, but it is not certain that the brain can be modelled by a collection of parallel Turing machines.
The main thing is that you COULD make enough RAM act like working memory, but that is the same as saying you could make a fly look like a raisin.
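For what that imitation might look like: below is a toy capacity-limited buffer built on ordinary RAM. The capacity of 4 and the oldest-item displacement rule are illustrative assumptions of mine, not a model anyone endorses, which is rather the point:

```python
from collections import OrderedDict

class ToyWorkingMemory:
    """A toy buffer that makes ordinary RAM 'act like' working memory:
    a hard capacity limit, with the oldest item displaced on overflow.
    The capacity of 4 is an illustrative assumption, not an empirical
    claim about human span."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.items = OrderedDict()

    def hold(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)  # refreshed items survive longer
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # oldest item is displaced

    def recall(self, key):
        return self.items.get(key)

wm = ToyWorkingMemory()
for i in range(6):
    wm.hold(f"item{i}", i)
# only the four most recent items survive; the earliest are gone
```

It “works”, in the sense that the fly now has a raisin-shaped costume: the capacity limit is imposed by fiat rather than emerging from the mechanism, which is the opposite of how working memory appears to behave.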
Yeah, we’ve built “artificial neural networks”, but most of those are research simulations! Simulating analog processes on a digital system (or vice versa) tends to incur huge overheads, worsening the basic order of the computational cost, and the result still isn’t exact.
Simulating massively parallel systems on CPU-based systems is worse, and less reliable. The CPU version fundamentally has a time cost at least linear in the number of nodes and connections, whereas a true parallel system does not.
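A minimal sketch of that serial cost (the function and the toy weights are illustrative assumptions, not any real simulator): one update step of a dense net on a CPU visits every connection one at a time, so the work grows with nodes times connections, while a genuinely parallel substrate updates every node in the same tick.

```python
# Toy illustration of serial simulation cost: each output node's value
# is accumulated one connection at a time, so a single update step
# performs n_out * n_in individual operations.

def serial_step(weights, activations):
    """weights: one row per output node. Returns the outputs and the
    number of elementary multiply-add operations performed."""
    ops = 0
    out = []
    for row in weights:
        total = 0.0
        for w, a in zip(row, activations):
            total += w * a
            ops += 1  # every connection costs a separate serial step
        out.append(total)
    return out, ops

acts = [1.0, 2.0, 3.0]
weights = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
out, ops = serial_step(weights, acts)
# ops == 6: all 2 * 3 connections are visited one after another
```

Real neurons, by contrast, all integrate their inputs simultaneously, so the wall-clock time of a “step” doesn’t scale with the size of the network at all.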
Then too, our ability to “program” neural nets is frankly humbled by the ordinary development of almost any vertebrate’s nervous system.
It might well be possible to make something like “content-addressable” memory in the RAM model, but it would be a “bloody hack” with no connection to our usual programming schemes, or to a biological-style memory.
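One way such a hack might look (every name and the matching rule are my own illustrative assumptions): retrieval by partial content ends up as a brute-force scan of every stored pattern for the closest match, because in RAM the address and the content are unrelated.

```python
# A "bloody hack" sketch of content-addressable memory on top of RAM:
# to retrieve by content we must scan the entire store item by item,
# unlike a biological-style memory where the content is the address.

def recall_by_content(store, cue):
    """Return the stored string sharing the most positions with the cue.
    '?' in the cue matches anything (a degraded or partial probe)."""
    def score(pattern):
        return sum(1 for p, c in zip(pattern, cue) if c == "?" or p == c)
    return max(store, key=score)

memories = ["10110101", "00011110", "11100011"]
best = recall_by_content(memories, "1011??01")
# the degraded cue retrieves the full pattern "10110101", but only by
# exhaustively scoring every memory in the store
```

The lookup works, but its cost grows with the size of the store, and nothing about it resembles how our usual pointer-based programming schemes, or brains, retrieve things.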
“Computer models based on the detailed biology of the brain can help us understand the myriad complexities of human cognition and intelligence. Here, we review models of the higher level aspects of human intelligence, which depend critically on the prefrontal cortex and associated subcortical areas. The picture emerging from a convergence of detailed mechanistic models and more abstract functional models represents a synthesis between analog and digital forms of computation. Specifically, the need for robust active maintenance and rapid updating of information in the prefrontal cortex appears to be satisfied by bistable activation states and dynamic gating mechanisms. These mechanisms are fundamental to digital computers and may be critical for the distinctive aspects of human intelligence.”