The hype about the “neuron brain model” Spaun made me think of my skeptical FringeWare Review piece about storing or replicating human consciousness, “Consciousness in a Box.” Sci-fi culture has established the assumption that construction of an “artificial brain” is not only possible but inevitable; I’ve argued that it’s unlikely, if not impossible, to build a machine that replicates human cognition. Context is important: however we came to “think” in the way we do, to be conscious, sentient entities, that won’t be replicated in a bundle of switches, however slick, fast, and capable. Spaun, in fact, is somewhat less than the hype suggests:
The first thing to point out is that Spaun doesn’t learn anything. It can be arranged to tackle eight pre-defined tasks, and it doesn’t learn any new tasks or modify the way it performs existing tasks. The whole system is based on the Neural Engineering Framework (NEF), which can be used to compute the connection strengths needed to make a neural network do a particular task. If you want a neural net to implement a function of the inputs f(x), then NEF will compute the parameters for a leaky integrate-and-fire network that will do the job. This is an interesting approach, but it doesn’t show any of the plasticity that the real brain and real neural networks show.
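To make the “compute the weights offline” point concrete, here is a minimal sketch of NEF-style decoding, my own illustration rather than the actual Spaun or Nengo code: each neuron gets a fixed random tuning curve (a rectified-linear stand-in for the leaky integrate-and-fire rates the NEF actually uses), and the output weights are solved once by least squares so the population approximates a chosen f(x). Nothing here learns; the weights are computed, not adapted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 200
xs = np.linspace(-1, 1, 100)[:, None]          # sampled values of the input x

# Random encoders, gains, and biases give each neuron a fixed tuning curve.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)
activities = np.maximum(0.0, gains * (xs * encoders) + biases)  # (100, 200)

f = lambda x: x ** 2                            # the function we want implemented

# The "NEF step": solve for output weights (decoders) by least squares,
# so that activities @ decoders approximates f(x). Done once, offline.
decoders, *_ = np.linalg.lstsq(activities, f(xs), rcond=None)

estimate = activities @ decoders
error = np.max(np.abs(estimate - f(xs)))
print(f"max error approximating x^2: {error:.4f}")
```

The fixed random tuning curves act as a basis, and least squares picks the combination closest to the target function; changing the task means recomputing the weights, which is exactly why this approach shows no plasticity.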
If anything, this approach is more like the original McCulloch and Pitts networks, where artificial neurons were hand-crafted to create logic gates. For example, you can put neurons together to create a NAND gate, and from there you can use them to implement a complete computer – a PC based on a Pentium, say – using the neuronal NAND gates to implement increasingly complex logic. It would all work, but it wouldn’t be a thinking brain or a model of a neuronal computer.
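The NAND construction is simple enough to sketch. Below is a toy McCulloch–Pitts neuron (the function names and the specific weights are mine, chosen by hand for illustration): it fires when the weighted sum of its inputs reaches a threshold, and with weights of -1 and a threshold of -1 it behaves as NAND. Since NAND is universal, composing such neurons yields any Boolean circuit, which is precisely why the result is a computer rather than a brain.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: output 1 iff the weighted input sum
    reaches the threshold. The weights are fixed by hand, not learned."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def nand(a, b):
    # Fires unless both inputs are active: -a - b >= -1  <=>  a + b <= 1.
    return mp_neuron((a, b), weights=(-1, -1), threshold=-1)

def and_gate(a, b):
    # NAND is universal: feeding a NAND's output into another NAND
    # (as both inputs) inverts it, giving AND.
    n = nand(a, b)
    return nand(n, n)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NAND={nand(a, b)}  AND={and_gate(a, b)}")
```

Every gate here was wired by a human choosing weights and thresholds, just as the NEF computes Spaun’s connections: the network executes logic, but nothing in it decides, adapts, or learns.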
If we ever do build a “thinking machine” that is to any degree autonomous, I’m certain it won’t replicate human consciousness or thought processes – it’ll have its own way of “thinking.”
2 thoughts on “Spaun is not “consciousness in a box””
Maybe the context problem is one of hardware. When you can create a robot that moves like a human that a thinking machine can control and see through, maybe you’ll start to see more human behaviors.
Certainly you can mimic or simulate human behaviors with a machine, but my point is that the simulation is not representative of a human kind of thinking or conscious awareness. You won’t have something like human cognition.