February 3, 2010
Rat-Brained Robot
Posted by larvalsubjects under Assemblages, Object-Oriented Philosophy | 5 Comments

I came across this terrifying robot in a documentary a while back. In a nutshell, researchers have spliced neurons from a rat brain onto a computer chip. The chip then transmits signals to the robot, controlling its movement. As the neurons “experiment” with the movements of the robot, the neural network actually evolves (learns), developing its own behavior. This is a rather terrifying example of the sort of strange mereologies I’ve been talking about. Ordinarily we don’t think of neurons as entities or objects in their own right, but as parts of another object (a body), unable to exist on their own. Yet here we have a case of stratified objects, with objects wrapped inside of other objects. The neurons, when transplanted to the chip, become something other than they were, and new powers not present in the rat itself become manifest. The truly horrifying question for me is that of whether these neurons continue to have some form of consciousness when transplanted in this way. Is there some highly confused sentient being in this assemblage, thoroughly bewildered by the assemblage in which it finds itself and living an existence of shrieking pain? Here’s the video:
February 4, 2010 at 2:51 am
I share your horror, and commend your willingness to extend that sort of compassion.
February 4, 2010 at 9:41 am
Levi wrote:
The truly horrifying question for me is that of whether these neurons continue to have some form of consciousness when transplanted in this way.
They couldn’t continue to have consciousness, as they didn’t have it in the first place. The Reading team aren’t transplanting clumps of neurons from rat brains; they are taking neurons from rat foetuses and then artificially growing them in the lab. When they hook them up to the robot, the neurons are initially disconnected, and the robot ‘learns’ as they start connecting to each other.
The horror scenario then becomes whether the neurons could develop some form of rudimentary consciousness as they form connections. Have you seen this?
http://news.bbc.co.uk/1/hi/health/8497148.stm
Perhaps consciousness requires less brain tissue than was previously thought…
February 4, 2010 at 2:04 pm
I think we can safely assume that some sort of consciousness persists. Here I’d cite the numerous historical examples of trauma to human brains.
This cyber rat is another reason for us to push this question of ethics until it breaks, because it’s very clearly the case that we are already cyborgs and will continue to enter into larger assemblages with machines.
February 4, 2010 at 9:28 pm
I suppose it depends on what, exactly, you mean by consciousness (it’s a convoluted word that can probably be divided into at least a few subtypes), but it’s unlikely that under any normal conception, neurons ever have consciousness. That is, as I believe johneffay was suggesting, consciousness is a product of the interaction of neurons, not of the neurons themselves. At least, it works that way under every current model of consciousness.
The rat robot really isn’t all that much different from the robots that people like Rod Brooks have been making for more than a decade, which involve individual “neurons” (previously, individual electronic nodes) that are essentially little independent agents, which form connections with each other. Those connections determine how much influence each individual node will have in a given context. The robot learns by tweaking the individual connection strengths and their valence. It seems to me that unless we treat consciousness as a specifically biological property, then this robot is no more, and no less, likely to develop consciousness than the early non-rat robots.
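To make that learning mechanism concrete, here is a minimal toy sketch in Python of the idea described above: independent nodes whose signed connection strengths get nudged as the nodes co-activate. The names (Node, tweak_connections) and the update rule are illustrative assumptions, not the actual code behind the Reading robot or Brooks’s robots.

```python
import random

class Node:
    """A little independent agent: its activation depends on weighted input
    from the nodes it is connected to."""

    def __init__(self, name):
        self.name = name
        self.weights = {}       # other Node -> signed connection strength ("valence")
        self.activation = 0.0

    def connect(self, other, weight=0.0):
        self.weights[other] = weight

    def update(self, external_input=0.0):
        # Each connected node's influence is scaled by its connection strength.
        total = external_input + sum(w * other.activation
                                     for other, w in self.weights.items())
        self.activation = max(0.0, min(1.0, total))  # clamp to [0, 1]


def tweak_connections(node, rate=0.05):
    """Hebbian-style nudge: strengthen a connection when both nodes are active
    together, with a small decay term so weights stay bounded."""
    for other, w in node.weights.items():
        node.weights[other] = w + rate * (node.activation * other.activation - 0.5 * w)


if __name__ == "__main__":
    random.seed(0)
    nodes = [Node(f"n{i}") for i in range(4)]

    # Start with weak random connections (positive or negative) between all pairs.
    for a in nodes:
        for b in nodes:
            if a is not b:
                a.connect(b, random.uniform(-0.1, 0.1))

    # "Experiment": feed random sensory input, let activations propagate,
    # and let the connection strengths drift in response.
    for step in range(100):
        for n in nodes:
            n.update(external_input=random.random())
        for n in nodes:
            tweak_connections(n)

    for n in nodes:
        print(n.name, {o.name: round(w, 3) for o, w in n.weights.items()})
```

In the real systems the “nodes” are cultured neurons or dedicated hardware and the update rules are far richer, but the shape of the learning, connection strengths adjusted through interaction, is the same.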
February 5, 2010 at 12:17 am
also worth being aware of this: Brain control-Rat
http://bit.ly/Yp41T