
As a neuroscientist, I have the opposite intuition. If we swap out one neuron for a digital version, you'll be the same person. Do that 86B times and, at the end of the day, I see absolutely no reason why you would lose consciousness or your sense of self.


Neurons are only part of the brain (roughly 50% of its cells?). Glia have been shown to take part in the NMDA receptor repair/replace cycle [0]. And that's just some preliminary work. We've no idea what else the other cell types may or may not be doing yet. Just replacing neurons is unlikely to replicate a brain's functions; you need the whole caboodle.

[0] https://duckduckgo.com/?q=glia+cells+NMDA&atb=v173-1&ia=web


Do you intuit - or think, or whatever - as a neuroscientist, that the speed of this "swapping" matters?

What about "swapping" one-by-one vs all-at-once? Would that matter?

What I'm getting at - which is where things seem to go sideways when you think about it - is this: suppose you did the swap, but the artificial neurons remain outside the body, and you did it near-instantly rather than slowly, one by one. I'm speaking of relative speeds here; say "near-instantly" means the entire swap takes one second, vs. the slower approach where each individual swap takes a millisecond (or whatever).

Would that matter?

Now - what if, instead of swapping, you built a parallel simulation - recreating the brain neuron by neuron, but with the artificial version operating in lockstep with the original: when one neuron "fires", the corresponding artificial neuron "fires", the same "paths" are taken, etc.

Then you kill (choose your method and make it quick) the natural brain - instant swap? Or is there something different? Where does the "consciousness" go? If it is different, why is it any different than the "near-instant" swap?

Why would making that "lockstep-copy" matter, vs not making a "copy"?

I think you get what I am saying. Think on it a bit. There isn't a good answer that I am aware of.

Note: I'm assuming an "instant kill" - death to the natural brain faster than neural signals can travel neuron-to-neuron, ideally. We can posit the idea that if it were any slower (and especially if it were really slow) that the two brains would diverge in experiences, and would become two different "consciousnesses". But it does make you wonder why this should happen with a copy, vs not (in theory?) with a one-by-one swap. Heck - maybe there's an answer in there somewhere...
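
To make the "lockstep copy" scenario concrete, here's a toy sketch in Python - the threshold "neuron" and all the names are invented purely for illustration, not a model of how real neurons work. The copy replays every input event the original receives, so the two networks cannot diverge while both are running:

    import random

    class ToyNeuron:
        """Trivial threshold unit standing in for a biological neuron."""
        def __init__(self, threshold=1.0):
            self.threshold = threshold
            self.potential = 0.0

        def receive(self, amount):
            self.potential += amount
            if self.potential >= self.threshold:
                self.potential = 0.0
                return True   # the neuron "fires"
            return False

    class LockstepCopy:
        """Artificial network mirroring every input event of the original."""
        def __init__(self, size):
            self.neurons = [ToyNeuron() for _ in range(size)]

        def mirror(self, index, amount):
            # replay exactly the input the corresponding original neuron got
            return self.neurons[index].receive(amount)

    original = [ToyNeuron() for _ in range(5)]
    copy = LockstepCopy(5)

    for step in range(20):
        i = random.randrange(5)
        stimulus = random.random()
        fired_in_original = original[i].receive(stimulus)
        fired_in_copy = copy.mirror(i, stimulus)
        assert fired_in_original == fired_in_copy   # identical state, identical firing

The thought experiment is then: halt the original between two of those steps - what, if anything, changes for the copy?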


> Then you kill (choose your method and make it quick) the natural brain - instant swap? Or is there something different? Where does the "consciousness" go? If it is different, why is it any different than the "near-instant" swap?

This seems to be a non-sequitur. If we simply understand consciousness as a property of the brain, as long as you have 2 copies of the same brain, you naturally have 2 consciousnesses. The 2 may be equal or not (most likely not, given that random noise is extremely likely to play a role in brain processes), but they are definitely not a single object.


That only works if neurons do generate consciousness. The only thing we know is that their activity seems to correlate with consciousness. My guess is that you can't build a digital neuron, at least not one that has consciousness properties. That's because I take neurons to be what consciousness looks like when viewed through our senses.


A transistor junction does not have computational properties; it is only in aggregate that transistors can make a computer.

Supposing that the mind is the result of neurons' inherent consciousness properties is like attributing morphine's effect to its "dormitive virtue" - it does not advance our understanding.
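
A toy illustration of the aggregate point, in Python ("switch" stands in for a single junction; this is purely a sketch, not circuit design): a lone switch computes nothing of interest, but composed switches give you gates, and gates give you arithmetic.

    # A single "switch" does nothing interesting on its own...
    def switch(control, signal):
        return signal if control else 0

    # ...but wired together, switches yield logic gates, and gates yield arithmetic.
    def nand(a, b):
        return 0 if switch(a, switch(b, 1)) else 1

    def xor(a, b):
        return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

    def half_adder(a, b):
        return xor(a, b), 1 - nand(a, b)   # (sum, carry)

    print([half_adder(a, b) for a in (0, 1) for b in (0, 1)])
    # [(0, 0), (1, 0), (1, 0), (0, 1)]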


Neurons are much more complicated than that, and indeed may do computations on their own.


I am not sure of the relevance of that here, especially as the issue is not whether a computer could be made from neurons. My point is that the premise - that the mind is a result of neurons interacting - is not predicated on a requirement that each individual neuron has 'consciousness properties', whatever those might be. I cannot see how the computational abilities of neurons invalidate my analogy to silicon computers, where the computational properties are found only in aggregations of the basic building blocks, and not in the blocks themselves.

Also, it seems quite plausible that an aggregation of semiconductor junctions could have the same computational abilities as a neuron. This is, in fact, an active area of R&D.
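
As a rough illustration of what such an aggregate would have to do, here's the textbook leaky integrate-and-fire model in a few lines of Python (the parameter values are arbitrary placeholders, not fitted to any real cell):

    # Minimal leaky integrate-and-fire neuron: integrates input current,
    # leaks toward rest, fires and resets when it crosses a threshold.
    def simulate_lif(inputs, dt=1.0, tau=20.0, threshold=1.0, v_rest=0.0):
        v = v_rest
        spike_times = []
        for t, current in enumerate(inputs):
            v += dt * (-(v - v_rest) / tau + current)
            if v >= threshold:
                spike_times.append(t)
                v = v_rest
        return spike_times

    # A constant drive produces a regular spike train.
    print(simulate_lif([0.06] * 200))

Whether reproducing that input/output behaviour is enough for consciousness is, of course, exactly what's in dispute upthread.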


But what's special about neurons aside from the causal relationships they have with the rest of the system? If their utility to consciousness is nothing but those causal relationships, then a digital replica will support consciousness just the same.


"nothing but causal relationships" is indeed all you'd need, provided they go the materialist way, i.e. that consciousness comes from specific arrangements of neurons. I don't think that makes sense, as nobody can give the slightest hint of a theory that would make that work. The reverse theory, that consciousness is primary, makes more sense to me. In that theory, the causality works the other way, and digital replicas will not have consciousness, they will be a mirror of what consciousness looks like when seen through our senses.


>In that theory, the causality works the other way, and digital replicas will not have consciousness

Why not? Digital stuff is made up of matter as well. Unless there's something special about neurons when it comes to consciousness, i.e. the conscious bits reside in neurons and nowhere else for some reason, there's reason to think non-biological structures that produce the same output would also be conscious. Any substantive theory of consciousness, materialist or not, will need to include a place for structure and dynamics, i.e. information cascades, within their theory to account for the correlations we observe between conscious states and brain structure and dynamics. But these qualities are present in a digital implementation as well.


> I don't think that makes sense, as nobody can give the slightest hint of a theory that would make that work

I'm not sure what could possibly answer this problem, at least to the satisfaction of the people who think it is a problem. Personally, I'm relatively happy that Daniel Dennett's ideas demonstrate how what we think of as consciousness, qualia, etc. can be explained via mechanistic/algorithmic processes.


> and digital replicas will not have consciousness

They will also go on internet forums arguing that while they, of course, do have consciousness - the digital replicas of themselves would not!



