Will AI ever be conscious? Is it already? Nope

Earlier this week there was a debate on Consciousness in the Machine, basically asking whether machines can be conscious. In a somewhat different manner than I do, Bernardo Kastrup rejects the idea. Kastrup says it’s a hypothesis not worth entertaining, and that bad things follow from entertaining it. From Kastrup’s blog:

Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and AI computers process data. To find that similarity, however, one has to take several steps of abstraction away from concrete reality. After all, if you put an actual human brain and an actual silicon computer on a table before you, there is no correspondence of form or functional similarity between the two at all; much to the contrary. A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data through opening and closing electrical switches called transistors, is dry, etc. They are utterly different.

Later in the blog post, Kastrup elaborates that the positive argument (i.e. the one in favour of machine consciousness) basically amounts to “If brains can produce consciousness, why can’t computers do so as well?” Kastrup’s counterargument in the debate was, “If birds can fly by flapping their upper limbs, why can’t humans fly by doing so as well?” This was countered in turn with: if the Wright brothers had believed that only birds can fly, they wouldn’t have bothered to try to build an airplane, which is itself quite different from a bird.

In my view, this boils down to definitions: Do airplanes really fly or do they only simulate flying? Airplanes fly only in a metaphorical sense. Airplanes do not fly without a pilot, i.e. what flies is airplane+pilot. A conceivable counterargument is: But now we have drones! I’d reply: And we have rockets too. Are cannonballs conscious because they fly after having been launched from the cannon?

Another strong point Kastrup makes in the blog post is, “[If] there can be instantiations of private consciousness on different substrates, and that one of these substrates is a silicon computer, then you must grant that today’s ‘dumb’ computers are already conscious…” So perhaps, without your knowing it, your smartphone is already truly intelligent, and every time you turn it off you are killing a consciousness. Well, in the Unix world, various kill commands are the norm, so Unix people apparently don’t shy away from murder.

From the last point, namely that modern computers are potentially intelligent/conscious/alive already, it follows, according to Kastrup, that we should seriously consider the rights of AI entities. Anybody ready to go in that direction?

95 thoughts on “Will AI ever be conscious? Is it already? Nope”

  1. Erik,

    I’ll address Kastrup’s points later, but right now I’m interested in hearing about your own take on the secret ingredient of consciousness. What, in your view, do humans possess, and machines lack, that makes consciousness possible in humans but impossible in machines, now or in the future, no matter how advanced they become?

    I suspect your answer will be that the secret ingredient is a soul. Is that correct? If not, what is your answer?

  2. keiths: What, in your view, do humans possess, and machines lack, that makes consciousness possible in humans but impossible in machines, now or in the future, no matter how advanced they become?

    Perception and emotion.

  3. Do cats and dogs have perception and emotion?

    Snakes?

    Fish?

    Mosquitoes?

    Artificial consciousness seems inevitable to me, but will require a new kind of hardware. From what I read, this is already understood and in the works. Having no evidence, I believe it will not happen in my lifetime.

  4. keiths, to Erik:

    What, in your view, do humans possess, and machines lack, that makes consciousness possible in humans but impossible in machines, now or in the future, no matter how advanced they become?

    Neil:

    Perception and emotion.

    Is that because you think that humans possess a nonphysical component (like a soul, for instance) that is necessary for perception and emotion? A nonphysical component that no machine, no matter how advanced, will ever possess?

  5. keiths:

    Is that because you think that humans possess a nonphysical component (like a soul, for instance) that is necessary for perception and emotion?

    Neil:

    No.

    OK. So I take it that you believe that a) certain purely physical arrangements of particles, energy, momenta, etc., including the arrangements we find in functioning humans, are capable of perception and emotion, but b) it is impossible for any machine, present or future, to ever include such an arrangement. Is that a fair inference?

  6. keiths: but b) it is impossible for any machine, present or future, to ever include such an arrangement. Is that a fair inference?

    I wouldn’t agree with that, either. We might change what we mean by “machine”. People already disagree over what the word means.

  7. The more accessible question, for those of us who think organisms are related by common descent, is: will machines ever be able to behave like, say, an amphioxus?

    The obvious answer is yes. And from that follows …

  8. Joe Felsenstein: The obvious answer is yes.

    I have no doubt of the accuracy of common descent and the central role natural selection plays in the process of biological evolution. But the answer is not obvious to me. For machines to emulate amphioxus, they’d need to be self-sustaining and self-replicating. I think AI is far from being on that path.

    (I suspect I’m missing some nuance in Joe’s comment)

  9. Neil:

    I wouldn’t agree with that, either. We might change what we mean by “machine”. People already disagree over what the word means.

    Good point. When you say that machines will never be conscious, what meaning of ‘machine’ do you have in mind?

    PS Hi, Joe!

  10. petrushka: Artificial consciousness seems inevitable to me, but will require a new kind of hardware.

    Avoiding for the moment the problem of whether “consciousness” is a thing, I agree that silicon chips and wafers are a poor substitute for white matter. Assuming that trying to emulate how living brains and nervous systems work is key to progress in AI, a preliminary step would be to understand how cats function.

  11. Having invested a few minutes at Bernardo Kastrup’s blog reading the article Erik links to, it seems to me that establishing what consciousness is would be a prerequisite. Kastrup gives me the impression that his understanding of consciousness is of a binary property that entities either possess or do not.

  12. Neil Rickert: People already disagree over what the word means.

    Indeed.

    For “consciousness”, it is worse. Discussions seem to start at step two: whether some X is conscious, without first considering whether consciousness is a coherent concept.

  13. keiths: Good point. When you say that machines will never be conscious, what meaning of ‘machine’ do you have in mind?

    Evidence for my point. What meaning of “conscious” are people using?

  14. petrushka: Do cats and dogs have perception and emotion?

    Snakes?

    Fish?

    Mosquitoes?

    The point of contention may be located at plants. Many vegetarians/vegans are big on animal rights, but nobody has brought up plant and mushroom rights yet.

  15. Erik: …nobody has brought up plant and mushroom rights yet.

    Why would they? They lack consciousness.

    (Link to L Ron Hubbard measuring tomato pain too much trouble using phone)

  16. Dan Dennett explained consciousness in 1991. Critics claim he rather explains consciousness away. I’m reading a later work, Sweet Dreams (2004, nearly twenty years ago now), at the moment. I note Kastrup is not a fan of Dennett.

  17. Dennett makes a point (which he attributes to Leibniz) that the move from “we are unable to understand the machinery of consciousness” to “consciousness couldn’t be a matter of machinery” is a non-sequitur, one that continues to echo in debates following the publication of Consciousness Explained. I’m assuming “machinery” means purely physical processes in this instance.

  18. Here is an example of Kastrup on Dennett.

    ETA

    Kastrup writes:

    Obviously, raw subjective experience – that is, consciousness – isn’t an illusion: it is the only carrier of reality anyone can ever know.

    Defined this way, consciousness seems fine as a portmanteau word for physical processes that can be researched.

  19. keiths: When you say that machines will never be conscious, what meaning of ‘machine’ do you have in mind?

    I try to avoid saying that machines will never be conscious. I do not have a good definition of “machine”. However, as I use the word, humans are not machines. But I’m aware that some folk have a different view of that.

  20. petrushka: Do cats and dogs have perception and emotion?

    Yes.

    Snakes?

    Fish?

    Probably, but I am less sure about that than I am about cats and dogs.

    Mosquitoes?

    I am even less sure about mosquitoes.

  21. Erik: The point of contention may be located at plants. Many vegetarians/vegans are big on animal rights, but nobody has brought up plant and mushroom rights yet.

    You were probably being facetious here, but just for the record: as far as we can tell, consciousness is associated with nervous systems. Mosquitoes have them. Plants and mushrooms do not. And of course computers don’t have them either, so Kastrup’s argument is quite valid (which is not to say I agree with him, but he makes a fair point). Thanks for linking to his blog.

    Unfortunately, the same cannot be said of your argument.

    Do airplanes really fly or do they only simulate flying?

    To be frank, I don’t think you are really making an argument here. You only simulate making an argument. Of course, I’d agree that it is an argument in the metaphorical sense, but it clearly has a very different structure from real arguments.

  22. Neil:

    I try to avoid saying that machines will never be conscious.

    Apparently you don’t try very hard, because yesterday you said exactly that:

    keiths:

    What, in your view, do humans possess, and machines lack, that makes consciousness possible in humans but impossible in machines, now or in the future, no matter how advanced they become?

    Neil:

    Perception and emotion.

  23. keiths: I’ll address Kastrup’s points later, but right now I’m interested in hearing about your own take on the secret ingredient of consciousness. What, in your view, do humans possess, and machines lack, that makes consciousness possible in humans but impossible in machines, now or in the future, no matter how advanced they become?

    I suspect your answer will be that the secret ingredient is a soul. Is that correct? If not, what is your answer?

    @Erik And now I’ll add my support to this request. Kastrup’s argument carries some weight since we do not know how a collection of neurons firing at each other gives rise to awareness. But if you are of the opinion that we are conscious because God gave us a soul, then we can have a nice chat about why you think an omnipotent God cannot possibly imbue a computer with a soul.

  24. Joe Felsenstein:
    The more accessible question, for those of us who think organisms are related by common descent, is: will machines ever be able to behave like, say, an amphioxus?

    The obvious answer is yes. And from that follows …

    When something *behaves like* something else, does it imply common descent? Something like – birds fly, so do airplanes, therefore…?

  25. Erik:

    Do airplanes really fly or do they only simulate flying? Airplanes fly only in a metaphorical sense.

    Corneel:

    To be frank, I don’t think you are really making an argument here. You only simulate making an argument. Of course, I’d agree that it is an argument in the metaphorical sense, but it clearly has a very different structure from real arguments.

    Heh.

    Erik’s simulated arguments appear to be headed toward a weird rule:

    If some function X can be performed both by machines and by conscious creatures, it is simulated when performed by machines and genuine when performed by conscious creatures.

    He’s already told us that learning is only simulated when done by machines, and that artificial intelligence is only simulated intelligence. He’s even told us that computer arithmetic is only simulated arithmetic. Now he’s suggesting that airplane flight is only simulated flight. I wonder how far he’ll go with this.

    Erik, when a robot walks, is it only simulated walking? Would a robot sniper only be simulating sniping? Does this robot only simulate playing table tennis? Would a killer robot only be simulating killing?

  26. keiths: I suspect your answer will be that the secret ingredient is a soul. Is that correct? If not, what is your answer?

    The soul is not an ingredient. You cannot add a soul and then get something else or better. Maybe Charlie thinks it conceivable to sprinkle souls here and there, and Corneel definitely supposes so. But no, they are wrong.

    The soul is the essence. Properly speaking, living beings are souls and they have bodies that belong to the biosphere.

    Machines cannot have a soul because a soul is not something one has. One either is a soul or not. A machine is an assembly of material components.

    Physicalists of course think that everything is an assembly of material components. They are bound either to be frustrated at the fact that there are observable radical differences between inert material things and living beings or to bravely (and irrationally) conclude that no such differences ultimately exist and therefore living beings and material things can/should be treated the same.

  27. Erik:

    The soul is the essence. Properly speaking, living beings are souls and they have bodies that belong to the biosphere.

    Machines cannot have a soul because a soul is not something one has. One either is a soul or not. A machine is an assembly of material components.

    Your position seems self-contradictory to me.

    You’re saying that a machine is just an assembly of material components, and that it therefore isn’t a soul. Humans are souls. If humans are not just an assembly of material components, but they are souls, then there must be an extra nonphysical ingredient that makes them souls. Yet you say there is no such ingredient.

    Perhaps this question will help: If we had the technology to assemble full, functioning human bodies, would they be souls?

  28. Erik: Maybe Charlie thinks it conceivable to sprinkle souls here and there, and Corneel definitely supposes so. But no, they are wrong.

    The soul is the essence. Properly speaking, living beings are souls and they have bodies that belong to the biosphere.

    Machines cannot have a soul because a soul is not something one has. One either is a soul or not. A machine is an assembly of material components.

    Looks like we got to the heart of the matter rather quickly. Good to get it out in the open.

    Like keiths, I have trouble properly understanding your position. Human beings are souls, machines are not. Hence machines cannot become conscious. This I understand to be your position. So far so good.

    But:

    […] living beings are souls and they have bodies that belong to the biosphere.

    More specifically, conscious living beings are souls with bodies that are equipped with a nervous system. But why? And what makes you so sure that living beings can never have bodies that are built from non-organic material? You have not really answered keiths’ question but merely rephrased it.

  29. keiths: Perhaps this question will help: If we had the technology to assemble full, functioning human bodies, would they be souls?

    It doesn’t help, because it refuses to understand what a soul is. Also, the question does not understand what a human body is. Namely, it is a body in the biosphere where you do not just assemble things.

    And the question also does not understand what technology is. And there are probably more flaws in the question, but enough listed for now.

  30. keiths: Erik’s simulated arguments appear to be headed toward a weird rule:

    If some function X can be performed both by machines and by conscious creatures, it is simulated when performed by machines and genuine when performed by conscious creatures.

    I suppose it only counts if you know you are doing it.

  31. keiths:

    Perhaps this question will help: If we had the technology to assemble full, functioning human bodies, would they be souls?

    Erik:

    It doesn’t help, because it refuses to understand what a soul is.

    I am asking you to tell me what a soul is. You’ve said that humans are souls. Are capybaras souls? Kangaroo rats? Beetles? Paramecia?

    Also, the question does not understand what a human body is. Namely, it is a body in the biosphere where you do not just assemble things.

    OK, so let’s investigate the implications of your position. If I assemble an atom-for-atom duplicate of your human body, the duplicate is not a human body. And the reason it is not a human body, despite being identical to yours, is that I assembled it. And since it is not a human body, it is not a soul.

    Do you think that this atom-for-atom duplicate would behave identically to you? Or would its status as a non-soul make that impossible?

  32. Neil,

    I suspect Erik would respond as I do, by pointing out that the ‘soul’ of that title is metaphorical.

    Incidentally, I used to work at the company featured in the book (Data General) with some of the engineers it portrays. The book was written before my time, though, so I missed my shot at immortality.

    It’s an excellent book (if somewhat overdramatic at times) and it won the Pulitzer. I used to recommend it to friends and family who wanted to know what I did for a living and what it was like.

  33. There is no clear definition of what consciousness is, so how could anyone claim AI is self-aware? Are most people on this blog?

  34. The main problem here is: do humans want, or would like, to be self-aware? I’d say no, without running any surveys or stats.

  35. keiths: I suspect Erik would respond as I do, by pointing out that the ‘soul’ of that title is metaphorical.

    The metaphorical use of “soul” is the only way to use it.

  36. Neil,

    If your point is that actual souls don’t exist, then I agree, but the nonexistence of something doesn’t mean that it’s illegitimate to refer to that thing non-metaphorically. “The Easter Bunny doesn’t exist” is a legitimate, non-metaphorical reference, and so is “The soul would cause the laws of physics to be violated”, which is something I keep saying to CharlieM.

  37. Turing assumed that language use epitomized intelligence and would be the last bastion to fall.

    Surprisingly, the alligator brain is the hard nut.

    Emotion and perception.

    Invisible behaviors.

  38. Alan Fox:
    Dan Dennett explained consciousness in 1991. Critics claim he rather explains consciousness away. I’m reading a later work, Sweet Dreams (2004, nearly twenty years ago now), at the moment. I note Kastrup is not a fan of Dennett.

    I’m not a fan of that description of consciousness. It may describe the process of composing verbal statements, but I think language is a bag.

    “Bag” referencing “The Soul of a New Machine.”

  39. I think the nature of machine intelligence would depend strongly on the nature of its internal structures and processes. The human brain (and organic brains generally) isn’t known for signaling speed: messages don’t travel along nerves, or even within neurons, all that fast. Yet speed is becoming an issue in modern computers, simply because the speed of light is starting to be a meaningful limit. A signal traveling about 12 inches requires a whole nanosecond. Even within processors, I see the push for greater speed shifting away from raising clock rates and toward simply shrinking the distance between transistors. And I think this speed issue places real limits on the ability of von Neumann architectures to achieve the famous “singularity”, where computers start designing and building better versions of themselves in ways humans could never fully understand. But I think that singularity must be reached before computers can evolve anything we might think of as consciousness.
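
    To put rough numbers on that claim, here is a back-of-envelope sketch in Python (the 4 GHz clock is purely an illustrative figure of mine, not anything from a datasheet):

        # Propagation-delay arithmetic for the "12 inches per nanosecond" claim.
        # Assumes light speed in vacuum; real signals in copper or silicon are slower.
        C = 299_792_458          # speed of light, m/s
        INCH = 0.0254            # meters per inch

        distance_m = 12 * INCH   # ~0.305 m
        delay_s = distance_m / C
        print(f"12 inches at light speed: {delay_s * 1e9:.2f} ns")  # ~1.02 ns

        # At an assumed 4 GHz clock, one cycle lasts 0.25 ns, so that single foot
        # of travel costs about four cycles even in this best case.
        cycle_s = 1 / 4e9
        print(f"Clock cycles spent in transit: {delay_s / cycle_s:.1f}")  # ~4.1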

    Now, we’re starting to look at different architectures. Organic brains don’t get their abilities from fast neurons, but from the sheer density of connections and the capacity for a great many processes to happen simultaneously, with only the minimal necessary organization and coordination at a higher level. Organic brains prove that it’s possible to have a great many “processors” working and still get some useful overall function. An octopus has nine brains, one in its head and one in each arm, yet an octopus is quite smart and works as a single organism. Designers of future computers probably need to abandon the entire concept of a system bus, or even multiple buses.

    Some truly baby-step examples of quantum computing have been demonstrated, but these are so far limited to only a few qubits, and most of what goes on is error correction. Superposition, interference, and entanglement aren’t easy phenomena to capture in a reliable chunk of hardware. Even in theory, quantum computers will be fantastic at certain tasks and lousy at others, much like the human brain. We can’t copy books or sum many numbers in seconds, but we CAN recognize a face from the side when we’ve only seen it before from the front. Computers have a hard time with this, but we see faces in everything.

    So computers may become intelligent enough to be useful companions, but different architectures will have different strengths.

  40. petrushka:

    “Bag” referencing “Soul of a new Machine.”

    Haha! I remember that.

    (Petrushka is referring to the decision to make the new machine backward-compatible with the previous generation of machines known as the Eclipse line. Backward compatibility just means that a new machine is capable of running software written for an older one. Engineers complained that they were being asked to “put a bag on the side of the Eclipse”, conjuring up the image of some kind of unholy graft.

    The objection sounds quaint today, because backward compatibility is now absolutely mandatory. Present-day x86 machines are bags on the side of bags on the side of bags…on the side of bags on the side of the 8086.)

  41. Flint:

    The human brain (and organic brains generally) aren’t known for signaling speed – messages don’t travel along nerves or even neurons all that fast.

    Yeah, the speeds are only in the tens of meters per second, which is surprisingly slow. Indy cars are faster than that.

    With speeds that slow, parallelism was an evolutionary necessity.
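
    For scale, a similar back-of-envelope sketch (the conduction velocity is a textbook ballpark for a fast myelinated fiber, an assumption of mine rather than a figure from this thread):

        # Comparing neural conduction to an electrical signal in copper, roughly.
        nerve_speed = 50.0   # m/s, fast myelinated fiber (ballpark assumption)
        wire_speed = 2.0e8   # m/s, roughly two-thirds of c in typical interconnect

        distance = 1.0       # meters, e.g. spinal cord to foot
        print(f"Nerve:  {distance / nerve_speed * 1e3:.0f} ms")  # ~20 ms
        print(f"Copper: {distance / wire_speed * 1e9:.0f} ns")   # ~5 ns

        # A ratio of about four million: with signaling this slow, serial
        # processing was never an option, hence massive parallelism.
        print(f"Ratio: {wire_speed / nerve_speed:,.0f}x")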

  42. petrushka: I’m not a fan of that description of consciousness.

    Me neither. Discussion of “consciousness” usually ends up as an exercise in talking past each other. I suggest the reason for this is not just that there is no widely accepted definition of “consciousness” but that “consciousness” is not a coherent concept.

    I’d say the same of “soul”.

  43. Flint: …computers start designing and building better versions of themselves in ways humans could never fully understand.

    The only way this could happen is if the builds included variation and the fittest for purpose were selected. Over time…

  44. J-Mac: The main problem here is: do humans want, or would like, to be self-aware? I’d say no, without running any surveys or stats.

    This explains a lot about J-Mac’s comments.

  45. Flint: A signal traveling about 12 inches requires a whole nanosecond. Even within processors, I see the push for greater speed shifting away from raising clock rates and toward simply shrinking the distance between transistors.

    If you read the IBM article I linked, you would find that there is a concerted effort to devise architectures in which memory and CPU are unified. That is, there’s no transfer of data from storage to processor.

    Ironically, for the purposes of this discussion, the current chips just simulate this.

    This supposedly can improve performance a thousandfold without requiring esoteric stuff like quantum computing.

    At the cost of certainty in computation.

    But there are lots of tasks that favor speed over certainty.

  46. One of the amusing results of making a computer using a zillion multi-state “neurons” is that it becomes a soul, figuratively.

    One of the dark fears of programming is allowing threads to lose synch and make the results wrong or unpredictable.

    In the proposed IBM architecture, this would be a feature. Except, presumably, it would learn to become less wrong.
