Will AI ever be conscious? Is it already? Nope

Earlier this week there was a debate on Consciousness in the Machine, basically asking whether machines can be conscious. In a somewhat different manner than myself, Bernardo Kastrup rejects the idea. Kastrup says that it’s a hypothesis not worth entertaining, and from entertaining the idea bad things follow. From Kastrup’s blog,

Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and AI computers process data. To find that similarity, however, one has to take several steps of abstraction away from concrete reality. After all, if you put an actual human brain and an actual silicon computer on a table before you, there is no correspondence of form or functional similarity between the two at all; much to the contrary. A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data through opening and closing electrical switches called transistors, is dry, etc. They are utterly different.

Further in the blog post Kastrup elaborates that the positive argument (i.e. in favour of machine consciousness) basically amounts to “If brains can produce consciousness, why can’t computers do so as well?” Kastrup’s counterargument that came up in the debate was, “If birds can fly by flapping their upper limbs, why can’t humans fly by doing so as well?” Kastrup’s argument was countered with: if the Wright brothers had believed that only birds can fly, they wouldn’t have bothered to try and build an airplane, which is itself different from a bird.

In my view, this boils down to definitions: Do airplanes really fly or do they only simulate flying? Airplanes fly only in a metaphorical sense. Airplanes do not fly without a pilot, i.e. what flies is airplane+pilot. A conceivable counterargument can be: But now we have drones! I’d reply: And we have rockets too. Are cannonballs conscious because they fly after having been launched from the cannon?

Another strong point Kastrup makes in the blog post is, “[If] there can be instantiations of private consciousness on different substrates, and that one of these substrates is a silicon computer, then you must grant that today’s ‘dumb’ computers are already conscious…” So perhaps, without you guys knowing, your smartphone is truly intelligent already, and every time you turn it off you are killing consciousness. Well, in the Unix world, various kill commands are the norm, so Unix people apparently don’t shy away from murder.

From the last point, namely if modern computers are potentially intelligent/conscious/alive already, it follows, according to Kastrup, that we should seriously consider the rights of AI entities. Anybody ready to go in that direction?

95 thoughts on “Will AI ever be conscious? Is it already? Nope”

  1. Alan Fox: The only way this could happen is if the builds included variation and the fittest for purpose were selected. Over time…

    I don’t know if the analogy to biological evolution is that exact. I would see a different problem here, which has been phrased as: if a congress of gorillas were to get together to design a human being, what criteria would they use? They would probably try to improve what they are best at, designing for large size and great strength. Not being that intelligent themselves, they wouldn’t likely place a high priority on intelligence. Even us humans, when we contemplate a better human, tend to assume we’d want more of what WE are best at, so science fiction authors tend to design much smarter people, rather than people better adapted to thrive in future environments whose constraints (or even location) we can’t know.

    So our self-designing computers would probably design for small size, high speed, fast (but not error-free) memory access. I doubt they’d design for things like mobility, or maybe self-powering.

  2. petrushka: If you read the IBM article I linked, you would find that there is a concerted effort to devise architectures in which memory and CPU are unified. That is, there’s no transfer of data from storage to processor.

    Could you repost that link? I can’t find it in this thread.

  3. I’m sorry if I missed this point in the thread, but I just started reading and it is late.

    In short, I don’t think the real question is whether AI will ever achieve consciousness, but rather, given humanity’s tendency towards egocentrism and exceptionalism, would we ever acknowledge AI consciousness? Or would we just keep shifting the goalposts?

  4. Flint: Even us humans, when we contemplate a better human, tend to assume we’d want more of what WE are best at, so science fiction authors tend to design much smarter people, rather than people better adapted to thrive in future environments whose constraints (or even location) we can’t know.

    Agree. Daneel Olivaw is doomed to remain fictional.

  5. Acartia: would we ever acknowledge AI consciousness? Or would we just keep shifting the goalposts?

    Someone should make a serious attempt at explaining what consciousness is. Till that happens, I’m not sure there are any goalposts.

  6. Flint,

    An interesting paper was published recently analysing enigmatic patterns of dots and lines associated with neolithic cave drawing and painting that (I think) pushes back the beginning of proto-history maybe 20 to 30 thousand years. I’ll try and get round to posting an OP. The hypothesis is that people were recording counts and using them to predict events.

  7. keiths:
    Flint,

    It was in the ChatGPT thread.

    https://research.ibm.com/blog/the-hardware-behind-analog-ai

    Here’s another:

    In computing, much of the time and energy that is spent in processing is spent shifting electrons back and forth between a device’s processor and memory. For years, researchers at IBM have been working on developing analog in-memory computer chips, where the computing is carried out in the memory itself. The goal of these chips is both to save energy, and build devices that could be used for training and inferring with AI systems.
    What we’ve used computers to calculate has always needed to be precise. You can’t guess the flight path of a rocket, or hope your tax software will just figure out what you’re supposed to pay that year. But there are some things in life that don’t have to be quite as accurate. If you learn how to drive in one country, for example, you know that you’ll likely be able to figure out what a stop sign looks like in another country even if you’ve never seen one.
    At this year’s IEEE International Electron Devices Meeting (IEDM), IBM researchers are presenting work that details how future efficient analog chips could be used for deep learning, both for training and for inference.

    https://research.ibm.com/blog/why-we-need-analog-AI-hardware
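    The quoted IBM passage argues that analog in-memory chips trade exactness for energy efficiency, and that some workloads (like neural-network inference) tolerate the imprecision. A minimal numerical sketch of that idea — not IBM’s actual hardware or API, just weights perturbed with small Gaussian noise to stand in for analog device error:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One hypothetical neural-network layer: weights "stored" in a memory
    # array, with the matrix-vector product computed in place.
    W = rng.standard_normal((10, 64))   # 10 output classes, 64 inputs
    x = rng.standard_normal(64)         # input activations

    # Digital computation: exact.
    exact = W @ x

    # Analog-style computation: each stored weight is read back with a
    # small random error (here, 1% noise stands in for device variation).
    noisy = (W + 0.01 * rng.standard_normal(W.shape)) @ x

    # The per-element error is small relative to the spread of the scores,
    # which is why the winning class (the argmax) usually survives the
    # imprecision -- the intuition behind "good enough" analog inference.
    print(np.abs(noisy - exact).max())
    ```

    The sketch illustrates why the IBM researchers distinguish tasks that must be precise (rocket trajectories, tax software) from tasks like recognizing a stop sign, where a small, bounded error rarely changes the answer.
    
    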

  8. Alan Fox: Dan Dennett explained consciousness in 1991. Critics claim he rather explains consciousness away.

    Dennett says that consciousness is an illusion. How does this explain anything about consciousness?

    Alan Fox: Someone should make a serious attempt at explaining what consciousness is.

    See, Dennett was not explanatory enough for you after all…

    Humans calculate, sometimes with abacuses and computers. Abacuses and computers never calculate, except when humans use them for calculations. This is an easily observed categorical distinction. Consciousness makes the difference.

    Unless someone puts consciousness into computers (which, according to me, cannot happen and won’t), computers will not have any. But how do you put consciousness into computers? First you have to know what it is. Dennett says it is an illusion. How do you take an illusion from one thing and put it in another thing?

    Another physicalist assumption is that consciousness simply emerges when you add enough complex bells and whistles to anything, whatever it is. Fat chance.

  9. Alan:

    But what is consciousness, Erik?

    Alan,

    Erik’s OP reminded me of an OP I did a few years ago regarding Kastrup’s claim that consciousness cannot have evolved. I was looking through the comments there and came across this thoughtful reply by BruceS to your repeated demands for a definition. It’s worth reproducing here:

    Alan Fox:

    Sure, Bruce. For a fruitful discussion on qualia to ensue, a workable definition ought to be a prerequisite.

    Thousands of philosophers and scientists disagree: they don’t seem to have a problem doing published work on consciousness without a formal definition.

    Of course, they all start with a rough definition which you can find in their work or in summaries like IEP on qualia.

    That’s how science and philosophy work when they use terms, including those of everyday language. The scientific or philosophical understanding of the term is not an if and only if definition; rather, it is implicit in the resulting theoretical claims which detail the initial rough definition.

    So it is often better to think of theoretical terms as cluster concepts. When theories differ, the elements of the cluster need not completely overlap. In biology, this is evident for terms like ‘natural kind’, ‘fitness’, or ‘biological function’ (eg see Sandwalk’s series on biological function and genome).

    That’s why I think it is wrongheaded to keep asking for a precise definition before starting inquiry.

  10. Alan Fox:
    But what is consciousness, Erik?

    You realise the irony of trying to explain consciousness to one who denies consciousness, don’t you? The special irony is that consciousness is a prerequisite when one wants to deny anything.

  11. Erik: You realise the irony of trying to explain consciousness to one who denies consciousness, don’t you?

    An explanation is the goal in trying to understand some phenomenon. But the beginning of the process presumably involves some preliminary idea of the properties of the phenomenon. There doesn’t seem much consensus on what properties “consciousness” has or does not have.

  12. Erik: The special irony is that consciousness is a prerequisite when one wants to deny anything.

    Are you saying “consciousness” is simply thinking?

  13. Alan Fox: An explanation is the goal in trying to understand some phenomenon. But the beginning of the process presumably involves some preliminary idea of the properties of the phenomenon. There doesn’t seem much consensus on what properties “consciousness” has or does not have.

    Consensus has nothing to do with it. Also, when you always assume a “phenomenon”, you will not find everything that can be found. Particularly when we are looking for consciousness, which is that which must inevitably exist before anything can be found.

    Alan Fox: Are you saying “consciousness” is simply thinking?

    Too much irony. You read Dennett and you thought you learned something, but you did not. Since even books cannot help you, how can an internet comment help you?

  14. Erik: Consensus has nothing to do with it.

    I disagree. Consensus on what words mean is important for effective communication.

  15. Erik: Also, when you always assume a “phenomenon”, you will not find everything that can be found.

    It’s not me assuming phenomena. I’ve merely wondered out loud what people who bandy “consciousness” around mean when they use the word.

  16. Alan:

    I’ve merely wondered out loud what people who bandy “consciousness” around mean when they use the word.

    In my experience, they usually mean one or more of the following:

    1. Subjective awareness. Thomas Nagel’s pithy criterion is that X is conscious if it is “like something” to be X. Hence Nagel’s paper “What Is It Like to Be a Bat?”

    2. Access consciousness. Ned Block’s term for the kind of consciousness that involves holding something in one’s mind for the purposes of reasoning about it. He contrasts that with ‘phenomenal consciousness’, which is the same thing as subjective awareness.

    3. Self-awareness. Conceiving of yourself as yourself. Something the mirror test is designed to assess.

  17. I would advocate abandoning the attempt to define consciousness and to start thinking about how it evolved.

    The actual history is lost, but we have living species that are similar to ancient species.

    It would be interesting to study the minimum brain complexity necessary for certain kinds of behavior. I am more interested in modeling mosquito brains using the same number of components, then working from there.

  18. petrushka:

    I would advocate abandoning the attempt to define consciousness and to start thinking about how it evolved.

    Alan:

    I agree.

    Alan, just a few days ago you wrote this:

    Discussion of “consciousness” usually ends up as an exercise in talking past each other. I suggest the reason for this is not just there is no widely accepted definition of “consciousness” but that “consciousness” is not a coherent concept.

    So consciousness isn’t even a coherent concept, but we should immediately start thinking about how it evolved?

  19. keiths: So consciousness isn’t even a coherent concept, but we should immediately start thinking about how it evolved?

    Humans evolved. Consciousness either evolved (if people mean alertness, self-awareness, a physical property or properties possessed by humans but not exclusively) or consciousness is an incoherent concept.

    I doubt discussion of human evolution is enhanced by considering consciousness as other than a physical (albeit incoherent) and heritable aspect of humans. As petrushka points out, looking at other species and their nervous systems WRT awareness and self-awareness has been and should continue to be a fruitful line of research.

  20. Alan,

    You are saying that consciousness is an incoherent concept, but that we should investigate how consciousness evolved.

    That isn’t, um, coherent.

  21. keiths,

    Nope. I’m saying the way to improve our knowledge of humans is to consider human evolution. Being sidetracked by talking about consciousness as if it were a coherent concept is what I’m suggesting avoiding. But if other folks want to persist with the word, that’s fine, so long as they make clear what they mean by it. (The legitimate use of the word in the Glasgow Coma Scale has been superseded.)

    What do you mean by consciousness when you use the word?

  22. keiths: [claims AF said] …we should investigate how consciousness evolved.

    Without looking, I’m guessing I either never said that or I misspoke.

  23. keiths:

    You are saying that consciousness is an incoherent concept, but that we should investigate how consciousness evolved.

    That isn’t, um, coherent.

    Alan:

    Without looking, I’m guessing I either never said that or I misspoke.

    You agreed with petrushka when he suggested this:

    petrushka:

    I would advocate abandoning the attempt to define consciousness and to start thinking about how it evolved.

    Alan:

    I agree.

  24. Also, you wrote this:

    I doubt discussion of human evolution is enhanced by considering consciousness as other than a physical ( albeit incoherent) and heritable aspect of humans.

    To consider consciousness as a physical, incoherent, and heritable aspect of humans, as you suggest, is nonsensical. If consciousness were incoherent, it couldn’t exist, and there would be no use in considering it as a physical and heritable aspect of humans.

  25. keiths: If consciousness were incoherent, it couldn’t exist…

    What couldn’t exist? Something you call consciousness, presumably. What is that, according to Keiths?

  26. Definition Troll:

    What couldn’t exist? Something you call consciousness, presumably. What is that, according to Keiths?

    Dear Definition Troll,

    Please see this comment.

  27. I think some famous person once said the existence of consciousness is self evident. Or words to that effect.

    A phenomenon is neither coherent nor incoherent, but attempts to describe or define it can be.

    Implicit in any description or definition is the assumption that we know something about it.

    There is an elephant in the room, and we are blind men.

  28. petrushka:

    A phenomenon is neither coherent nor incoherent, but attempts to describe or define it can be.

    If a concept is incoherent, like the concept of libertarian free will, then its referent cannot exist. Alan claims that the concept of consciousness is incoherent. If he were actually right about that, then consciousness could not exist, and he would be advocating the study of how a nonexistent phenomenon evolved.

  29. petrushka: Implicit in any description or definition is the assumption that we know something about it.

    Or at least have some idea.

    There is an elephant in the room, and we are blind men.

    Reasonable analogy. But is there really an elephant? Do the bits belong together?

  30. Brains are what happens when you keep adding features, and never do a rewrite from scratch.

  31. petrushka:
    Brains are what happens when you keep adding features, and never do a rewrite from scratch.

    To believe this requires an abandonment of any sense of categories.

    Bernardo Kastrup:
    I can run a detailed simulation of kidney function, exquisitely accurate down to the molecular level, on the very iMac I am using to write these words. But no sane person will think that my iMac might suddenly urinate on my desk upon running the simulation, no matter how accurate the latter is. After all, a simulation of kidney function is not kidney function; it’s a simulation thereof, incommensurable with the thing simulated. We all understand this difference without difficulty in the case of urine production. But when it comes to consciousness, some suddenly part with their capacity for critical reasoning: they think that a simulation of the patterns of information flow in a human brain might actually become conscious like the human brain. How peculiar.

    https://iai.tv/articles/bernardo-kastrup-the-lunacy-of-machine-consciousness-auid-2363

  32. And yet, there are dialysis machines. Go figure.

    Hint: computers are not artificial brains.

    I do not know if it is possible to build devices that follow the architecture of brains, but I suspect it is.

    The attempt is in its infancy.

  33. Computers were originally people.

    What we have on desktops is artificial computers, not artificial humans.

    We have built devices that do symbolic reasoning, and do such a good job that they have replaced nearly all human computers, clerks, weather forecasters, market analysts. They do any task that can be done by manipulating data.

    And they are so good they can write college-level papers better than all but the smartest humans. They will soon be doing medical diagnoses, and they will do it better than all but the very best doctors. In the years to come they will be reading and writing contracts, and advising lawyers in the most difficult cases.

    Reason is not what makes us human. Our animal ancestry is what makes us aware and conscious.

  34. petrushka: We have built devices that do symbolic reasoning, and do such a good job that they have replaced nearly all human computers, clerks, weather forecasters, market analysts. They do any task that can be done my manipulating data.

    I have worked office jobs all this century. I wish I had access to devices that would automate all the tedious tasks I have to do, tasks that I know could be done a thousand times faster with appropriate software instead of the sucky tools that are imposed on us. The promised future that is allegedly already here is nowhere to be seen. I can only slightly improve my own condition by secretly installing this and that, using it, and hoping the employer does not notice.

    But yeah, you can keep on saying that office jobs are in the past. Really, you can. Not much harm in simply saying stuff.

  35. Sorry about your personal situation, but it is equivalent to a construction worker pounding nails with a rock. Or a hammer, for that matter.

    This discussion is about what is possible. It’s in the title.

    My comment was about most, not about all. Nearly all computation is done by machine, and data entry is rapidly being replaced.

  36. petrushka: This discussion is about what is possible.

    Replacement of market analysts, doctors, lawyers, judges etc. by AI is not possible. Automating some of their tasks is. But when the automation is allowed to lead to less skilled humans at the workplace, the problem will build up to a crisis, which brings many manual tasks back again. We have seen some of these cycles when global outsourcing went haywire.

  37. Erik: But when the automation is allowed to lead to less skilled humans at the workplace, the problem will build up to a crisis, which brings many manual tasks back again.

    Actually, I tend to agree. Good satiric treatment of this in the movie, Idiocracy.

    But then I think about actual medical diagnostics, and how much is done by machines, and think, nah, machines are better.

    Much of the stupidity of current software is due to programming error, or limitations in programming.

    ChatGPT is a sign that future systems will bypass the programming bottleneck.

  38. “Will AI ever be conscious? Is it already? Nope”
    AI will never be conscious vs those who claim to be conscious as it requires to be conscious in the first place.

  39. J-Mac:
    “Will AI ever be conscious? Is it already? Nope”
    AI will never be conscious vs those who claim to be conscious as it requires to be conscious in the first place.

    So if you ask your friendly neighborhood AI “are you conscious” and it replies “yes I am, aren’t you?” what do you conclude?

  40. Flint: So if you ask your friendly neighborhood AI “are you conscious” and it replies “yes I am, aren’t you?” what do you conclude?

    You could just as well say that Google Search replies to you, gives you nice sensible answers, and is therefore conscious.

    An AI never replies anything. It only simulates replying. This is an insurmountable point. As revealed in the other thread, computers do not even compute truly. When examined closely, computers do not know arithmetic. To assume that an AI knows anything more than that is fallacious. Whatever AI says, there can follow no conclusion about its consciousness.

    Some people hear wind speaking to them. What do you conclude? It’s not a conclusion about the wind, is it?

  41. Flint: So if you ask your friendly neighborhood AI “are you conscious” and it replies “yes I am, aren’t you?” what do you conclude?

    I conclude you know nothing, because nobody can prove he/she/they are conscious…
    Furthermore, nobody knows what consciousness is or how to define it. Even most Christians can’t explain the Lazarus phenomenon: where was Lazarus’ consciousness when he was dead for 4 days…
