Earlier this week there was a debate on Consciousness in the Machine, basically asking whether machines can be conscious. In a somewhat different manner than myself, Bernardo Kastrup rejects the idea. Kastrup says that it’s a hypothesis not worth entertaining, and from entertaining the idea bad things follow. From Kastrup’s blog,
Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and AI computers process data. To find that similarity, however, one has to take several steps of abstraction away from concrete reality. After all, if you put an actual human brain and an actual silicon computer on a table before you, there is no correspondence of form or functional similarity between the two at all; much to the contrary. A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data through opening and closing electrical switches called transistors, is dry, etc. They are utterly different.
Further in the blog post Kastrup elaborates that the positive argument (i.e. in favour of machine consciousness) basically amounts to “If brains can produce consciousness, why can’t computers do so as well?” Kastrup’s counterargument that came up in the debate was, “If birds can fly by flapping their upper limbs, why can’t humans fly by doing so as well?” Kastrup’s argument was countered with: if the Wright brothers had believed that only birds can fly, they wouldn’t have bothered to try and build an airplane, which is itself different from a bird.
In my view, this boils down to definitions: Do airplanes really fly or do they only simulate flying? Airplanes fly only in a metaphorical sense. Airplanes do not fly without a pilot, i.e. what flies is airplane+pilot. A conceivable counterargument can be: But now we have drones! I’d reply: And we have rockets too. Are cannonballs conscious because they fly after having been launched from the cannon?
Another strong point Kastrup makes in the blog post is, “[If] there can be instantiations of private consciousness on different substrates, and that one of these substrates is a silicon computer, then you must grant that today’s ‘dumb’ computers are already conscious…” So perhaps, without you guys knowing, your smartphone is truly intelligent already, and every time you turn it off you are killing a consciousness. Well, in the Unix world, various kill commands are the norm, so Unix people apparently don’t shy away from murder.
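For the curious, the everyday Unix “murder” in question looks something like this (a minimal sketch; the victim process is illustrative):

```shell
# Start a long-running process, then terminate it, Unix-style.
sleep 60 &             # our allegedly conscious victim
pid=$!                 # PID of the background process
kill "$pid"            # the polite request to die: SIGTERM
wait "$pid" 2>/dev/null || true
echo "process $pid terminated"   # kill -9 (SIGKILL) would be the impolite version
```

By default `kill` sends SIGTERM, which the process may catch and handle; SIGKILL cannot be caught, which is presumably why it has the worse reputation.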
From the last point, namely if modern computers are potentially intelligent/conscious/alive already, it follows, according to Kastrup, that we should seriously consider the rights of AI entities. Anybody ready to go in that direction?
I don’t know if the analogy to biological evolution is that exact. I would see a different problem here, which has been phrased as: if a congress of gorillas were to get together to design a human being, what criteria would they use? They would probably try to improve what they are best at, designing for large size and great strength. Not being that intelligent themselves, they wouldn’t likely place a high priority on intelligence. Even we humans, when we contemplate a better human, tend to assume we’d want more of what WE are best at, so science fiction authors tend to design much smarter people, rather than people better adapted to thrive in future environments whose constraints (or even location) we can’t know.
So our self-designing computers would probably design for small size, high speed, fast (but not error-free) memory access. I doubt they’d design for things like mobility, or maybe self-powering.
Could you repost that link? I can’t find it in this thread.
It was in the ChatGPT thread.
I’m sorry if I missed this point in the thread, but I just started reading and it is late.
In short, I don’t think the real question is whether AI will ever achieve consciousness, but rather, given humanity’s tendency towards egocentrism and exceptionalism, would we ever acknowledge AI consciousness. Or would we just keep shifting the goal posts?
Agree. Daneel Olivaw is doomed to remain fictional.
Someone should make a serious attempt at explaining what consciousness is. Till that happens, I’m not sure there are any goalposts.
An interesting paper was published recently analysing enigmatic patterns of dots and lines associated with neolithic cave drawing and painting that (I think) pushes back the beginning of proto-history maybe 20 to 30 thousand years. I’ll try and get round to posting an OP. The hypothesis is that people were recording counts and using them to predict events.
Dennett says that consciousness is an illusion. How does this explain anything about consciousness?
See, Dennett was not explanatory enough for you after all…
Humans calculate, sometimes with abacuses and computers. Abacuses and computers never calculate, except when humans use them for calculations. This is an easily observed categorical distinction. Consciousness makes the difference.
Unless someone puts consciousness into computers (which, according to me, cannot happen and won’t), computers will not have any. But how do you put consciousness into computers? First you have to know what it is. Dennett says it is an illusion. How do you take an illusion from one thing and put it in another thing?
Another physicalist assumption is that consciousness simply emerges when you add enough complex bells and whistles to anything, whatever it is. Fat chance.
But what is consciousness, Erik?
Erik’s OP reminded me of an OP I did a few years ago regarding Kastrup’s claim that consciousness cannot have evolved. I was looking through the comments there and came across this thoughtful reply by BruceS to your repeated demands for a definition. It’s worth reproducing here:
You realise the irony of trying to explain consciousness to one who denies consciousness, don’t you? The special irony is that consciousness is a prerequisite for denying anything.
An explanation is the goal in trying to understand some phenomenon. But the beginning of the process presumably involves some preliminary idea of the properties of the phenomenon. There doesn’t seem much consensus on what properties “consciousness” has or does not have.
Are you saying “consciousness” is simply thinking?
Consensus has nothing to do with it. Also, when you always assume a “phenomenon”, you will not find everything that can be found. Particularly when we are looking for consciousness, which is that which must inevitably exist before anything can be found.
Too much irony. You read Dennett and you thought you learned something, but you did not. Since even books cannot help you, how can an internet comment help you?
I disagree. Consensus on what words mean is important for effective communication.
It’s not me assuming phenomena. I’ve merely wondered out loud what people who bandy “consciousness” around mean when they use the word.
In my experience, they usually mean one or more of the following:
1. Subjective awareness. Thomas Nagel’s pithy criterion is that X is conscious if it is “like something” to be X. Hence Nagel’s paper “What Is It Like to Be a Bat?”
2. Access consciousness. Ned Block’s term for the kind of consciousness that involves holding something in one’s mind for the purposes of reasoning about it. He contrasts that with ‘phenomenal consciousness’, which is the same thing as subjective awareness.
3. Self-awareness. Conceiving of yourself as yourself. Something the mirror test is designed to assess.
Prediction: AI will help define consciousness.
I would advocate abandoning the attempt to define consciousness and instead starting to think about how it evolved.
The actual history is lost, but we have living species that are similar to ancient species.
It would be interesting to study the minimum brain complexity necessary for certain kinds of behavior. I am more interested in modeling mosquito brains using the same number of components, then working from there.
Alan, just a few days ago you wrote this:
So consciousness isn’t even a coherent concept, but we should immediately start thinking about how it evolved?
Humans evolved. Either consciousness evolved (if by it people mean alertness or self-awareness, a physical property or properties possessed by humans, though not exclusively) or consciousness is an incoherent concept.
I doubt discussion of human evolution is enhanced by considering consciousness as other than a physical (albeit incoherent) and heritable aspect of humans. As petrushka points out, looking at other species and their nervous systems with regard to awareness and self-awareness has been, and should continue to be, a fruitful line of research.
You are saying that consciousness is an incoherent concept, but that we should investigate how consciousness evolved.
That isn’t, um, coherent.
Nope. I’m saying the way to improve our knowledge of humans is to consider human evolution. Being sidetracked by talking about consciousness as if it were a coherent concept is what I’m suggesting avoiding. But if other folks want to persist with the word, that’s fine, so long as they make clear what they mean by it. (The legitimate use of the word in the Glasgow Coma Scale has been superseded.)
What do you mean by consciousness when you use the word?
Without looking, I’m guessing I either never said that or I misspoke.
You agreed with petrushka when he suggested this:
Also, you wrote this:
To consider consciousness as a physical, incoherent, and heritable aspect of humans, as you suggest, is nonsensical. If consciousness were incoherent, it couldn’t exist, and there would be no use in considering it as a physical and heritable aspect of humans.
What couldn’t exist? Something you call consciousness, presumably. What is that, according to Keiths?
Dear Definition Troll,
Please see this comment.
I think some famous person once said the existence of consciousness is self evident. Or words to that effect.
A phenomenon is neither coherent nor incoherent, but attempts to describe or define it can be.
Implicit in any description or definition is the assumption that we know something about it.
There is an elephant in the room, and we are blind men.
If a concept is incoherent, like the concept of libertarian free will, then its referent cannot exist. Alan claims that the concept of consciousness is incoherent. If he were actually right about that, then consciousness could not exist, and he would be advocating the study of how a nonexistent phenomenon evolved.
Or at least have some idea.
Reasonable analogy. But is there really an elephant? Do the bits belong together?
Brains are what happens when you keep adding features, and never do a rewrite from scratch.
To believe this requires an abandonment of any sense of categories.
And yet, there are dialysis machines. Go figure.
Hint: computers are not artificial brains.
I do not know if it is possible to build devices that follow the architecture of brains, but I suspect it is.
The attempt is in its infancy.
Computers were originally people.
What we have on desktops is artificial computers, not artificial humans.
We have built devices that do symbolic reasoning, and do such a good job that they have replaced nearly all human computers, clerks, weather forecasters, and market analysts. They do any task that can be done by manipulating data.
And they are so good they can write college-level papers better than all but the smartest humans. They will soon be doing medical diagnoses, and they will do it better than all but the very best doctors. In the years to come they will be reading and writing contracts, and advising lawyers in the most difficult cases.
Reason is not what makes us human. Our animal ancestry is what makes us aware and conscious.
I have worked office jobs all this century. I wish I had access to devices that would automate all the tedious tasks I have to do, tasks that I know can be done a thousand times faster with appropriate software instead of the sucky tools that are imposed on us. The promised future that is allegedly already here is nowhere to be seen. I can only slightly improve my own condition by secretly installing some this and that, using it, and hoping the employer does not notice.
But yeah, you can keep on saying that office jobs are in the past. Really, you can. Not much harm in simply saying stuff.
Sorry about your personal situation, but it is equivalent to a construction worker pounding nails with a rock. Or a hammer, for that matter.
This discussion is about what is possible. It’s in the title.
My comment was about most, not about all. Nearly all computation is done by machine, and data entry is rapidly being replaced.
Replacement of market analysts, doctors, lawyers, judges etc. by AI is not possible. Automating some of their tasks is. But when the automation is allowed to lead to less skilled humans at the workplace, the problem will build up to a crisis, which brings many manual tasks back again. We have seen some of these cycles when global outsourcing went haywire.
Actually, I tend to agree. Good satiric treatment of this in the movie Idiocracy.
But then I think about actual medical diagnostics, and how much is done by machines, and think, nah, machines are better.
Much of the stupidity of current software is due to programming error, or limitations in programming.
ChatGPT is a sign that future systems will bypass the programming bottleneck.
“Will AI ever be conscious? Is it already? Nope”
AI will never be conscious, unlike those who claim to be conscious, since making that claim requires being conscious in the first place.
So if you ask your friendly neighborhood AI “are you conscious” and it replies “yes I am, aren’t you?” what do you conclude?
You could just as well say that Google Search replies to you, gives you nice sensible answers, and is therefore conscious.
An AI never replies anything. It only simulates replying. This is an insurmountable point. As revealed in the other thread, computers do not even compute truly. When examined closely, computers do not know arithmetic. To assume that an AI knows anything more than that is fallacious. Whatever AI says, there can follow no conclusion about its consciousness.
Some people hear wind speaking to them. What do you conclude? It’s not a conclusion about the wind, is it?
I conclude you know nothing, because nobody can prove that he/she/they are conscious…
Furthermore, nobody knows what consciousness is or how to define it. Even most Christians can’t explain the Lazarus phenomenon: where was Lazarus’s consciousness while he was dead for four days…