In a short essay, Bernardo Kastrup argues that consciousness cannot be the product of evolution:
Consciousness Cannot Have Evolved
I disagree, but I’ll leave my objections for the comment thread.
No sorry, you are the one being imprecise.
One condition means that ANYTHING that decides the outcome must be the same. So whether we are talking about computers (or your brain, the way you have defined it), OF COURSE the circumstances of the environment within which it is producing the outcome can only mean one condition = one outcome. How could anyone think otherwise? It’s nonsensical. It seems you haven’t thought about this.
If you ask a computer that forecasts the weather whether it will rain today, how could you think you can separate the computer from the factors which cause it to produce the outcome? THE FACTORS DECIDE THE OUTCOME. If you ask it on a day when it will probably rain, then you are going to get a different answer than if you ask it on a day when it probably won’t rain. Same computer!
Now you are trying to say, well, forget about the factors, just think of the condition of the computer. Well, the condition of the computer is affected by the factors, for Pete’s sake. Just the same as YOU are affected by the factors of the environment. Did you think somehow someone was suggesting they are separate?
So, if you NOW say that, outside factors being the same, and internal factors of the computer being the same, only ONE outcome is possible under your paradigm, then great, we agree. Then YOU cannot make a decision. Whatever the combination of outside factors and internal factors, only ONE outcome is possible for you.
I just don’t agree with your paradigm.
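The deterministic picture being argued over can be made concrete. A minimal sketch, treating the forecaster as a pure function of its inputs; the thresholds and variable names are invented for illustration:

```python
def forecast(pressure_hpa: float, humidity_pct: float) -> str:
    # A toy deterministic "forecaster": the output is fixed entirely by
    # the inputs (the external factors) and this code (the internal state).
    if pressure_hpa < 1000 and humidity_pct > 80:
        return "rain"
    return "no rain"

# Identical conditions always yield the identical outcome...
assert forecast(995.0, 90.0) == forecast(995.0, 90.0) == "rain"
# ...while different conditions (a different day) can yield a different one.
assert forecast(1020.0, 40.0) == "no rain"
```

Run twice with identical arguments, the function necessarily returns the identical string; whether that counts as the computer “deciding” is exactly what the thread disputes.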
keiths,
The only mystery now would be what the heck it is you mean by a decision. If the internal factors and external factors combined can only produce ONE outcome and not TWO, then what does a decision mean? The meaning of decision now can only mean the result, not a “choice”. There is no choice, if only one result is possible.
THIS is the logical problem you haven’t overcome, even though you keep claiming you have.
phoodoo,
I’m not trying to say that. I’m pointing out that your suboptimal writing left your intended meaning of “one condition” unspecified.
One outcome is possible, but a choice is still made. We’ve been over and over this.
Second, it doesn’t depend on physicalism as you seem to think. Do you understand why?
An outcome was produced, but not a choice. You are being imprecise. A computer doesn’t make a choice, it produces an outcome. That outcome is ENTIRELY based on the combination of external input, and internal configuration.
There are no other factors. There is no choice being made.
This is turning into another case of you claiming you have resolved an issue with your argument when you have not.
phoodoo:
Does this ring a bell?
phoodoo:
keiths:
Those definitions work just fine when the outcome of the choice is predetermined.
Think about it, phoodoo.
Think about it: what does “pick out” or “select” mean?
You have used the term “pre-determined”, where the “pre” is unnecessary. The outcome is determined. It is not selected; it is forced by the conditions.
It’s not selected any more than a ball on a roulette wheel selects where it lands. It just looks more complicated when you can’t see all the factors.
phoodoo,
Sure it is. Before the car’s decision is made, there are two or more alternatives for it to consider. It considers each of them in turn in order to determine which one is the best. It then picks that one.
Fits the definition perfectly.
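The consider-the-alternatives-and-pick-the-best procedure described here can be sketched as a scored search over options; the scoring function and the lane data are hypothetical:

```python
def choose(alternatives, score):
    """Examine each alternative in turn and keep the highest-scoring one."""
    best, best_score = None, float("-inf")
    for alt in alternatives:
        s = score(alt)            # evaluate this alternative
        if s > best_score:        # it beats the best seen so far
            best, best_score = alt, s
    return best

# Toy example: a car choosing a lane by (hypothetical) clearance in metres.
lanes = {"left": 12.0, "centre": 35.0, "right": 8.0}
assert choose(lanes, lanes.get) == "centre"
```

With a fixed set of alternatives and a fixed scoring function, the result is fully determined, which is precisely the point under dispute: the code fits the dictionary procedure of considering and picking, while leaving only one possible outcome.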
Car? Does a roulette ball make a decision? It chooses to land on either red or black?
Nonsense.
phoodoo,
The roulette ball doesn’t examine the alternatives and select the best one. It doesn’t fit the definition. The car does fit the definition.
keiths,
You keep talking about a car, what car??
You have now added another criterion to your definition: examining. A computer doesn’t examine. It may seem like examining, because, you know, sometimes lights flash and stuff, but that is not examining. It doesn’t have senses to examine with; it doesn’t look, feel, and touch. I can understand why you would like to incorporate the term examining into the discussion, because you are aware of the differences between a computer and a mind. But you are destroying your own argument by doing so.
Do you think a calculator examines? When you press 1+1 on a calculator, does it examine that, then choose the best response?
Sorry, your argument is not solid at all, Keiths.
I am not a big fan of the style where you just keep repeating “choose the best one” and then claim you are saying something valid. It’s not a selection if one condition only equals one outcome. There is no choice involved. A calculator doesn’t choose the answer to 1+1. Two is the only option.
From the SEP article I linked upthread:
[start of quote]
Searle 2010 describes the conclusion in terms of consciousness and intentionality:
[…]
Searle’s shift from machine understanding to consciousness and intentionality is not directly supported by the original 1980 argument. However the re-description of the conclusion indicates the close connection between understanding and consciousness in Searle’s later accounts of meaning and intentionality.
[end of quote]
The experiment is supposed to show that one cannot get semantics from syntax alone; that is what “intentionality” refers to. I think consciousness is related to understanding — Searle says the man following the instructions does not understand Chinese. I take understanding, for Searle, to require the possibility of consciousness of meaning, e.g. in the sense that we can consciously think about and respond to the question “what do you mean by that?”
I realized later I should have been clearer about my understanding of “rule-following”. Informally, it means carrying out a set of precise instructions literally and by rote. More formally, it means what is described as an effective procedure in this SEP article on the Church-Turing thesis (and so rule-following is also a key part of what Turing machines do):
https://plato.stanford.edu/entries/church-turing/
So when you say “data matching”, to me that implies rule following — namely the rules used to match data. Physical computers running software involve multiple layers of rule following; that’s the point of my hardware reference upthread.
Another point: you talk about programming in commands as what human programmers do. I’m not sure what you mean by that, but for me the programming languages used by human programmers all must involve three concepts: sequence, selection, iteration. If programming in commands just involves sequence for you, then it is not enough to capture what programming languages involve.
https://irisiri.weebly.com/sequence-selection-and-iteration.html
I’ll leave how selection relates to human choosing for you and Keith.
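The three concepts (sequence, selection, and iteration) can be shown in a few lines; a minimal Python sketch, not taken from the linked page:

```python
def count_evens(numbers):
    count = 0                 # sequence: statements run one after another
    for n in numbers:         # iteration: repeat a block over a collection
        if n % 2 == 0:        # selection: branch on a condition
            count += 1
    return count

assert count_evens([1, 2, 3, 4, 5, 6]) == 3
```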
BruceS,
My more pragmatic conclusion would simply be that, with enough data, one could present the illusion of understanding.
I think the same thing is happening with Keiths. He thinks that if a self-driving car has enough data, it somehow understands that data, and thus can make a decision based on that data. When in reality it is just a bunch of small calculators, wired together and coming up with complex results, that, without a detailed look into the calculations, give the illusion of choice.
BruceS,
Yes, I don’t disagree with that. I think each of those concepts is still essentially a form of if-thens.
I suppose matching is another part of the equation when you get to database referencing, like some programs have to do. If you ask a computer a question about Selena Gomez, it has to find within its database the things that have been tagged with that reference, and figure out what best matches based on the order of the question, etc. I don’t know which of those three concepts you would call the matching process.
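The matching process described above is itself rule following: iterate over tagged records and select those whose tags overlap the query. A toy sketch (the records and tags are invented for illustration):

```python
# A toy "database" of tagged facts; the entries are invented.
records = [
    {"tags": {"selena gomez", "music"}, "fact": "is a singer"},
    {"tags": {"selena gomez", "film"},  "fact": "acts in films"},
    {"tags": {"weather"},               "fact": "it may rain"},
]

def lookup(query_tags):
    # Iteration over the records, selection of those whose tags overlap
    # the query, then ranking by the size of the overlap.
    matches = [r for r in records if r["tags"] & query_tags]
    return sorted(matches, key=lambda r: len(r["tags"] & query_tags),
                  reverse=True)

hits = lookup({"selena gomez", "music"})
assert hits[0]["fact"] == "is a singer"
```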
Well, that’s what the Chinese Room claims to demonstrate with the data being the rule book the man follows.
Given that argument, for a scientific explanation of human understanding, you have to accept one of the replies or create one of your own, and then incorporate that into an approach to cognitive science. The Computational Theory of Mind article I linked covers many such attempts.
Sorry, I am not going to get into the choice topic, aside from the comment that you seem to be making a claim similar in spirit at least to the Chinese Room argument.
Are you in some time zone where this is a normal time of day? My excuse is insomnia (I am in Toronto).
Yes.
All I am saying about the Chinese Room argument is that I don’t find it useful to explain anything, other than that we can be fooled. People can also do puppet shows and make it seem they are really talking. It’s not very explanatory with regard to life. It doesn’t help us to understand when we are NOT being fooled, so there seems little point.
You are right that it does not explain anything.
But I take the point of the argument to be that there is something that needs to be explained, namely the difference between human understanding and computer syntactic rule following.
Searle says the explanation lies in human biology.
CTM says that, in essence, there is no difference, although the details matter. Precisely which details is unresolved and is the subject of ongoing research and philosophical/scientific controversy.
Simple engineering question:
Is it possible (for humans) to make something whose behavior is too complex to predict?
BruceS,
Yup. Those who claim there has to be some essential difference between a brain (or an individual with a brain) analysing sensory information and acting on it, and a sufficiently complex computer analysing sensory information and acting on it (as a sufficiently well-designed driverless car control system might), need to explain why computation depends on the medium.
I vote no.
To “avoid that whole briar patch of epistemology” is reminiscent of the drunk man looking for his keys under the light. We should not be avoiding an undertaking purely on the grounds that it is going to be difficult to get through.
When you talk about “the given” as used by Sellars and I talk about “the given” as used by Steiner, we are talking about two different things.
Correct me if I’m wrong but my understanding is that Sellars’ “given” is something that is known in a fundamental way without any effort on the part of the knower. Steiner’s “given” is the opposite of this. It is everything and anything that enters my sphere of apprehension prior to my activity in trying to understand it.
I think that Sellars and Steiner would have agreed that what is given through our senses contains no information that we would be able to gain knowledge of without activity on our part. Do you agree?
So Steiner begins by trying to understand cognition itself without making any prior assumptions. He states that, “when the better-known systems of epistemology are more closely examined it becomes apparent that a whole series of presuppositions are made at the beginning, which cast doubt on the rest of the argument” and in Truth and Knowledge, Introduction to The Philosophy of Freedom he says:
Steiner argues that Kant’s question, “How are synthetical judgments a priori possible?” is not free of presuppositions and so sets us in the wrong direction right from the start.
The consistency and strength of the cable doesn’t matter if the load it is supporting is of no use to anyone and is just so much excess baggage.
How about a dice tossing machine?
I do have somewhere to go with this.
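A dice-tossing machine is deterministic yet practically unpredictable, because tiny differences in the initial throw are amplified. The logistic map, a standard toy chaotic system used here purely as an analogy, shows the same effect:

```python
def trajectory(x0, r=3.9, steps=60):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Perfectly deterministic: the same start always gives the same end state.
assert trajectory(0.2) == trajectory(0.2)

# But a microscopic change in the starting condition gets amplified until
# the two runs bear no resemblance: determined, yet unpredictable in
# practice without perfect knowledge of the initial throw.
print(abs(trajectory(0.2) - trajectory(0.2 + 1e-12)))
```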
The first thing I would note here is that fibre optic transmission uses radiation outside of the visible spectrum.
What does the fact that we perceive a range of colours have to do with fibre optic transmission? What is being transmitted, colours or light energy? Both have to do with light, but these are not the same thing.
What they say is that the exact functional (i.e. input/output) behaviour implemented by the medium depends on the details of the medium, e.g. to properly capture brains, one needs to capture the functions made possible by the biochemistry and architecture of neurons, systems of interconnected neurons, hormones, and other brain components.
That is called neurofunctionalism.
OK. So is this supposed to be binary? Could not an accurate enough model emulate the behaviour of neurons, architecture, the connections, the changing (dendrite growth and atrophy) of connections, the effect of hormones to produce an “artificial” brain? Of course it cannot be more complex than a human brain!
Don’t see any problem there.
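Modelling neuron behaviour accurately enough is the standard computational-neuroscience move; the usual minimal example is a leaky integrate-and-fire neuron. A sketch with illustrative (not physiological) parameters:

```python
def lif_spikes(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    zero, integrates the input, and emits a spike on crossing threshold."""
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v += dt * (-v / tau + i)   # leak term plus input drive
        if v >= v_thresh:
            spikes.append(t)       # record the spike time
            v = v_reset            # reset the voltage after a spike
    return spikes

# A constant drive produces a regular spike train.
assert len(lif_spikes([0.15] * 50)) == 4
```

Of course this only reproduces the input/output behaviour of one idealized neuron; whether stacking enough such functions captures everything a brain does is exactly the functionalism question at issue.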
Alan Fox,
Do you really think you can make a computer that can be sad?
Yes, that is what neurofunctionalism means. In particular, functionalism means that there is nothing to being X beyond the functional behavior of X.
If it walks like a duck and quacks like a duck then it is a duck. Or as Dennett puts it:
[start of quote]
“Functionalism is the idea that handsome is as handsome does, that matter only matters because of what matter can do. Functionalism in this broadest sense is so ubiquitous in science that it is tantamount to a reigning presumption of all of science.”
[end of quote]
The neuro bit emphasizes that it is the functions of brain processes and structures that are of concern.
And to get back to the OP, Goff and Kastrup disagree that duplicating brain functions is enough. They argue that functional behavior, and so science, cannot capture the qualitative nature of subjective experience.
But they don’t agree on the best alternative — Goff says bottom-up panpsychism, but Kastrup rejects this (and top down cosmopsychism) in favor of idealism.
I’d need you to be more specific to respond to this. If you are just making an observation I’ll leave it at that.
That is not what I am claiming. I gave that as an example of observation without prejudgement.
By observing consciousness in ourselves and in the world around us without jumping to conclusions about cause and effect, I’m sure we can come to some agreements about its attributes.
As Goethe said, colours are the product of the interactions of light and darkness. Spectrophotometers work by manipulating attenuated light. The results of spectrometry are not dependent on light containing colours. The colours are products of the activity; they are not “in the light”. Light is invisible.
Quote from Chalmers:
‘When I was in graduate school, I recall hearing “One starts as a materialist, then one becomes a dualist, then a panpsychist, and one ends up as an idealist”. I don’t know where this comes from, but I think the idea was something like this. First, one is impressed by the successes of science, endorsing materialism about everything and so about the mind. Second, one is moved by the problem of consciousness to see a gap between physics and consciousness, thereby endorsing dualism, where both matter and consciousness are fundamental. Third, one is moved by the inscrutability of matter to realize that science reveals at most the structure of matter and not its underlying nature, and to speculate that this nature may involve consciousness, thereby endorsing panpsychism. Fourth, one comes to think that there is little reason to believe in anything beyond consciousness and that the physical world is wholly constituted by consciousness, thereby endorsing idealism.’
https://philpapers.org/archive/CHAIAT-11.pdf
The thinking-about-thinking trap! We can’t understand ourselves just by thinking about it. There is mileage in a bottom-up approach – better models of simpler systems – but I guess economics will govern where research money is spent.
If it works for him, best of luck! 🙂
No.
Well why not, what’s preventing it? We are just computers.
I am incompetent to discuss the philosophical issues here, but I wonder why no one is discussing things from an evolutionary standpoint. Instead of asking how a human consciousness could come into existence, start with (say) a worm, a bilaterally symmetric tubelike thing with muscles, a mouth at the front, and some nerve cells to sense the environment. Are there ways that natural selection could improve the connections between nerves and muscles? Could improve the firing patterns of the nerves to make the worm orient better toward food or away from predators? Could make ganglia bigger and have their nerve connections do a better job of turning the nerve signals into effective behavior? Are there ways that such a nervous system could have states that reflect recent perceptions in addition to immediate ones?
And does this get us closer to a beast that can process inputs algorithmically? Or have I misunderstood the issues, and failed to understand that the “consciousness” of monkeys, or mice, is irrelevant, that we must be discussing human consciousness and only that? (Or that all this was covered somewhere upthread in the 500 comments not all of which I read?)
(And to anyone who points out that I have started by assuming that evolution happened and that natural selection is the reason we have such effective adaptations, of course I did — this is not addressed to people who want to disbelieve those assumptions).
I have to agree.
“Consistent with this hypothesis, Gordon Gallup found that chimps and orangutans, but not little monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests. The concept of consciousness can refer to voluntary action, awareness, or wakefulness.”
https://en.m.wikipedia.org/wiki/Animal_consciousness
Proof for the evolution of consciousness…
It also depends on other factors, such as whether a computer chip fails in the middle of the computation.
I agree with phoodoo on this point. We talk of computers making decisions, but that talk involves using metaphors.
That definition fits pragmatic decision making far better than it fits logical conclusions.
Pragmatism is in; logic is out.
Logic is still available as a pragmatically chosen tool. But at its core, choosing is pragmatic rather than logical.
I’d say that it is useful. But it does not demonstrate what Searle claims to show.
It is useful for making clear the distinction between semantic decisions and syntactic decisions. The argument makes the case that the computer uses purely syntactic operations, and does not touch the semantics (in the normal sense of semantics rather than in the sense of “formal semantics”).
This stark distinction is well known to mathematicians and computer programmers. But it is not nearly as familiar to people whose background is in the humanities.
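The syntax/semantics distinction can be made concrete: a purely syntactic responder manipulates tokens by shape alone. A toy Chinese-Room-style rule book (the entries are invented and mean nothing to the program):

```python
# A "rule book" mapping input token shapes to output token shapes.
# The program matches and copies symbols; it never consults their meaning.
rule_book = {
    ("你", "好"): ("你", "好", "吗"),
    ("谢", "谢"): ("不", "客", "气"),
}

def room(symbols):
    # Purely syntactic: compare the token tuple for equality against the
    # rule book and emit the listed response, with a fixed fallback.
    return rule_book.get(tuple(symbols), ("不", "懂"))

assert room(["你", "好"]) == ("你", "好", "吗")
```

Every operation here is formal symbol manipulation; any appearance of understanding lives in the rule book's author, not in the program, which is the point the Chinese Room argument presses.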
Joe Felsenstein,
I think that an evolutionary perspective is extremely helpful, but it doesn’t resolve the philosophical issue. (From which one might conclude, so much the worse for philosophy.)
The evolution of complexity in neurocomputational processes, beginning with an ancestral bilateral worm, still leaves open the philosophical question as to why that process is accompanied by increasing degrees of awareness.
That’s Searle’s failure right there.
He is dealing with a philosophical question, not a biological question. It is a mistake to pass the buck to biology.
Are you asking this as an “in principle” question or an “in practice” question?
There is a difference between a brain (or, really, a person with a brain) acting in the world, and a computer acting in the world.
And I suppose that’s a trivial point, because a computer doesn’t actually have a world. Computation is abstract.
Wouldn’t worry about that, Joe. Plenty of others are pitching in without the least competence; including me.
I’m totally in agreement with you that evolution is the only plausible route to sentient organisms such as humans. Regarding consciousness, it’s apparently trolling to ask what people mean when they use the word, and what it is about “consciousness” that prevents “it” emerging incrementally through evolution, as all other aspects of sentient awareness, self-awareness and the ability to think have.
🙂
But there are, in my view, differing levels of awareness and self-awareness across the animal kingdom. Those branch tips can be linked to a convincing extent by bringing in fossils and phylogenetics.
But we can feed sensory information into the model, surely?
Pretty much all of my thinking about human cognition is from an evolutionary standpoint. That’s probably why just about everyone thinks I am obviously wrong.
Well, no, that’s not how I have been thinking of it. That way of thinking is probably a pathway into the trap of computationalism.
No, it’s the trap into thinking we can think our way through into solving how we think. We are misled by the first-person experience of thinking into thinking that we thus know how we think. At least I think so! 😉
Information is abstract. It does not provide a world.
I was not attempting to solve the problem of how we think. Instead, I was looking at the question of how we learn. And that question seemed to be the key to everything.