In a short essay, Bernardo Kastrup argues that consciousness cannot be the product of evolution:
Consciousness Cannot Have Evolved
I disagree, but I’ll leave my objections for the comment thread.
Be brave, what is one more post in guano?
We gain a sure starting point in our quest to understand reality.
An example of a method which makes careful observations and lets them speak for themselves can be found in Goethe’s “Theory of Colours”.
An article quoted from below gives a good account of the way Goethe dealt with colour.
Between Light and Eye: Goethe’s Science of Colour and the Polar Phenomenology of Nature by Alex Kentsis
He quotes Goethe:
He writes that in Goethe’s phenomenology, “the higher phenomenon did not appear to the senses. Instead, it was discovered within the sensory.”
He did not imagine some unobtainable hidden reality lying behind the senses. He used his senses combined with memory and thinking to bring reality within his grasp.
Kentsis continues:
And something I think Kantian Naturalist would agree with in part at least:
Philosophers like Daniel Dennett do not begin by observing and letting these observations speak to them, they begin by assuming that the prevailing physicalist account of evolution is true and then theorising from there.
Goethe trusted his senses, not to give him a true account of reality but to enable him to approach reality. Rather than dismissing sense phenomena as false, he used them to reach higher phenomena through experience.
In the classic brain-in-a-vat scenario, what your brain experiences is a virtual reality indistinguishable from the “real world”, while in fact you are a brain in a jar. In that scenario, is there thinking going on?
Uh huh. How do you know this? Have you read any of Dennett’s epistemology? Or any epistemology that Dennett’s work relies upon?
I tend to think that a bit of clarity here involves distinguishing between cognition as content and cognition as computation.
Content cognition is “ordinary language” cognition, cognition in the sense of “what are you thinking about?” — it’s the medium in which we think when we think to ourselves and aloud with others. (For most people it’s linguistic. I actually do think in language but I understand that some people — most? — do not.)
Cognition as computation is cognitive science cognition, cognition in the sense of information processing, reliably tracking covariations, etc.
This distinction allows us to pose some further questions:
1. is computation skull-bound or embodied? (here we would need to be careful to deal with the coupling-constitution fallacy — it’s one thing to say that computation is a neural process that needs a body to get going, and other to say that being embodied itself constitutes the computational process.)
2. is computation necessary and sufficient for content, or just necessary?
The “brain in the vat” scenario rests on the assumptions that computation is restricted to the brain and that computation (plus a sufficiently rich information source) is not only necessary for content but also sufficient.
I think that both assumptions are incompatible with our best cognitive science.
Firstly, we still don’t know how tightly integrated neurocomputational processes are with sensory transducers and motor effectors, so we’re not yet in a position to say that if the transducers and effectors were replaced with computer-generated data, the neurocomputational processes would work at all.
Secondly, we have some fairly compelling evidence that content is an emergent property that involves dynamic causal loops between brains, bodies, and the world.
I have started to realize that the question “how does content emerge from computation?” is at the heart of cognitive neuroscience. There’s a nice debate to be framed between Quine and Sellars with regard to whether naturalism requires content eliminativism (there just aren’t any such things as meanings) or content emergentism. Plausibly the projects of Rorty, Churchland, Dennett, Millikan, and others consist in an attempt to split this difference.
Interesting question. Taking Neil’s view, “Brains don’t think. People think, and use their brains in the process”: if brains are people, then Neil’s statement becomes “Brains don’t think. People (brains) think, and use their brains in the process”, which does not seem quite right.
So maybe the answer to your question is, it depends on what the brain is doing and where it is doing it whether it is a people.
I was hoping you might supply some clarityism.
Ha, fair enough! I’m so used to writing for academic audiences that I often forget to break it down sufficiently! Yeah, that was definitely more jargon-y than it needed to be!
If we presuppose technology to keep a brain in a jar, we might as well go all in and presuppose that the virtual stimulus is indistinguishable at all levels from the non-virtual normal.
No problem, challenging is good. I just did not want another post from Greg about my conflating terms.
When will you finally leave that starting point?
Yeah, I remember how we discussed Goethean colour physics. I note that between the Goethean and the Newtonian colour physics, it is the latter that has been the spectacularly successful one, with many applications in technology and science, whereas the former is gathering metaphorical dust. Why do you expect this will be different in our understanding of consciousness (or anything really)?
Is there thinking going on when we dream? Are we conscious when we dream? Have you ever questioned whether you are dreaming while you were dreaming?
petrushka,
Yes.
Yes. We’re conscious of the thoughts and sensations that make up the dream.
Yes, and I learned a simple and effective technique for determining whether you are dreaming. Whenever you’re reading an item (a page, an ad, a road sign, etc.), look away momentarily and then look at the item again. If the text changes, you are in a dream.
newton,
Right — this is a thought experiment, after all. And if the stimulus matches, the brain activity should also match.
To put it differently, the brain-in-vat has no way of determining that it is a brain-in-vat.
I can only judge by my rather poor dreaming ability, but I often realize it is a dream even as I am responding to the dream. That awareness seems like thinking; then again, maybe it is just dreaming.
You can experience emotions, so some part of you is paying attention.
The most unsettling is dreaming that you woke up from a dream; the resulting dream seems more real.
Regarding the question of whether brains compute, I answer in the affirmative and offer this example (which I used a couple of years ago):
The addition is an instance of computation, and it takes place entirely in the brain.
I’m a skeptic of that whole idea.
Or, more specifically, I am skeptical of the possibility of a virtual reality that is indistinguishable from actual reality.
No (in my opinion).
“Conscious” is too vague a term to be able to answer that. Perhaps disjunctivists would say “no”, but I’m not even sure of that.
Neil,
It need only be an in-principle possibility. This is a thought experiment, after all.
My answer to that quoted question would be “It doesn’t.”
Whether there are meanings is tricky. But I don’t doubt that there is meaning. There’s perhaps an issue on whether it can be individuated to specific meanings. But then I suppose I’m not much concerned about what naturalism is said to require.
I’ve read enough Dennett to know where he is coming from. Among other things he is a Darwinian reductionist who thinks that we “are approximately 100 trillion little cellular robots” and nothing else.
As he writes here
I’d be interested in any quotes you can give us from Dennett that contradict my view.
He thinks that if the problem of consciousness is to be solved then it will be solved by neuroscience.
He enjoys talking about how our consciousness is fooled by illusions. But then he goes on to explain what the illusion is. For example, most of us will know the illusion of what looks like a white Necker cube with black discs behind the corners. He then explains the reality of the image. In other words, he is consciously aware of the reality behind the illusion. So his consciousness is being fooled, but he is conscious of it being fooled at the same time.
What these illusions show me is that in order to approach reality we need to apply our thinking to our visual perceptions. Once we have found the appropriate concepts that belong to our perceptions then we become aware of the reality.
Acquiring knowledge is a unifying process.
I have left it and I have built my world picture from there. There are many here including yourself who have been arguing against some of the conclusions I have drawn from this starting position.
I know that Goethe’s colour theories have been applied by artists and dyers. And there is this from Physics Today:
“Exploratory Experimentation: Goethe, Land, and Color Theory”
Newton believed that colours are somehow “hidden” in white light. In what way do you think this has been applied to technology?
CharlieM,
Dennett is to be read as exploring the consequences of a hypothesis. The proof of the pudding is in the eating of it: what problems does it avoid and what puzzles does it solve? You are constantly trying to go back to some ultimate first principle. That’s not how Dennett does philosophy — and I think he’s right to avoid that whole briar patch of epistemology. (But for a work of philosophy that develops the epistemology that’s compatible with Dennett’s work, try Groundless Belief by Williams.)
What emerges quite nicely in Williams and Dennett is a consistently anti-foundationalist, holistic epistemology: what we aim for is not a bedrock of unquestionable first principles but inferential consistency across multiple lines of evidence and inquiry.
As Charles Peirce put it, “reasoning should not form a chain which is no stronger than its weakest link, but a cable whose fibers may be ever so slender, provided they are sufficiently numerous and intimately connected” (in “Some Consequences of Four Incapacities“).
Me too at first, but since brains living in jars have become so commonplace, I guess it was just a matter of time. I believe there is still a problem with getting the texture of peanut butter just right.
Fiber optic transmission of data.
That’s not very reassuring. Your conclusions never logically follow from any “sure starting point”, and are invariably fanciful fabrications (sorry). Looks to me like you use your insistence on epistemological bedrock mainly to dismiss alternative “reductionist” explanations .
I agree that exploratory and descriptive experiments have a place in research, but fail to see why you claim that as a success of “not prematurely assuming a separation between subject and object”. You will need to unpack this a little for me.
And how did you envisage this to be implemented for gaining an understanding of consciousness?
Newton* mentioned one application already. The example I was thinking of was spectrophotometers, which can measure light absorbance of a sample. Light diffraction is used to produce a monochromatic beam of light. There are many more applications.
ETA *The other one, who comments here at TSZ. LOL!
So if I understand you correctly, you responded to his argument before he made his argument. Is that right?
From the OP:
I’m still waiting…
You objected before he even spoke?
Mung,
If you’re going to troll, at least make it entertaining.
For those who care about philosophical take on things, if only to belittle it:
In the last couple of days, the SEP posted significantly revised articles on the Computational Theory of Mind and on Searle’s Chinese Room:
https://plato.stanford.edu/entries/computational-mind/
https://plato.stanford.edu/entries/chinese-room/
BruceS,
I cannot understand how the Chinese room argument, if you want to call it that, is even saying something. People ask questions in Chinese, a computer gives the appropriate answer, and the person who is the go-between is just sliding the answers back under the door. What in the heck is that supposed to teach us about anything, other than that the computer understood the Chinese characters, even if the person in the room didn’t? SO?
What are these so-called “rules” about manipulating the characters that Searle is talking about? If you did the same thing with English, and all you did was slip English questions under the door, and the guy in the room just gave them to the computer and asked the computer to come up with an appropriate response, the person on the other side of the door might draw one of several conclusions: the guy inside speaks English, the guy inside has a translating program, or the guy in the room is calling his friend, telling him what the letters look like, and asking how to respond.
Where is the complex intellectual mystery?
BruceS,
I would even suggest that such an argument is not even talking about language, but rather it is talking about math. It’s more like taking a calculator and pushing the buttons: seven plus five plus twelve equals twenty-four. Modern calculators even say the words as you hit the buttons. The person inputting the figures doesn’t have to understand math, the “computer” doesn’t have to understand math, BUT the person who made the program, THEY need to understand math.
So someone had to understand it; it is not just about following symbols. Now, when it comes to language, it’s just a more complex use of the symbols. Whoever made the computer program, they had better understand Chinese, and not only that, they had better understand history and culture. Or else the computer had better be able to see text FROM OTHER PEOPLE who understand Chinese and gather that information, so that it can answer. And the less data the computer has, the less likely it is that its answer will seem intelligent or even intelligible. More data, from more real people, and you are more likely to get accurate replies.
Not too mysterious really. It all comes down to SOMEONE understanding it.
If a calculator is programmed by someone who only understands addition, then if you try to get it to give calculations about division, your answers won’t work.
Yes, that is the point. Searle is claiming there is something more to language as used by human communities than what is captured by formal logic rules. A computer program which is not running is part of formal logic (the phrase “which is not running” is important).
Another way of putting it is that human language has meaning, ie semantics, and not just syntax. Formal logic/rules is just syntax.
Later versions of Searle emphasized that meaning requires intentionality (ie aboutness) and also that humans are conscious eg of the meaning or of using language meaningfully. As per the article, many take later Searle as emphasizing the human consciousness thrust of his argument.
You can read all the replies and Searle’s counters in the article. There have been previous threads in TSZ where these are thrashed out. I still like the robot reply. (FWIW, a similar idea is what I take KN to be including as part of his dynamic causal loops in his post above)
But I also think there is merit in Scott A’s questioning of the unjustified intuition underlying the no consciousness bit of Searle’s argument.
https://www.quora.com/Whats-your-take-on-John-Searles-Chinese-room-argument
What I think you are referring to is what I know of as the derived versus original intentionality separation that Searle also pushes. He agrees that the understanding/intentionality in the rules is derived from that of whoever built the rules. It is not only in the rules. Only conscious (and so necessarily biological) humans can have original intentionality, according to Searle.
That’s Scott’s point, I think: if there is no behavioural difference, how can Searle justify saying that the person who does not understand Chinese but just follows the rules does not in fact understand Chinese?
ETA: Searle might answer that we know more than the I/O behavior — we also know the mechanism producing it and the fact that the mechanism is not biological.
It’s Searle’s intuition, shared by many, that such a rule-following person would not understand.
If the human already spoke English, then using English questions and replies would not capture the point of the thought experiment.
BruceS,
What I meant was, by using Chinese as the language he is sort of implying that one can just follow rules about making characters and generate answers, which isn’t the case.
By the same token, if the person didn’t understand English, and you sent English questions under the door, you couldn’t just use some English rules and generate coherent answers. At the root of it all, SOMEONE would have to understand English, and then supply the data of answers.
The premise of the thought experiment is that one could.
It’s been a while since I looked at that paper, but I think the rules were something like “if you see this squiggle, produce this other squiggle as output”. As summarized in the SEP, Searle’s current version just says rules as in computer programs.
I think in the original version of the paper he restricted the questions to those pertaining to a specific, provided story, so as to avoid the objection that no finite set of rules could capture all of the possible questions and answers.
(Aside: But the human brain is finite and we can generate an infinite number of sentences, so maybe the right rules could generate answers to any question? However, no one thinks any such rules are if-then rules. But maybe they are pattern matching rules associated with sensory/motor mental representations of experienced human and world interactions?)
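The squiggle-to-squiggle rules described above can be sketched as a simple lookup table. This is a toy illustration only: the entries are invented, and no real rulebook this small could sustain a conversation, which is exactly the finite-rules worry just mentioned.

```python
# Toy sketch of Searle-style rule following: the operator maps input
# "squiggles" to output "squiggles" by lookup alone, with no access to
# what any of the symbols mean. The rulebook entries are made up.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "门是绿色的吗？": "是的。",     # "Is the door green?" -> "Yes."
}

def operator(squiggle: str) -> str:
    # Purely syntactic: match the shape of the input, emit the paired output.
    # Unrecognized input falls through to a canned "Please say that again."
    return RULEBOOK.get(squiggle, "请再说一遍。")

print(operator("你好吗？"))  # 我很好，谢谢。
```

The operator function never consults a meaning, only a mapping — which is the sense in which the manipulation is “just syntax”.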
BruceS,
Well, I am trying to imagine what a “rule” in a language would be. If one said in English, “The color of the door is green, isn’t it?”, what would the “rule” be which answers that?
I don’t think there is a rule. I think a computer could just go through a series of possible answers and choose one, but is that the same thing as following a rule?
Like when you talk to a navigation system, or to Siri: if it is given a question that it doesn’t know the answer to or has never been asked, it will just say “I don’t understand”, or “who is that”, or “what is that”, or “can you ask again”… Like if you said, “Siri, what’s up?” If no one had programmed in what it is supposed to say, based on that set of words, it would just say “Can you repeat that?” over and over. Until one day a programmer made it say “Can I help you?”, or whatever.
Are those rules, or are those just commands that someone programmed in to answer?
Yes, I think computers follow a rule, and I also think that is a necessary part of what physical computation is. But I agree that computers following a rule and humans following a rule may not be the same thing (may).
What it means for humans to follow a rule is one of those things philosophers argue about:
https://plato.stanford.edu/entries/wittgenstein/#RuleFollPrivLang
For computers, rule-following is grounded in the end in the physical operation of hardware, which is not in itself following rules* (although science may describe it that way).
I know that people built that hardware and the micro-instruction set underlying CPU operations. But I am referring to the quantum physics of modern computer hardware.*
Some of your post seems to be the topics of making a choice and perhaps even (shudder) free will. I’m not interested in going there; lots of stuff already on that on TSZ.
————————–
* Now some would say that quantum-based hardware is following rules too because the universe is nothing more than a quantum computer implementing a program that produces the universe itself. That’s a topic for another thread.
https://www.amazon.com/Programming-Universe-Quantum-Computer-Scientist/dp/1400033861.
I did not answer that because I don’t know. Here is my guess; I did not bother googling “how does Siri work”, so feel free to do so and then correct me.
AI language understanding generally involves deep learning, which is implemented on computers, so it is rule following in the end. But the rules are learned by the computer being exposed to language usage as training data (captured from internet, as you say).
The rules are then encoded as weights in a hierarchical network of artificial neurons, not as a traditional programming language as used by people.
I think that part of the understanding is to learn the answers to the question from the provided training data.
Further, I suspect that when Siri says “I don’t know”, or when it answers with a guess, it is following something similar to a traditional programming-language instruction which takes effect when the deep learning piece somehow detects that it cannot reliably pattern-match the language in the question.
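That suspected fallback behavior can be sketched roughly as follows. The patterns, the scoring method, and the threshold are all invented for illustration — real assistants use learned models rather than string similarity — but the shape of the logic (best match wins unless confidence is too low) is the point.

```python
# Rough sketch of confidence-thresholded pattern matching with a
# hand-coded fallback. KNOWN, THRESHOLD, and the similarity scorer
# are all made up for illustration.
from difflib import SequenceMatcher

KNOWN = {
    "what's the weather": "Here is the forecast...",
    "set a timer": "For how long?",
}
THRESHOLD = 0.6

def respond(query: str) -> str:
    best_reply, best_score = None, 0.0
    for pattern, reply in KNOWN.items():
        # Score how closely the query matches this known pattern (0.0-1.0).
        score = SequenceMatcher(None, query.lower(), pattern).ratio()
        if score > best_score:
            best_reply, best_score = reply, score
    # Low-confidence match: fall back to a canned "didn't understand" reply.
    return best_reply if best_score >= THRESHOLD else "Sorry, I didn't get that."
```

A query matching a known pattern returns the stored reply; a query unlike anything in the table trips the threshold and gets the fallback, which mirrors the “Can you repeat that?” behavior described above.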
It is mainly making an intuitive argument that computation is purely syntactic and does not in any way depend on semantics. Searle believes that this is devastating to AI.
Most mathematicians and computer scientists would completely agree that computation is purely syntactic. They do not agree that this is devastating to AI.
The more I play around with Searle’s Chinese room argument, the more it looks really stupid.
Searle is, after all, not a dualist — he’s a naturalist. He just thinks that brains have original intentionality. (I’m not making this up — he says exactly this.) It’s the task for neuroscience (he says) to explain how brains have original intentionality. And because they have original intentionality, everything that is said or done with them — all of our language and actions — therefore has derived intentionality. Brains are genuine semantic engines!
This is where his debate with Dennett becomes relevant, since Dennett thinks that nothing is a genuine semantic engine. For Dennett, rejecting Cartesianism about the mind means accepting that there aren’t any real semantic engines, just syntactic engines that are usefully described as having semantic properties.
But if you look deeper into Searle’s argument for why computers can’t be semantic engines while brains can be, one comes up empty. And then you realize: Searle never actually says that computers can’t be semantic engines. He says that programs cannot be. And that’s because a program is just a list of instructions.
The upshot of the whole mess is this: the reason why programs cannot be semantic engines, and thereby have original intentionality, has nothing at all to do with physics or biology. It relies solely on the metaphysical truism that abstract objects have no causal powers. And that’s what a Turing machine, strictly defined, is: a logical machine, or an abstract object. (When Turing invented them, he invented the concept in order to solve a problem in pure mathematics!)
In any event, no serious AI researcher — or AI critic — takes Searle’s argument seriously these days. Like Plantinga’s EAAN, it’s fun to play with but really misses the point of the whole debate, more clever than insightful.
Yes, that sums it up nicely.
Even if Searle’s intuition is right — that AI cannot have intentionality — he has failed to prove it. He has no more than his own assertion.
KN,
Computers and programs, when physically instantiated, definitely have causal powers. We wouldn’t pay money for them otherwise.
The thought experiment involves an instance of the abstract “person” category running an instance of the abstract “program” category. The instances are concrete, yet according to Searle, original intentionality is still absent.
Thus the abstract vs. concrete question can’t be the relevant one.
My reading of Searle is that the relevant question is whether you have semantics at the lowest level of the system. A computer manipulates symbols without regard to their meaning; a brain is semantic at its core (according to Searle). Thus the latter possesses original intentionality while the former does not.
I’m with Dennett on this one. The brain is a syntactic machine, just like the computer and program. The intentionality we ascribe to it is really “as if” intentionality, not original intentionality.
Yes, I’m slowly coming around to a Dennettian or Dennettian/Churchlandian position on this stuff . . . I like to think of Dennett’s point as rejecting the very distinction between original and derived intentionality. (Though the intentionality we ascribe is ascribed to the person, not to the brain.)
BruceS,
Well, I am still just not sure that there is “rules following” going on, as much as there is data matching. Either way, I don’t make much of Searle’s argument, but even though I think Scott A’s assessment of the argument is valid, I don’t go to this point:
I don’t get this part. Neither Searle’s argument nor the students have come close to arguing ANYTHING about consciousness. The only argument being made is that if a database is big enough (so far no one has made that database), one could approximate the reply a conscious person would make in most situations.
Not an important realization in my opinion.
I say, ask a computer to tell a joke that has never been told. So far, they can’t.
In this case, not necessarily. If you ask a computer who Justin Bieber is, it doesn’t manipulate words according to their meaning; it is matching words. That is a different idea, even if it looks the same to the end receiver. In some cases it may be using the meaning of a word to match it to other replies, but I still am not so sure about that.
But you still struggle with explaining a decision, so I don’t buy your assessment there. If one condition can equal two outcomes, then your reasoning doesn’t apply. Otherwise, two people asking Siri the exact same question, under the exact same conditions, might get two different answers.
I am pretty sure that doesn’t happen. But it is a simple experiment. Just have two people stand next to Siri and ask the same question, see if you can get it to give different answers.
phoodoo,
I don’t struggle with explaining decisions and choices, though it seems you struggle to understand me.
Think of a self-driving car in a particular condition and location. Now consider two cases. In case #1 there is a 10-minute traffic jam along the planned route. In case #2 there is no traffic jam. Is it really surprising to you that in case #1, the car can decide to follow a different route? To choose the fastest one? Does it really surprise you if in case #1 it keeps the current route?
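The point can be put in a few lines of code. The route names and delay figures below are made up; what matters is only that a deterministic decision procedure yields the same choice for the same inputs, and can yield a different choice when the inputs (such as a traffic jam) differ.

```python
# Minimal sketch: a decision as a pure function of its inputs.
# Identical conditions give identical choices; changed conditions
# (here, a traffic jam on the planned route) can change the choice.
def choose_route(delays_minutes: dict) -> str:
    # Deterministically pick the route with the smallest expected delay.
    return min(delays_minutes, key=delays_minutes.get)

case1 = {"planned": 10, "alternate": 4}   # 10-minute jam on planned route
case2 = {"planned": 0, "alternate": 4}    # no jam

print(choose_route(case1))  # alternate
print(choose_route(case2))  # planned
```

Running the same case twice always returns the same route — “one condition, one outcome” — while the two cases differ because the conditions differ.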
It could easily happen. Suppose Siri were engineered to customize its responses depending on the user. Then Bob and Brenda could ask the same question — such as “Siri, where is the restroom?” — and get different answers.
keiths,
Why are you giving me examples of DIFFERENT situations giving different outcomes as examples of the SAME situation giving different outcomes?
You are still struggling with this concept.
phoodoo,
Because you’ve been sloppy about specifying what you mean when you say “one condition, two outcomes.” Condition of what, precisely? The car? The universe?
If the car is in the same condition but the universe is not, then it’s easy to see how the outcome can change: one condition, two outcomes. Agreed?
Oh my heavens keiths, one condition means one condition. Yes, EVERYTHING the same. Just like the decision I proposed, the one you think you can make given the exact same set of criteria. Like: do you want chocolate or vanilla? We can all agree that on some days you might like chocolate more (maybe you have been eating vanilla every day for a week straight) and on other days vanilla. But can you, under only ONE condition, make two decisions? Of course not, if you believe your computer analogy.
So yes, we need ALL the conditions that the computer takes into account to be the same. Same sound of the voice (if that is what the computer is designed to account for), same time (if we program it for time), same weather (again, if its factors are programmed for weather), on and on. Same means same, keiths.
Now, if you believe that your brain is taking into account world events in Syria when it chooses chocolate, then indeed, the world events in Syria must be the same condition which cause you to choose chocolate instead of vanilla. If you believe the amount of isotopes on Pluto affects your decision, then yes, that is part of the condition.
Why is “same means same” a hard concept? If you ask your computer which is the fastest way home, and on one day a bridge is closed and on another day the bridge is open, we don’t expect to get the same result, if the bridge is involved. Or if it’s snowing and there is a traffic jam. Or if it’s 4 a.m. instead of 4 p.m.
Same means same, keiths. Whatever factors the “computer” is evaluating must be the same.
phoodoo,
One condition of the car doesn’t mean one condition of the universe, phoodoo. You need to learn to think and write more precisely.
If everything is the same, then the outcome of the decision will be the same (ignoring possible quantum indeterminism). That includes when humans are involved.
We’ve been over this again and again, but apparently you need more repetitions.