677 thoughts on “Consciousness Cannot Have Evolved”

  1. phoodoo: I think you will use any excuse you can find to put a post in guano if I give you an answer which shows how wrong you are, so I can’t answer.

    Be brave, what is one more post in guano?

  2. Corneel:

    CharlieM: It is not a matter of rejecting it. It’s a matter of being careful that I am not prematurely assuming I know where the distinction lies and then proceeding from that position.

    Yes, I get all that. But my question was: what do we gain by it? What profound insights do we receive that we would have otherwise missed out on?

    If this kind of loitering is typical of idealists, I don’t blame those blokes that just examine brains to learn about consciousness.

    We gain a sure starting point in our quest to understand reality.

    An example of a method which makes careful observations and lets them speak for themselves can be found in Goethe’s “Theory of Colours”.

    An article quoted from below gives a good account of the way Goethe dealt with colour.

    Between Light and Eye: Goethe’s Science of Colour and the Polar Phenomenology of Nature by Alex Kentsis

    He quotes Goethe:

    If I look at the created object, inquire into its creation, and follow this process back as far as I can, I will find a series of steps. Since these are not actually seen together before me, I must visualize them in my memory so that they form a certain ideal whole. At first I will tend to think in terms of steps, but nature leaves no gaps, and thus, in the end, I will have to see this progression of uninterrupted activity as a whole. I can do so by dissolving the particular without destroying the impression itself.

    He writes that in Goethe’s phenomenology, “the higher phenomenon did not appear to the senses. Instead, it was discovered within the sensory.”

    He did not imagine some unobtainable hidden reality lying behind the senses. He used his senses combined with memory and thinking to bring reality within his grasp.

    Kentsis continues:

    In contrast to Newton’s, Goethe’s natural philosophy was not interested in the decomposition of phenomena into their causal processes, either mechanical or genealogical. Rather, Goethe sought ‘conditions under which phenomena appear; their consistent succession, their eternal return under thousands of circumstances [and] their uniformity and mutability’. This ‘mistrust of abstraction’ was an expression of Goethe’s reverence of the natural object. Providing direction, this guide warned against the idealization of Nature by the natural philosopher who ‘should be careful not to transform perceptions into concepts, concepts into words, and then treat these words as if they were objects’.

    And something I think Kantian Naturalist would agree with in part at least:

    In the Theory of Colours, he directly rejected the idea that it was possible to have perceptions without theoretical constructions:

    Every act of seeing leads to consideration, consideration to reflection, reflection to combination, and thus it may be said that in every attentive look on nature we already theorise. But in order to guard against the possible abuse of this abstract view, in order that the practical deductions we look to should be really useful, we should theorise without forgetting that we are so doing, we should theorise with mental self-possession, and, to use a bold word, with irony.

    Philosophers like Daniel Dennett do not begin by observing and letting these observations speak to them, they begin by assuming that the prevailing physicalist account of evolution is true and then theorising from there.

    Goethe trusted his senses, not to give him a true account of reality but to enable him to approach reality. Rather than dismissing sense phenomena as false, he used them to reach higher phenomena through experience.

  3. Entropy: Are you suggesting that brains-in-vats are not people too?

    Neil Rickert: Yes, pretty much. We actually think about a world. Thinking about a chemical bath doesn’t seem very stimulating.

    In the classic brain in a vat, what your brain experiences is a virtual reality indistinguishable from the “real world”, despite the reality being that you are a brain in a jar. In that scenario, is there thinking going on?

  4. CharlieM: Philosophers like Daniel Dennett do not begin by observing and letting these observations speak to them, they begin by assuming that the prevailing physicalist account of evolution is true and then theorising from there.

    Uh huh. How do you know this? Have you read any of Dennett’s epistemology? Or any epistemology that Dennett’s work relies upon?

    Neil Rickert: Yes, I assumed that you disagreed. Most computationalists (“cognition is computation”) are likely to disagree with me about that (about brains in vats).

    I tend to think that a bit of clarity here involves distinguishing between cognition as content and cognition as computation.

    Content cognition is “ordinary language” cognition, cognition in the sense of “what are you thinking about?” — it’s the medium in which we think when we think to ourselves and aloud with others. (For most people it’s linguistic. I actually do think in language but I understand that some people — most? — do not.)

    Cognition as computation is cognitive science cognition, cognition in the sense of information processing, reliably tracking covariations, etc.

    This distinction allows us to pose some further questions:

    1. is computation skull-bound or embodied? (here we would need to be careful to deal with the coupling-constitution fallacy — it’s one thing to say that computation is a neural process that needs a body to get going, and another to say that being embodied itself constitutes the computational process.)

    2. is computation necessary and sufficient for content, or just necessary?

    The “brain in the vat” scenario rests on the assumptions that computation is restricted to the brain and that computation (plus a sufficiently rich information source) is not only necessary for content but also sufficient.

    I think that both assumptions are incompatible with our best cognitive science.

    Firstly, we still don’t know how tightly integrated neurocomputational processes are with sensory transducers and motor effectors, so we’re not yet in a position to say that if the transducers and effectors were replaced with computer-generated data, the neurocomputational processes would work at all.

    Secondly, we have some fairly compelling evidence that content is an emergent property that involves dynamic causal loops between brains, bodies, and the world.

    I have started to realize that the question “how does content emerge from computation?” is at the heart of cognitive neuroscience. There’s a nice debate to be framed between Quine and Sellars with regard to whether naturalism requires content eliminativism (there just aren’t any such things as meanings) or content emergentism. Plausibly the projects of Rorty, Churchland, Dennett, Millikan, and others consist in an attempt to split this difference.

  6. Entropy: Are you suggesting that brains-in-vats are not people too?

    Interesting question. Taking Neil’s view, “Brains don’t think. People think, and use their brains in the process”: if brains are people, then Neil’s statement becomes “Brains don’t think. People (brains) think, and use their brains in the process.” That did not seem quite right.

    So maybe the answer to your question is: whether it is a person depends on what the brain is doing and where it is doing it.

  7. Kantian Naturalist: I tend to think that a bit of clarity here involves distinguishing between cognition as content and cognition as computation.

    Content cognition is “ordinary language” cognition, cognition in the sense of “what are you thinking about?” — it’s the medium in which we think when we think to ourselves and aloud with others. (For most people it’s linguistic. I actually do think in language but I understand that some people — most? — do not.)

    Cognition as computation is cognitive science cognition, cognition in the sense of information processing, reliably tracking covariations, etc.

    This distinction allows us to pose some further questions:

    1. is computation skull-bound or embodied? (here we would need to be careful to deal with the coupling-constitution fallacy — it’s one thing to say that computation is a neural process that needs a body to get going, and another to say that being embodied itself constitutes the computational process.)

    2. is computation necessary and sufficient for content, or just necessary?

    The “brain in the vat” scenario rests on the assumptions that computation is restricted to the brain and that computation (plus a sufficiently rich information source) is not only necessary for content but also sufficient.

    I think that both assumptions are incompatible with our best cognitive science.

    Firstly, we still don’t know how tightly integrated neurocomputational processes are with sensory transducers and motor effectors, so we’re not yet in a position to say that if the transducers and effectors were replaced with computer-generated data, the neurocomputational processes would work at all.

    Secondly, we have some fairly compelling evidence that content is an emergent property that involves dynamic causal loops between brains, bodies, and the world.

    I have started to realize that the question “how does content emerge from computation?” is at the heart of cognitive neuroscience. There’s a nice debate to be framed between Quine and Sellars with regard to whether naturalism requires content eliminativism (there just aren’t any such things as meanings) or content emergentism. Plausibly the projects of Rorty, Churchland, Dennett, Millikan, and others consist in an attempt to split this difference.

    I was hoping you might supply some clarityism.

  8. newton: I was hoping you might supply some clarityism.

    Ha, fair enough! I’m so used to writing for academic audiences that I often forget to break it down sufficiently! Yeah, that was definitely more jargon-y than it needed to be!

    Kantian Naturalist: Firstly, we still don’t know how tightly integrated neurocomputational processes are with sensory transducers and motor effectors, so we’re not yet in a position to say that if the transducers and effectors were replaced with computer-generated data, the neurocomputational processes would work at all.

    If we presuppose technology to keep a brain in a jar, might as well go all in and presuppose the virtual stimulus is indistinguishable at all levels from the non-virtual normal.

  10. Kantian Naturalist: Ha, fair enough! I’m so used to writing for academic audiences that I often forget to break it down sufficiently! Yeah, that was definitely more jargon-y than it needed to be!

    No problem, challenging is good. I just did not want another post from Greg about my conflating terms.

  11. CharlieM: We gain a sure starting point in our quest to understand reality.

    When will you finally leave that starting point?

    CharlieM: An example of a method which makes careful observations and lets them speak for themselves can be found in Goethe’s “Theory of Colours”.

    Yeah, I remember how we discussed Goethean colour physics. I note that between the Goethean and the Newtonian colour physics, it is the latter that has been the spectacularly successful one, with many applications in technology and science, whereas the former is gathering metaphorical dust. Why do you expect this will be different in our understanding of consciousness (or anything really)?

    newton: In the classic brain in a vat, what your brain experiences is a virtual reality indistinguishable from the “real world”, despite the reality being that you are a brain in a jar. In that scenario, is there thinking going on?

    Is there thinking going on when we dream? Are we conscious when we dream? Have you ever questioned whether you are dreaming while you were dreaming?

  13. petrushka,

    Is there thinking going on when we dream?

    Yes.

    Are we conscious when we dream?

    Yes. We’re conscious of the thoughts and sensations that make up the dream.

    Have you ever questioned whether you are dreaming while you were dreaming?

    Yes, and I learned a simple and effective technique for determining whether you are dreaming. Whenever you’re reading an item (a page, an ad, a road sign, etc.), look away momentarily and then look at the item again. If the text changes, you are in a dream.

  14. newton,

    If we presuppose technology to keep a brain in a jar, might as well go all in and presuppose the virtual stimulus is indistinguishable at all levels from the non-virtual normal.

    Right — this is a thought experiment, after all. And if the stimulus matches, the brain activity should also match.

    To put it differently, the brain-in-vat has no way of determining that it is a brain-in-vat.

  15. petrushka: Is there thinking going on when we dream?

    I can only judge by my rather poor dreaming ability, but I often realize it is a dream even as I am responding to the dream. That awareness seems like thinking, or then again maybe just dreaming.

    Are we conscious when we dream?

    You can experience emotions, so some part of you is paying attention.

    Have you ever questioned whether you are dreaming while you were dreaming?

    The most unsettling is dreaming that you woke up from a dream; the resulting dream seems more real.

  16. Regarding the question of whether brains compute, I answer in the affirmative and offer this example (which I used a couple of years ago):

    Someone hands you a list of numbers and asks you to add them up in your head and write down the answer. You do.

    There was information in the list. It entered your brain via your visual system. It was processed by your brain, producing the sum of the numbers. Your brain translated that sum into a series of motor commands, causing you to write down the answer underneath the list.

    The addition is an instance of computation, and it takes place entirely in the brain.
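    The input-process-output flow described above can be caricatured as a toy pipeline; the function names and the string input are my own invention, purely for illustration of what "computation" means here:

    ```python
    def read_list(seen_text):
        """Stand-in for the visual system: extract numbers from the seen list."""
        return [int(token) for token in seen_text.split()]

    def add_up(numbers):
        """The computation itself: produce the sum of the numbers."""
        total = 0
        for n in numbers:
            total += n
        return total

    def write_answer(total):
        """Stand-in for the motor commands: emit the written answer."""
        return str(total)

    # Information enters, is processed, and a motor output is produced.
    print(write_answer(add_up(read_list("7 5 12"))))  # prints 24
    ```

    Nothing in the middle step needs to "see" the page or "hold" the pen; the sum is computed over the extracted information alone, which is the sense in which the addition takes place entirely in the brain.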

    newton: In the classic brain in a vat, what your brain experiences is a virtual reality indistinguishable from the “real world”, despite the reality being that you are a brain in a jar. In that scenario, is there thinking going on?

    I’m a skeptic of that whole idea.

    Or, more specifically, I am skeptical of the possibility of a virtual reality that is indistinguishable from actual reality.

  18. petrushka: Is there thinking going on when we dream?

    No (in my opinion).

    Are we conscious when we dream?

    “Conscious” is too vague a term to be able to answer that. Perhaps disjunctivists would say “no”, but I’m not even sure of that.

  19. Neil,

    Or, more specifically, I am skeptical of the possibility of a virtual reality that is indistinguishable from actual reality.

    It need only be an in-principle possibility. This is a thought experiment, after all.

  20. Kantian Naturalist: I have started to realize that the question “how does content emerge from computation?” is at the heart of cognitive neuroscience.

    My answer to that quoted question would be “It doesn’t.”

    There’s a nice debate to be framed between Quine and Sellars with regard to whether naturalism requires content eliminativism (there just aren’t any such things as meanings) or content emergentism.

    Whether there are meanings is tricky. But I don’t doubt that there is meaning. There’s perhaps an issue on whether it can be individuated to specific meanings. But then I suppose I’m not much concerned about what naturalism is said to require.

  21. Kantian Naturalist:

    CharlieM: Philosophers like Daniel Dennett do not begin by observing and letting these observations speak to them, they begin by assuming that the prevailing physicalist account of evolution is true and then theorising from there.

    Uh huh. How do you know this? Have you read any of Dennett’s epistemology? Or any epistemology that Dennett’s work relies upon?

    I’ve read enough Dennett to know where he is coming from. Among other things he is a Darwinian reductionist who thinks that we “are approximately 100 trillion little cellular robots” and nothing else.


    As he writes here:

    My aim in this thesis is to solve and dissolve these peripheral epistemological problems with the aid of neurological hypotheses about the “workings of the mind”. I do not presuppose that there are no such things as minds or mental events, but argue at each step that the situations or events customarily considered to involve psychic or mental events (or in some other way not straightforwardly physical events) are in fact entirely physical in just the same way as digestion or walking is physical. The onus is the “reduction” of all mentalistic descriptions to intelligible, self-sufficient physical descriptions. And the byproduct of this programme of strict physicalism will be the dissolution of several persistent epistemological pseudoproblems.

    I’d be interested in any quotes you can give us from Dennett that contradict my view.

    He thinks that if the problem of consciousness is to be solved then it will be solved by neuroscience.

    He enjoys talking about how our consciousness is fooled by illusions. But then he goes on to explain what the illusion is. For example, most of us will know the illusion of what looks like a white Necker cube with black discs behind the corners. He then explains the reality of the image. In other words, he is consciously aware of the reality behind the illusion. So his consciousness is being fooled, but he is conscious of it being fooled at the same time.

    What these illusions show me is that in order to approach reality we need to apply our thinking to our visual perceptions. Once we have found the appropriate concepts that belong to our perceptions then we become aware of the reality.

    Acquiring knowledge is a unifying process.

  22. Corneel:

    CharlieM: We gain a sure starting point in our quest to understand reality.

    When will you finally leave that starting point?

    I have left it and I have built my world picture from there. There are many here including yourself who have been arguing against some of the conclusions I have drawn from this starting position.

    CharlieM: An example of a method which makes careful observations and lets them speak for themselves can be found in Goethe’s “Theory of Colours”.

    Yeah, I remember how we discussed Goethean colour physics. I note that between the Goethean and the Newtonian colour physics, it is the latter that has been the spectacularly successful one, with many applications in technology and science, whereas the former is gathering metaphorical dust. Why do you expect this will be different in our understanding of consciousness (or anything really)?

    I know that Goethe’s colour theories have been applied by artists and dyers. And there is this from Physics Today:

    “Exploratory Experimentation: Goethe, Land, and Color Theory”

    Newton believed that colours are somehow “hidden” in white light. In what way do you think this has been applied to technology?

  23. CharlieM,

    Dennett is to be read as exploring the consequences of a hypothesis. The proof of the pudding is in the eating of it: what problems does it avoid and what puzzles does it solve? You are constantly trying to go back to some ultimate first principle. That’s not how Dennett does philosophy — and I think he’s right to avoid that whole briar patch of epistemology. (But for a work of philosophy that develops the epistemology that’s compatible with Dennett’s work, try Groundless Belief by Williams.)

    What emerges quite nicely in Williams and Dennett is a consistently anti-foundationalist, holistic epistemology: what we aim for is not a bedrock of unquestionable first principles but inferential consistency across multiple lines of evidence and inquiry.

    As Charles Peirce put it, “reasoning should not form a chain which is no stronger than its weakest link, but a cable whose fibers may be ever so slender, provided they are sufficiently numerous and intimately connected” (in “Some Consequences of Four Incapacities“).

  24. Neil Rickert: Or, more specifically, I am skeptical of the possibility of a virtual reality that is indistinguishable from actual reality.

    Me too at first, but since brains living in jars have become so commonplace, I guess it was just a matter of time. I believe there is still a problem with getting the texture of peanut butter just right.

  25. CharlieM: Newton believed that colours are somehow “hidden” in white light. In what way do you think this has been applied to technology?

    Fiber optic transmission of data.

  26. CharlieM: I have left it and I have built my world picture from there. There are many here including yourself who have been arguing against some of the conclusions I have drawn from this starting position.

    That’s not very reassuring. Your conclusions never logically follow from any “sure starting point”, and are invariably fanciful fabrications (sorry). Looks to me like you use your insistence on epistemological bedrock mainly to dismiss alternative “reductionist” explanations.

    CharlieM: there is this from Physics Today:

    “Exploratory Experimentation: Goethe, Land, and Color Theory”

    I agree that exploratory and descriptive experiments have a place in research, but fail to see why you claim that as a success of “not prematurely assuming a separation between subject and object”. You will need to unpack this a little for me.

    And how did you envisage this to be implemented for gaining an understanding of consciousness?

    CharlieM: Newton believed that colours are somehow “hidden” in white light. In what way do you think this has been applied to technology?

    Newton* mentioned one application already. The example I was thinking of was spectrophotometers, which can measure light absorbance of a sample. Light diffraction is used to produce a monochromatic beam of light. There are many more applications.

    ETA *The other one, who comments here at TSZ. LOL!

  27. keiths: It’s already been done.

    So if I understand you correctly, you responded to his argument before he made his argument. Is that right?

    From the OP:

    I disagree, but I’ll leave my objections for the comment thread.

    I’m still waiting…

    You objected before he even spoke?

  28. BruceS,

    I can not understand how the Chinese room argument, if you want to call it that, is even saying something. People ask questions in Chinese, a computer gives the appropriate answer, and the person who is the go between is just sliding the answers back under the door. What in the heck is that supposed to teach us about anything, other than the computer understood the Chinese characters, even if the person in the room didn’t. SO?

    What are these so called “rules” about manipulating the characters that Searle is talking about? If you did the same thing with English, and all you did was slip English questions under the door, and the guy in the room just gives it to the computer and asks the computer to come up with an appropriate response, the person on the other side of the door might make one of several conclusions-The guy inside speaks English, the guy inside has a translating program, or the guy in the room is calling his friend and telling him what the letters look like, and asking how to respond.

    Where is the complex intellectual mystery?

  29. BruceS,

    I would even suggest that such an argument is not even talking about language, but rather it is talking about math. It’s more like taking a calculator and pushing the buttons: seven plus five plus twelve equals twenty-four. Modern calculators even say the words as you hit the buttons. The person inputting the figures doesn’t have to understand math, the “computer” doesn’t have to understand math, BUT the person who made the program, THEY need to understand math.

    So someone had to understand it, it is not just about following symbols. Now, when it comes to language, its just more complex use of the symbols. Whoever made the computer program, they better understand Chinese, and not only that they better understand history, and culture. Or else, the computer had better be able to see text FROM OTHER PEOPLE that understand Chinese and gather that information, so that they can answer. And the less data the computer has, the less likely it is that their answer will seem intelligent or even intelligible. More data, from more real people, and you are more likely to get accurate replies.

    Not too mysterious really. It all comes down to SOMEONE understanding it.

    If a calculator is programmed by someone who only understands addition, then if you try to get it to give calculations about division, your answers won’t work.

    phoodoo: I would even suggest that such an argument is not even talking about language, but rather it is talking about math. It’s more like taking a calculator and pushing the buttons,

    Yes, that is the point. Searle is claiming there is something more to language as used by human communities than what is captured by formal logic rules. A computer program which is not running is a part of formal logic (the phrase in italics is important.)

    Another way of putting it is that human language has meaning, ie semantics, and not just syntax. Formal logic/rules is just syntax.

    Later versions of Searle emphasized that meaning requires intentionality (ie aboutness) and also that humans are conscious eg of the meaning or of using language meaningfully. As per the article, many take later Searle as emphasizing the human consciousness thrust of his argument.

    You can read all the replies and Searle’s counters in the article. There have been previous threads in TSZ where these are thrashed out. I still like the robot reply. (FWIW, a similar idea is what I take KN to be including as part of his dynamic causal loops in his post above)

    But I also think there is merit in Scott A’s questioning of the unjustified intuition underlying the no consciousness bit of Searle’s argument.
    https://www.quora.com/Whats-your-take-on-John-Searles-Chinese-room-argument

    Not too mysterious really. It all comes down to SOMEONE understanding it.

    What I think you are referring to is what I know of as the derived versus original intentionality separation that Searle also pushes. He agrees that the understanding/intentionality in the rules is derived from that of whoever built the rules. It is not only in the rules. Only conscious (and so necessarily biological) humans can have original intentionality, according to Searle.

  31. phoodoo: The guy inside speaks English, the guy inside has a translating program, or the guy in the room is calling his friend and telling him what the letters look like, and asking how to respond.

    Where is the complex intellectual mystery?

    That’s Scott’s point, I think: if there is no behavioural difference, how can Searle justify saying that the person who does not understand Chinese but just follows the rules does not in fact understand Chinese?

    ETA: Searle might answer that we know more than the I/O behavior — we also know the mechanism producing it and the fact that the mechanism is not biological.

    It’s Searle’s intuition, shared by many, that such a rule-following person would not understand.

    If the human already spoke English, then using English questions and replies would not capture the point of the thought experiment.

  32. BruceS,

    What I meant was, by using Chinese as the language he is sort of implying that one can just follow rules about making characters and generate answers-which isn’t the case.

    By the same token, if the person didn’t understand English, and you sent English questions under the door, you couldn’t just use some English rules , and generate coherent answers. At the root of it all, SOMEONE would have to understand English, and then supply the data of answers.

    phoodoo: You couldn’t just use some English rules, and generate coherent answers.

    The premise of the thought experiment is that you could.

    It’s been a while since I looked at that paper, but I think the rules were something like “if you see this squiggle, produce this other squiggle as output”. As summarized in the SEP, Searle’s current version just says rules as in computer programs.

    I think in the original version of the paper he restricted the questions to those pertaining to a specific, provided story, so as to avoid the objection that no finite set of rules could capture all of the possible questions and answers.

    (Aside: But the human brain is finite and we can generate an infinite number of sentences, so maybe the right rules could generate answers to any question? However, no one thinks any such rules are if-then rules. But maybe they are pattern matching rules associated with sensory/motor mental representations of experienced human and world interactions?)
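    A minimal sketch of the “squiggle in, squiggle out” idea, assuming the rules really are just a lookup table (the Chinese phrases and the fallback reply are my own invention, not Searle’s actual rulebook):

    ```python
    # A hypothetical rulebook: input "squiggles" mapped to output "squiggles".
    # The program matches shapes; it attaches no meaning to them.
    RULEBOOK = {
        "你好吗?": "我很好, 谢谢。",       # "How are you?" -> "I'm fine, thanks."
        "门是绿色的吗?": "是的。",         # "Is the door green?" -> "Yes."
    }

    def chinese_room(question):
        """Purely syntactic rule-following: look up the squiggle, emit the
        paired squiggle, with a canned fallback for unmatched input."""
        return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗?"))
    ```

    The point of the sketch is that the lookup works identically whether or not anyone inside the room understands Chinese, which is exactly the intuition Searle is trading on, and exactly what the objection about finite rules attacks.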

  34. BruceS,

    Well, I am trying to imagine what a “rule” in a language would be. If one said in English, “The color of the door is green, isn’t it?”, what would the “rule” be which answers that?

    I don’t think there is a rule. I think a computer could just go through a series of possible answers and choose one, but is that the same thing as following a rule?

    Like when you talk to a navigation system, or to Siri: if it is given a question that it doesn’t know the answer to or has never been asked, it will just say “I don’t understand”, or “who is that”, or “what is that”, or “can you ask again”… Like if you said, “Siri, what’s up?” If no one had programmed in what it is supposed to say, based on that set of words, it would just say “Can you repeat that”, over and over. Until, one day, a programmer made it say “Can I help you”, or whatever.

    Are those rules, or are those just commands that someone programmed in to answer?

  35. phoodoo:
    BruceS,

    I don’t think there is a rule. I think a computer could just go through a series of possible answers and choose one, but is that the same thing as following a rule?

    Yes, I think computers follow a rule, and I also think that is a necessary part of what physical computation is. But I agree that computers following a rule and humans following a rule may not be the same thing (may).

    What it means for humans to follow a rule is one of those things philosophers argue about.
    https://plato.stanford.edu/entries/wittgenstein/#RuleFollPrivLang

    For computers, rule-following is grounded in the end in the physical operation of hardware; the hardware is not in itself following rules* (although scientists may describe it that way).

    I know that people built that hardware and the micro-instruction set underlying CPU operations. But I am referring to the quantum physics of modern computer hardware.*

    Some of your post seems to be the topics of making a choice and perhaps even (shudder) free will. I’m not interested in going there; lots of stuff already on that on TSZ.

    ————————–
    * Now some would say that quantum-based hardware is following rules too because the universe is nothing more than a quantum computer implementing a program that produces the universe itself. That’s a topic for another thread.

    https://www.amazon.com/Programming-Universe-Quantum-Computer-Scientist/dp/1400033861.

    phoodoo: Are those rules, or are those just commands that someone programmed in to answer?

    I did not answer that because I don’t know. Here is my guess; I did not bother googling “how does siri work” so feel free to do so and then correct me.

    AI language understanding generally involves deep learning, which is implemented on computers, so it is rule following in the end. But the rules are learned by the computer being exposed to language usage as training data (captured from the internet, as you say).

    The rules are then encoded as weights in a hierarchical network of artificial neurons, not in a traditional programming language of the kind used by people.

    I think that part of the understanding is learning the answers to questions from the provided training data.

    Further, I suspect that when Siri says “I don’t know” or answers with a guess, it is following something similar to a traditional programming-language instruction, which takes effect when the deep-learning piece somehow detects that it cannot reliably pattern-match the language in the question.
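    That fallback idea can be sketched in a few lines. This is a toy illustration only, not how Siri actually works: the questions, answers, similarity measure, and threshold are all invented for the example.

```python
# Toy sketch (NOT how Siri actually works): a pattern matcher that falls
# back to a hand-written "I don't know" rule when its best match is weak.
KNOWN_ANSWERS = {
    "who is justin bieber": "A Canadian pop singer.",
    "what is the capital of france": "Paris.",
}

def similarity(a, b):
    """Crude word-overlap score between two questions (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(question, threshold=0.5):
    best = max(KNOWN_ANSWERS, key=lambda q: similarity(question, q))
    if similarity(question, best) < threshold:
        # Traditional-style fallback instruction, triggered when the
        # matching piece cannot match the question reliably.
        return "I don't know."
    return KNOWN_ANSWERS[best]
```

    The point of the sketch is only the structure: learned (here, hard-coded) matching on top, with an explicit programmed rule underneath for the low-confidence case.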

  37. phoodoo: I can not understand how the Chinese room argument, if you want to call it that, is even saying something.

    It is mainly making an intuitive argument that computation is purely syntactic and does not in any way depend on semantics. Searle believes that this is devastating to AI.

    Most mathematicians and computer scientists would completely agree that computation is purely syntactic. They do not agree that this is devastating to AI.

  38. The more I play around with Searle’s Chinese room argument, the more it looks really stupid.

    Searle is, after all, not a dualist — he’s a naturalist. He just thinks that brains have original intentionality. (I’m not making this up — he says exactly this.) It’s the task for neuroscience (he says) to explain how brains have original intentionality. And because they have original intentionality, everything that is said or done with them — all of our language and actions — therefore has derived intentionality. Brains are genuine semantic engines!

    This is where his debate with Dennett becomes relevant, since Dennett thinks that nothing is a genuine semantic engine. For Dennett, rejecting Cartesianism about the mind means accepting that there aren’t any real semantic engines, just syntactic engines that are usefully described as having semantic properties.

    But if you look deeper into Searle’s argument for why computers can’t be semantic engines when brains can be, you come up empty. And then you realize: Searle never actually says that computers can’t be semantic engines. He says that programs cannot be. And that’s because a program is just a list of instructions.

    The upshot of the whole mess is this: the reason why programs cannot be semantic engines, and thereby have original intentionality, has nothing at all to do with physics or biology. It relies solely on the metaphysical truism that abstract objects have no causal powers. And that’s what a Turing machine, strictly defined, is: a logical machine, or an abstract object. (When Turing invented them, he did so in order to solve a problem in pure mathematics!)

    In any event, no serious AI researcher — or AI critic — takes Searle’s argument seriously these days. Like Plantinga’s EAAN, it’s fun to play with but really misses the point of the whole debate, more clever than insightful.

  39. Kantian Naturalist: The upshot of the whole mess is this: the reason why programs cannot be semantic engines, and thereby have original intentionality, has nothing at all to do with physics or biology. It relies solely on the metaphysical truism that abstract objects have no causal powers.

    Yes, that sums it up nicely.

    Even if Searle’s intuition is right — that AI cannot have intentionality — he has failed to prove it. He has no more than his own assertion.

  40. KN,

    The upshot of the whole mess is this: the reason why programs cannot be semantic engines, and thereby have original intentionality, has nothing at all to do with physics or biology. It relies solely on the metaphysical truism that abstract objects have no causal powers. And that’s what a Turing machine, strictly defined, is: a logical machine, or an abstract object.

    Computers and programs, when physically instantiated, definitely have causal powers. We wouldn’t pay money for them otherwise.

    The thought experiment involves an instance of the abstract “person” category running an instance of the abstract “program” category. The instances are concrete, yet according to Searle, original intentionality is still absent.

    Thus the abstract vs. concrete question can’t be the relevant one.

    My reading of Searle is that the relevant question is whether you have semantics at the lowest level of the system. A computer manipulates symbols without regard to their meaning; a brain is semantic at its core (according to Searle). Thus the latter possesses original intentionality while the former does not.

    I’m with Dennett on this one. The brain is a syntactic machine, just like the computer and program. The intentionality we ascribe to it is really “as if” intentionality, not original intentionality.

  41. keiths: I’m with Dennett on this one. The brain is a syntactic machine, just like the computer and program. The intentionality we ascribe to it is really “as if” intentionality, not original intentionality.

    Yes, I’m slowly coming around to a Dennettian or Dennettian/Churchlandian position on this stuff . . . I like to think of Dennett’s point as rejecting the very distinction between original and derived intentionality. (Though the intentionality we ascribe is ascribed to the person, not to the brain.)

  42. BruceS,

    Well, I am still just not sure that there is “rules following” going on, as much as there is data matching. Either way, I don’t make much of Searle’s argument, but even though I think Scott A’s assessment of the argument is valid, I don’t go to this point:

    “But I always failed, because I couldn’t find a single MIT undergrad who thought Searle’s position made sense and would argue for it. With increasing desperation, I’d argue Searle’s position myself, just to try to get a rise out of the students—but they’d calmly reply that, no, if a brain passing electrical signals around can be conscious, then a mechanical contraption passing slips of paper around can be conscious too … or at any rate, I hadn’t given them any real proposal to differentiate the one from the other. Why wasn’t that obvious?”

    I don’t get this part. Neither Searle’s argument, nor the students, have come close to arguing ANYTHING about consciousness. The only argument being made is that if a database is big enough (so far no one has made that database) one could approximate the reply a conscious person would make in most situations.

    Not an important realization in my opinion.

    I say, ask a computer to tell a joke that has never been told. So far, they can’t.

  43. keiths: A computer manipulates symbols without regard to their meaning

    In this case, not necessarily. If you ask a computer who Justin Bieber is, it doesn’t manipulate words according to their meaning; it is matching words. That is a different idea, even if it looks the same to the end receiver. In some cases it may be using the meaning of a word to match it to other replies, but I am still not so sure about that.

    keiths: The brain is a syntactic machine, just like the computer and program.

    But you still struggle with explaining a decision, so I don’t buy your assessment there. If one condition can equal two outcomes, then your reasoning doesn’t apply. Otherwise, two people asking Siri the exact same question, under the exact same conditions, might get two different answers.

    I am pretty sure that doesn’t happen. But it is a simple experiment. Just have two people stand next to Siri and ask the same question, see if you can get it to give different answers.

  44. phoodoo,

    But you still struggle with explaining a decision, so I don’t buy your assessment there.

    I don’t struggle with explaining decisions and choices, though it seems you struggle to understand me.

    If one condition can equal two outcomes, then your reasoning doesn’t apply.

    Think of a self-driving car in a particular condition and location. Now consider two cases. In case #1 there is a 10-minute traffic jam along the planned route. In case #2 there is no traffic jam. Is it really surprising to you that in case #1 the car can decide to follow a different route, choosing the fastest one? Does it really surprise you if in case #2 it keeps the current route?
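    The route decision in that example can be written as a deterministic function of the traffic input. A minimal sketch, with route names and travel times invented for illustration:

```python
# Toy sketch of the self-driving-car example: the chosen route is a
# deterministic function of base travel times plus current delays.
# All names and numbers here are invented for illustration.

def pick_route(base_times, delays):
    """Return the route with the smallest total (base time + delay)."""
    return min(base_times, key=lambda r: base_times[r] + delays.get(r, 0))

routes = {"A": 20, "B": 25}  # base travel times in minutes

jam_case = pick_route(routes, {"A": 10})  # case #1: 10-minute jam on route A
clear_case = pick_route(routes, {})       # case #2: no traffic jam
```

    Different inputs (jam vs. no jam) produce different choices, with no indeterminism anywhere in the program.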

    Otherwise, two people asking Siri the exact same question, under the exact same conditions, might get two different answers.

    I am pretty sure that doesn’t happen.

    It could easily happen. Suppose Siri were engineered to customize its responses depending on the user. Then Bob and Brenda could ask the same question — such as “Siri, where is the restroom?” — and get different answers.
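    That scenario can be made concrete with a sketch: a deterministic program whose answer depends on a stored user profile, so identical question text yields different replies. The users and locations below are made up.

```python
# Hypothetical per-user customization: same question, different stored
# context, different (but still deterministic) answers.
USER_PROFILE = {
    "Bob": "men's room on the 2nd floor",
    "Brenda": "women's room on the 1st floor",
}

def answer_restroom_question(user):
    # The question is identical; only the internal state (profile) differs.
    return f"The nearest restroom for you is the {USER_PROFILE[user]}."
```

    Nothing here requires two outcomes from one condition: the full condition includes the user profile, and the profiles differ.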

  45. keiths,

    Why are you giving me examples of DIFFERENT situations giving different outcomes as examples of the SAME situation giving different outcomes?

    You are still struggling with this concept.

  46. phoodoo,

    Because you’ve been sloppy about specifying what you mean when you say “one condition, two outcomes.” Condition of what, precisely? The car? The universe?

    If the car is in the same condition but the universe is not, then it’s easy to see how the outcome can change: one condition, two outcomes. Agreed?

  47. keiths: Because you’ve been sloppy about specifying what you mean when you say “one condition, two outcomes.”

    Oh my heavens keith, one condition means one condition. Yes, EVERYTHING the same. Just like the decision I proposed that you think you can make, given the exact same set of criteria. Like: do you want chocolate or vanilla? We can all agree that on some days you might like chocolate more (maybe you have been eating vanilla every day for a week straight) and on other days vanilla. But can you, under only ONE condition, make two decisions? Of course not, if you believe your computer analogy.

    So yes, we need ALL the conditions that the computer takes into account to be the same. Same sound of the voice (if that is what the computer is designed to account for), same time (if we program it for time), same weather (again, if its factors are programmed for weather), on and on. Same means same, keiths.

    Now, if you believe that your brain is taking into account world events in Syria when it chooses chocolate, then indeed the world events in Syria must be part of the condition that causes you to choose chocolate instead of vanilla. If you believe the amount of isotopes on Pluto affects your decision, then yes, that is part of the condition.

    Why is “same means same” a hard concept? If you ask your computer which is the fastest way home, and on one day a bridge is closed and on another day the bridge is open, we don’t expect to get the same result if the bridge is involved. Or if it’s snowing and there is a traffic jam. Or if it’s 4 a.m. instead of 4 p.m.

    Same means same, keiths. Whatever factors the “computer” is evaluating must be the same.

  48. phoodoo,

    Oh my heavens keith, one condition means one condition.

    One condition of the car doesn’t mean one condition of the universe, phoodoo. You need to learn to think and write more precisely.

    But can you, under only ONE condition, make two decisions?

    If everything is the same, then the outcome of the decision will be the same (ignoring possible quantum indeterminism). That includes when humans are involved.
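    The claim is just that of a pure function over a complete “condition”: run it twice on identical inputs and the decision is identical. A toy sketch, with the decision rule itself invented for the example:

```python
# Deterministic chooser: if the *entire* condition is the same, the
# outcome is the same. The rule itself is an invented toy example.
def choose_flavor(condition):
    """Pick whichever flavor was eaten less recently."""
    if condition["days_since_vanilla"] > condition["days_since_chocolate"]:
        return "vanilla"
    return "chocolate"

state = {"days_since_vanilla": 7, "days_since_chocolate": 0}
first = choose_flavor(state)
second = choose_flavor(dict(state))  # identical condition, fresh copy
```

    Change any factor in the condition and the outcome may change; keep everything the same and it cannot.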

    We’ve been over this again and again, but apparently you need more repetitions.
