2,657 thoughts on “Elon Musk Thinks Evolution is Bullshit”

  1. phoo,

    Your naiveté is charming, but this argument is old hat. It’s been discussed probably dozens of times at TSZ.

    I’m willing to walk you through it, but only if you commit up front to sticking with the discussion even when things don’t go your way (and they won’t). You have a habit of disappearing when the going gets tough — do I have your assurance that you won’t vanish in a cloud of vitriol when you start to lose this argument?

  2. keiths,

    It’s a false premise: I haven’t lost anything, and I haven’t disappeared anywhere.

    Not responding to hand-waving bullshit is not giving in to anything.

    As it stands, I have dispelled your myth that you can call the brain simply a matter of physical states and also claim that you can choose which physical state it should be in.

    I won’t hold my breath waiting for you to dispel that, because logically it can’t be dispelled.

  3. keiths:

    Setting that complication aside, I think it’s clear that the guy would pass back a Chinese note saying “Run!”. He would not run himself unless he smelled smoke.

    In other words, the guy doesn’t understand Chinese, but the system does.

    Bruce:

    Yes I agree that that is what would happen. But I also take as a moral that the system reply is lacking in some sense, because it would not run, even though it understands the Chinese, and even though its “body” (the room and the guy and the book) was in danger.

    The only reason the Chinese Room won’t run is because it isn’t set up that way in Searle’s thought experiment. If he had set it up with sensors and actuators, and expanded the rule set to include rules for responding to sensor indications and for manipulating the actuators, then the Room could in fact run from a fire.

    And that would be true even if the man himself had no idea what was going on, or why, and was just blindly following the rules.

  4. phoodoo,

    Again, I’m not going to spoon-feed you unless you commit to opening your mouth to let the choo-choo train in:

    I’m willing to walk you through it, but only if you commit up front to sticking with the discussion even when things don’t go your way (and they won’t). You have a habit of disappearing when the going gets tough — do I have your assurance that you won’t vanish in a cloud of vitriol when you start to lose this argument?

  5. phoo,

    If you’re afraid to even commit to a discussion, this is clearly not the topic for you.

    Your faith is too fragile.

  6. keiths,

    If you could have put together an argument capable of countering me, you would have done it. Others can judge for themselves the value of your words.

    I don’t blame you for waving the white flag; your position is indefensible.

  7. keiths:
    keiths:

    The only reason the Chinese Room won’t run is because it isn’t set up that way in Searle’s thought experiment. If he had set it up with sensors and actuators, and expanded the rule set to include rules for responding to sensor indications and for manipulating the actuators, then the Room could in fact run from a fire.

    And that would be true even if the man himself had no idea what was going on, or why, and was just blindly following the rules.

    Well, if it could not run because it did not have the right actuators, the system should have replied “get me the hell out of here!”. After all, under the system reply, it is the system that understands. How can it understand and yet not act to save itself?

  8. Bruce,

    Well, if it could not run because it did not have the right actuators, the system should have replied “get me the hell out of here!”. After all, under the system reply, it is the system that understands. How can it understand and yet not act to save itself?

    It all depends on how the rules are written.

    Under some sets of rules, the system would reply “get me the hell out of here!”. Other rule sets would render it indifferent to its own survival. Still others would make it suicidal. It could even be homicidal, attempting to coax its questioners into staying so that they would die in the fire.

    Just as in the case of understanding and responding to Chinese, there’s nothing about any of those behaviors that couldn’t be implemented purely syntactically, in my opinion.
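    To make the dependence on the rule set concrete, here is a toy sketch (all the strings and tables are invented for illustration; nothing here is from Searle’s paper, whose actual rule book is in Chinese and vastly larger):

```python
# A toy "Chinese Room" dispatcher: the reply is fixed entirely by the rule
# table, and the lookup consults only the literal input string, never its
# meaning. Swapping tables changes the Room's "personality" without
# changing the mechanism at all.

SELF_PRESERVING = {
    "I smell smoke, and the fire alarm is going off!": "Get me the hell out of here!",
}

INDIFFERENT = {
    "I smell smoke, and the fire alarm is going off!": "Noted. Next question, please.",
}

def chinese_room(note: str, rules: dict) -> str:
    """Blindly match the incoming note against the rule book."""
    return rules.get(note, "I have no rule for that input.")

alarm = "I smell smoke, and the fire alarm is going off!"
print(chinese_room(alarm, SELF_PRESERVING))  # -> "Get me the hell out of here!"
print(chinese_room(alarm, INDIFFERENT))      # -> "Noted. Next question, please."
```

    The point of the sketch: nothing in `chinese_room` is sensitive to what the strings mean, yet the system’s outward behavior can be made self-preserving, indifferent, or anything else simply by rewriting the table.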

  9. Phoodoo is surely correct to argue that it cannot be the case that

    1. The brain is a deterministic physical structure. If the brain is in state A at t1 and in state B at t2, the change in state is determined entirely by the laws of physics. Since the laws of physics hold necessarily, whatever a brain does, it could not have done otherwise.

    2. Persons are rational cognitive agents, capable of reflecting on what is truly best, revising their desires and needs accordingly, deciding on a course of action that is grounded in a rational line of thought, and agreeing to be held responsible for the consequences of that action. A person is therefore aware that, whatever she chooses to do, she could have done otherwise and is only held responsible for what she chooses to do.

    and

    3. The brain is the rational cognitive agent.

    It cannot be the case that all three are true, because then it is one and the same thing that both could have done otherwise and cannot have done otherwise.

  10. Kantian Naturalist:
    Phoodoo is surely correct to argue that it cannot be the case that

    1. The brain is a deterministic physical structure. If the brain is in state A at t1 and in state B at t2, the change in state is determined entirely by the laws of physics. Since the laws of physics hold necessarily, whatever a brain does, it could not have done otherwise.

    But that’s an equivocal statement. A brain exists in order to be able to do one of several things (at least) in most situations, and to choose which one is “best to do.” Obviously in retrospect, however, it could not have done anything but what it did (as it’s deterministic), yet it had options and, indeed, it had to determine what was (most likely) the best. It chooses in order to act deterministically in accordance with the situation.

    It could have chosen otherwise. Of course that’s only (in retrospect) if things had actually been different, hence if the causation of the choosing were different. You can only say that it couldn’t have done otherwise in a certain sense, but not in another sense of the word. Especially in prospect, one does not know how one will choose because one doesn’t know all of the causal forces, which is the sense in which one could have chosen differently: things had to be weighed in order to find out how the causation of the decision would work out.
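    To put the weighing point concretely, here is a toy deterministic chooser (the options, features, and weights are invented for illustration; nothing here models a real brain):

```python
# Deterministic "weighing of options": given the same inputs, the same
# option always wins, yet the winner is only revealed by actually doing
# the weighing.

def choose(options, weights):
    """Score each option with a fixed rule and return the highest scorer."""
    def score(option):
        return sum(weights[f] * v for f, v in option["features"].items())
    return max(options, key=score)

options = [
    {"name": "run",  "features": {"safety": 0.9, "effort": 0.4}},
    {"name": "stay", "features": {"safety": 0.1, "effort": 0.9}},
]
weights = {"safety": 1.0, "effort": -0.5}

print(choose(options, weights)["name"])  # always "run" for these inputs
```

    In prospect the system “has options” in the sense that all of them enter the computation; in retrospect the outcome could not have been otherwise unless the inputs or weights had been different.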

    2. Persons are rational cognitive agents, capable of reflecting on what is truly best, revising their desires and needs accordingly, deciding on a course of action that is grounded in a rational line of thought, and agreeing to be held responsible for the consequences of that action. A person is therefore aware that, whatever she chooses to do, she could have done otherwise and is only held responsible for what she chooses to do.

    Naturally the person could have done otherwise. That’s the whole point of making a decision. How would anyone be held accountable, though, unless the act was deterministic, caused by something in the person that is deemed socially unacceptable? To be sure, in retrospect a wrongful decision could only have gone one way, mainly because the brain acted in a manner that is not socially allowed. If it were only chance or “free will” that was involved, how would society hold the brain responsible? Your brain would never be in its right mind, because it wouldn’t be determined by “character” plus events.

    and

    3. The brain is the rational cognitive agent.

    It cannot be the case that all three are true, because then it is one and the same thing that both could have done otherwise and cannot have done otherwise.

    It is true that it both could have done otherwise and cannot have done otherwise, mainly because these words are being used with rather different meanings. In prospect you have little or no idea of what you will choose, but have to look at several options. In retrospect, of course it had to come out one way in the end (why do people try to influence decisions? Because causal forces work out in the brain), and then we judge the brain much as we would a computer and/or its software–as having chosen correctly or incorrectly given the circumstances.

    It’s often said that someone couldn’t do otherwise, because of how that person’s mind/brain works. That’s pretty much the point of judging how a person acted in a particular situation. It’s not very compatible with the judicial theory of “free will,” but then it really doesn’t make much sense to hold anyone accountable for something not determined by their brain (given the circumstances and influences), either.

    Glen Davidson

  11. keiths:

    It all depends on how the rules are written.

    Well, as I recall the original paper, questions in Chinese were passed to the unilingual English-speaker in the room and answers in Chinese were passed out. The person in the room used some kind of lookup in a book to provide those answers.

    My “run” scenario does not really fit that Q&A protocol. I think the closest I could come would be to ask what would happen if the incoming question was
    “What would you do if I told you there was a real fire and this is not just another question?”
    and the man would use the book to say
    “I would ask you to get me out of here”

    That is not very helpful to the intuition I was trying to motivate.

    So let me put my point a different way. I don’t think the Chinese room is a useful way of checking for understanding, so I don’t think the system reply is relevant. According to my intuitions, understanding a language requires

    1. Causal interaction with some of the objects that language references.* If my memory of the movie of Helen Keller’s life is accurate, my point is illustrated by the scene where Helen “gets it” after the teacher dips her hand in water and signs “water” on her palm and then repeats that with other objects.

    2. Participation in a community of language users. That would include shared experiences and discussion regarding some parts of the world the language is used to refer to. I’d also expect that the entity in question demonstrates that it “knows” it is a separate entity, something part of a community yet separate from it.

    Of course, that is only my intuition on the matter. Yours may differ. I suspect that any arguments about who is right would remain arguments based on intuition. So likely not worth pursuing.

    (ETA: I consider Dennett’s giant robot intuition pump to be motivating a similar intuition regarding when one might ascribe “sorta/ersatz original intentionality”, which I would see as Dennett’s version of understanding meaning (p. 166 of the book).)

    ————————-
    * To close a loop in the thread (ETA: actually a different thread): “causal interaction” would not rule out a community of BIVs interacting with a common electronic “world”, but it would rule out a short-lived BB or a person being fooled by a capricious, supernatural demon.

  12. I’ve been thinking about this passage from Dennett:

    keiths: How can meaning make a difference? It doesn’t seem to be the kind of physical property, like temperature or mass or chemical composition, that could cause anything to happen. What brains are for is extracting meaning from the flux of energy impinging on their sense organs, in order to improve the prospects of the bodies that house them and provide their energy. The job of a brain is to “produce future” in the form of anticipations about the things in the world that matter to guide the body in appropriate ways. Brains are energetically very expensive organs, and if they can’t do this important job well, they aren’t earning their keep. Brains, in other words, are supposed to be semantic engines. What brains are made of is kazillions of molecular pieces that interact according to the strict laws of physics and chemistry, responding to shapes and forces; brains, in other words, are in fact only syntactic engines.

    Any configuration of brain parts . . . will be caused by physicochemical forces to do whatever it does regardless of what the input means (or only sorta means). Don’t make the mistake of imagining that brains, being alive, or made of proteins instead of silicon and metal, can detect meanings directly, thanks to the wonder tissue in them. Physics will always trump meaning. A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine — physically impossible. So how can brains accomplish their appointed task? By being syntactic engines that track or mimic the competence of the impossible semantic engine.

    I agree with 80% of this.

    Dennett is surely right that brains cannot grasp meanings directly, once that’s explicated carefully.

    What brains do not do is immediately apprehend intensional entities, whether Meinong’s ontologically queer objects or Fregean Sinne that exist in a ‘third realm’ distinct from both physical and mental objects. There’s just no way to reconcile any inflationary ontology of meaning with naturalism; there’s no magic fluid in the brain.

    That’s the 80% I agree with: one cannot be a naturalist and accept intensional entities, because brains cannot immediately apprehend intensional entities. (Or, if they do, it’s by virtue of some magical fluid or chunk of wonder tissue that is impervious to empirical discovery.)

    Now for the 20% I disagree with.

    Once we reject non-naturalism, there are many naturalistic options left on the table. My preference is to be slightly more Deweyan and Gibsonian than Dennett, and say that meanings are constituted by brain-body-environment transactions. There’s nothing ontologically weird about meanings, so we don’t need to eliminate them — we just need to understand what they are.

    Once we see that meanings are constituted by brain-body-environment transactions, rather than non-naturalistic entities apprehended by immaterial minds or (even more mysteriously) by material brains, we can happily make sense of the idea that there are indeed genuine semantic engines: the whole living animal in its ongoing exchanges with its environment is the semantic engine.

    So that’s one set of complaints about how Dennett is using the term “semantic engine” there. He thinks there aren’t any genuine semantic engines, only good-enough semantic-engine simulators. I think that there are genuine semantic engines, and they are animals.

    The other complaint about Dennett is his use of “syntactical”. I don’t understand what it means to say that physico-chemical forces are “syntactical” or that physico-chemical properties are “syntactical” properties. If he had said “causal” I’d be fine, and he seems to mean that at times, but then I don’t understand how “causal” can be interchangeable with “syntactical”.

  13. Kantian Naturalist:

    The other complaint about Dennett is his use of “syntactical”. I don’t understand what it means to say that physico-chemical forces are “syntactical” or that physico-chemical properties are “syntactical” properties. If he had said “causal” I’d be fine, and he seems to mean that at times, but then I don’t understand how “causal” can be interchangeable with “syntactical”.

    Perhaps “causal” might be of concern if one wanted to allow for mental contents, like beliefs, to be able to cause things. Of course, in that case the account of what causation is or how to establish it might differ between brain processes and (folk) psychology.

    I think Dennett’s use of “syntactical” might relate to Fodor’s LOT, which says that only the form/syntax of abstract symbols as configured in the brain can affect the operations of the LOT. Meaning must be strictly locked into that form-based behavior for LOT to work.

  14. BruceS: A □ ⊃ B.

    Answering one of my own questions on Quine’s notation in Intensions Revisited after a closer look at Sider:

    A ⊃ B is material implication, the usual “if A then B”, which is the same as ~A v B

    □( A ⊃ B ) is strict implication: in any possible world where A is true, so is B

    A □ ⊃ B is counterfactual implication: if A would have happened, then so would have B

    Still don’t know what Quine meant by A . ⊃ B or A . ⊃ . B. However, unlike the counterfactual notation, I have not seen that anywhere else. (It does seem Quine uses . for AND elsewhere, but I don’t know if that is relevant).
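    For reference, the three conditionals line up like this in standard possible-worlds terms (a sketch; the counterfactual connective is usually typeset as a box-arrow, and none of this is Quine’s own formatting). One guess on the remaining puzzle: the stray dots may just be Principia-style punctuation marking scope, a convention Quine follows elsewhere, though I’m not certain.

```latex
% Standard glosses of the three conditionals (requires amsmath/amssymb):
\begin{align*}
A \supset B &\;\equiv\; \lnot A \lor B
  && \text{material: false only when $A$ is true and $B$ is false}\\
\Box(A \supset B) &
  && \text{strict: $B$ holds in every possible world where $A$ holds}\\
A \mathrel{\Box\!\!\rightarrow} B &
  && \text{counterfactual: $B$ holds in the closest worlds where $A$ holds}
\end{align*}
```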

  15. BruceS: Perhaps “causal” might be of concern if one wanted to allow for mental contents, like beliefs, to be able to cause things. Of course, in that case the account of what causation is or how to establish it might differ between brain processes and (folk) psychology.

    Quite frankly I think Dennett should say that meanings do have causal efficacy, if we’re talking at the personal level. Semantic engines are as real as anything else is, if ontological commitments are specified relative to the stance taken. The only ontological commitments that are NOT stance-dependent are “real patterns”.

    I think Dennett’s use of “syntactical” might relate to Fodor’s LOT, which says that only the form/syntax of abstract symbols as configured in the brain can affect the operations of the LOT. Meaning must be strictly locked into that form-based behavior for LOT to work.

    If so, that would be a big problem — then Dennett’s entire position would be vulnerable to the enactivist criticisms of Turing machine functionalism leveled against Fodor’s LOT.

  16. Kantian Naturalist: I agree with 80% of this.

    I don’t.

    I sometimes think of arguing “there’s no such thing as meaning.” My point would not be to deny meaning. Rather, it would be to deny the thingness of meaning.

    The kinds of arguments that we see are infested with assumptions that meaning is a thing. You talk of intensional entities. Maybe there are no such things as intensional entities.

    That Dennett quote begins “How can meaning make a difference?”. Let’s turn it around and apply to a computer, with “How can truth values make a difference?”

    The computer works electrically, with units of electrical charge that we call “bits”. But we think of those as expressing truth values. Truth values don’t actually exist, except in our abstract accounts.

    With a computer, we think of the truth values as being carried by those electrical charges (bits). Why not, similarly, think of meaning as being carried by the neural pulses?
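    A toy illustration of the point (invented, of course; real gates switch voltages, not Python integers):

```python
# A NAND gate transforms signal levels by a fixed rule. Nothing in the
# operation refers to truth or falsity; "True"/"False" is an interpretation
# we layer on top in our abstract accounts.

def nand(a: int, b: int) -> int:
    """Output is low only when both inputs are high: pure signal shuffling."""
    return 0 if (a == 1 and b == 1) else 1

AS_TRUTH = {0: False, 1: True}  # our interpretive gloss, not the physics

for a in (0, 1):
    for b in (0, 1):
        out = nand(a, b)
        print(f"signals {a},{b} -> {out}  "
              f"(read as: {AS_TRUTH[a]} NAND {AS_TRUTH[b]} = {AS_TRUTH[out]})")
```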

    In an earlier post, keiths says “The question here isn’t whether meaning can emerge from the operation of networks of neurons; it’s whether the operation of a neuron itself takes meaning into account.” But that seems backward. The neurons have no need to take meaning into account. Rather, meaning is required to take into account the neural behavior.

    Again, from that Dennett quote:

    Don’t make the mistake of imagining that brains, being alive, or made of proteins instead of silicon and metal, can detect meanings directly, thanks to the wonder tissue in them. Physics will always trump meaning.

    That’s where Dennett leads himself astray. In our computers, the bits are rigid designators. We design our logic chips to create this rigidity. By contrast, neural pulses are adaptive designators. The neurons are constantly adapting their responses (their trigger points).

    Yes, Dennett can convince himself that it is all physics. But it is really all biology. We cannot look at physics applied to protein molecules, without also taking into account that new protein molecules are being created and others are being “uncreated” as part of the continuing biology. And those changes are involved in the adaptive nature of neural pulses.

    Kantian Naturalist: Semantic engines are as real as anything else is, if ontological commitments are specified relative to the stance taken. The only ontological commitments that are NOT stance-dependent are “real patterns”.

    That was my point about cause: one needs to give different accounts if one uses it both for cause in the brain (instead of calling that syntax) and cause of mental contents.

    I’m unclear whether Dennett’s version of real patterns escapes that requirement for differing accounts. Dennett does talk about taking the intentional stance with respect to a subpersonal hierarchy of components (all the way down to individual neurons), with the ascribed “beliefs” being simplified at each step. Presumably the nature of the real patterns would change in a similar way, as would the associated account of causation.

    If so, that would be a big problem — then Dennett’s entire position would be vulnerable to the enactivist criticisms of Turing machine functionalism leveled against Fodor’s LOT.

    I meant only that that is how he came to use the word, not that he believes anything like LOT, which he does not.

    If enactivism is modeled by DST, i.e. coupled, non-linear differential equations with a state space encompassing all of the brain, body, and world, then that system of equations could be modelled by a TM.
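    For instance, here is what such a simulation looks like in toy form (the dynamics and coefficients are invented for illustration and do not come from any published enactivist model):

```python
# A coupled, non-linear brain-body-world system, "modelled by a TM" via
# discrete Euler steps. The state space spans all three groups of variables.

def step(state, dt=0.01):
    brain, body, world = state
    d_brain = -brain + 2.0 * body * world   # brain driven by body-world coupling
    d_body  = -body + brain ** 2            # body responds non-linearly to brain
    d_world = 0.5 * (body - world)          # world relaxes toward the body's effects
    return (brain + dt * d_brain,
            body  + dt * d_body,
            world + dt * d_world)

state = (0.1, 0.2, 1.0)
for _ in range(1000):
    state = step(state)
print(state)  # the trajectory settles toward an attractor of the coupled system
```

    Whether the brain/body variables can be partitioned from the world variables, which I raise below, shows up here as the question of whether the coupling terms can be cut without changing the trajectories of interest.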

    An interesting issue would be whether that state space could be partitioned in such a way that the elements within the brain/body could be treated separately for some work. I’ve seen some work on DST in cognitive science talk about a basin of attraction representing the processing within the neural network. Since it is restricted to neurons, that type of basin of attraction implies there can be a separation of the state space used for the brain/body variables from the state space of the variables representing the world.

    On that note, I’ve never seen any enactivist work that takes the math of DST seriously enough to talk about the elements of that state space in detail. Are you aware of any work that does? The limited material I have seen uses DST more like a poetic metaphor for how the interactions of the brain, body, and world should be understood.

    The PP stuff is backed up by concrete math which has been implemented in connectionist models. I realize that the behavior of connectionist models of a neuron is a far cry from that of a real neuron, but it is a start and does give some credibility to the model.

    Has anything similar been done for enactivist models of any complexity?

  18. KN,

    Phoodoo is surely correct to argue that…

    Although I doubt that he knows the terminology, he’s arguing that:

    a) we aren’t “in charge” of our actions unless we have libertarian free will, and

    b) we don’t have libertarian free will if our brains operate strictly according to the laws of physics.

    He’s right about (b) but wrong about (a).

    First, libertarian free will is incoherent, so we can’t have it under any scenario (including dualism).

    Second, we are “in charge” of our actions if we are the ones who choose them. Whether we “could have done otherwise” is irrelevant.

    Third, notice the implicit dualism in phoodoo’s argument. In essence, he’s saying “Well, if my actions are determined by physics, then I’m not choosing them.” But in making that claim he is assuming that he is not subject to the laws of physics — that is, he is assuming the truth of dualism.

  19. Bruce,

    Well, if it could not run because it did not have the right actuators, the system should have replied “get me the hell out of here!”. After all, under the system reply, it is the system that understands. How can it understand and yet not act to save itself?

    keiths:

    It all depends on how the rules are written.

    Under some sets of rules, the system would reply “get me the hell out of here!”. Other rule sets would render it indifferent to its own survival. Still others would make it suicidal. It could even be homicidal, attempting to coax its questioners into staying so that they would die in the fire.

    Bruce:

    Well, as I recall the original paper, questions in Chinese were passed to the unilingual English-speaker in the room and answers in Chinese were passed out. The person in the room used some kind of lookup in a book to provide those answers.

    My “run” scenario does not really fit that Q&A protocol.

    In Searle’s original paper, the questions (in Chinese) were about a story (in Chinese) that had previously been presented to him. He produced the answers (in Chinese) by following a set of mechanical rules, despite not knowing a word of Chinese himself.

    Nothing about that scenario precludes the possibility that the rules are written so that the Room can respond not only to questions about the story, but also to general questions or statements — including statements like “I smell smoke, and the fire alarm is going off!”

  20. Bruce,

    According to my intuitions, understanding a language requires

    1. Causal interaction with some of the objects that language references.* If my memory of the movie of Helen Keller’s life is accurate, my point is illustrated by the scene where Helen “gets it” after the teacher dips her hand in water and signs “water” on her palm and then repeats that with other objects.

    By that criterion a Swamp Man version of you would not understand English, despite being capable of conversing fluently in the language. Are you comfortable with that?

    2. Participation in a community of language users. That would include shared experiences and discussion regarding some parts of the world the language is used to refer to. I’d also expect that the entity in question demonstrates that it “knows” it is a separate entity, something part of a community yet separate from it.

    If I develop a private language, known to no one else, and use it when writing in my journal, would you argue that I don’t understand it? That doesn’t make sense to me.

  21. KN:

    Once we see that meanings are constituted by brain-body-environment transactions, rather than non-naturalistic entities apprehended by immaterial minds or (even more mysteriously) by material brains, we can happily make sense of the idea that there are indeed genuine semantic engines: the whole living animal in its ongoing exchanges with its environment is the semantic engine.

    But the “whole living animal in its ongoing exchanges with its environment” is still a purely physical system, following the meaning-insensitive laws of physics. As Dennett put it:

    Physics will always trump meaning. A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine — physically impossible. So how can brains accomplish their appointed task? By being syntactic engines that track or mimic the competence of the impossible semantic engine.

    Semantics doesn’t add anything to the causal story. Animals aren’t semantic engines — they’re just syntactic engines that act sorta like semantic engines.

    KN:

    The other complaint about Dennett is his use of “syntactical”. I don’t understand what it means to say that physico-chemical forces are “syntactical” or that physico-chemical properties are “syntactical” properties. If he had said “causal” I’d be fine, and he seems to mean that at times, but then I don’t understand how “causal” can be interchangeable with “syntactical”.

    It isn’t. The syntactic/semantic distinction is separate from the causal/noncausal distinction.

    Dennett recognizes semantic engines as a logical possibility. If that possibility actually obtained, then meanings would have a causal role in the world.

    Dennett draws his semantic/syntactic distinction between scenarios in which meaning has a causal role and those in which it doesn’t.

  22. Neil,

    That Dennett quote begins “How can meaning make a difference?”. Let’s turn it around and apply to a computer, with “How can truth values make a difference?”

    The computer works electrically, with units of electrical charge that we call “bits”. But we think of those as expressing truth values. Truth values don’t actually exist, except in our abstract accounts.

    That’s right. Logic gates in a processor are no more sensitive to meaning than neurons in a brain.

    With a computer, we think of the truth values as being carried by those electrical charges (bits). Why not, similarly, think of meaning as being carried by the neural pulses?

    We can and do think of it that way. Dennett is just reminding us that this is only a stance. If we look at the underlying reality, we can see that meaning plays no causal role.

    That’s where Dennett leads himself astray. In our computers, the bits are rigid designators. We design our logic chips to create this rigidity. By contrast, neural pulses are adaptive designators. The neurons are constantly adapting their responses (their trigger points).

    The adaptation of a neuron is pure physics, insensitive to meaning.

    P.S. I’m still curious about this.

  23. Given his other commitments, Dennett should say that meanings do play a causal role with regard to the personal stance, but not with regard to the design stance or physical stance.

    With a few modifications, I could accept that as my own view perfectly well.

    What Dennett should not do, as I read his underlying stance/pattern distinction, is say (with Quine) that meanings are not real because there aren’t any relative to the physical stance. That position would make Dennett into a physicalist, and that’s not consistent with his constant attempts to navigate between realism and anti-realism.

    Granted, “semantic engine” is his term and he can use it as he likes. Nevertheless, there is a clear sense in which animals are genuine semantic engines as well as a clear sense in which nothing is (if naturalism is true). (Gibson vs Frege.)

  24. KN,

    What Dennett should not do, as I read his underlying stance/pattern distinction, is say (with Quine) that meanings are not real because there aren’t any relative to the physical stance. That position would make Dennett into a physicalist, and that’s not consistent with his constant attempts to navigate between realism and anti-realism.

    Dennett is a physicalist.

  25. keiths:

    By that criterion a Swamp Man version of you would not understand English, despite being capable of conversing fluently in the language. Are you comfortable with that?

    If I develop a private language, known to no one else, and use it when writing in my journal, would you argue that I don’t understand it? That doesn’t make sense to me.

    On Swamp Man: at its instant* of creation, yes. “Conversing” takes things beyond that instant and would meet my criteria.

    On private languages: We’ve discussed them before and I’ve formulated decisive replies to all your previous points.

    However, to do so, I had to invent a private language of my own and did so in a manner which had no dependence on my ability to understand English or Fortran. That language would not mean anything to you or anyone else and further it cannot be translated to some other language, but I understand it perfectly.

    —————-
    * At that instant, an SM is essentially a BB which I mentioned in the earlier post. Some recent versions of SMs have quantum fluctuations as their origin stories to give them a patina of possibility.

  26. keiths:

    Dennett is a physicalist.

    But I don’t think he is reductionist about meaning (let alone an eliminativist). See Ch 33 of IP.

  27. keiths: Dennett is a physicalist.

    You might be right, but if you are, I’ve been reading Dennett all wrong & for a long time. In order for Dennett to be a physicalist, he would have to think that the physical stance has ontological priority over the design stance and the intentional stance. That doesn’t cohere with my understanding of Dennett. I thought he had been saying that which stance we adopt is a pragmatic choice, and that different stances are useful in different ways. The only criterion of usefulness here is whether adopting that stance allows for verifiable predictions.

    That said, Dennett is a verificationist (and a very sophisticated one). He objects to qualia because they cannot be verified, and objects to phenomenology generally for the same reason. But beliefs and desires can be verified — there are publicly available criteria for assessing whether the attribution of beliefs or desires is a good-enough interpretation of some pattern of behavior. That gives all the realism that we need — and all the realism that it makes sense for us to have. Dennett is not a metaphysical realist, though I can’t tell if Putnam’s criticism of metaphysical realism influenced him directly or only indirectly via Rorty.

    BruceS: On that note, I’ve never seen any enactivist work that takes the math of DST seriously enough to talk about the elements of that state space in detail. Are you aware of any work that does? The limited material I have seen uses DST more like a poetic metaphor for how the interactions of the brain, body, and world should be understood.

    I think that enactivism is committed to a strong form of holism that makes it very difficult to operationalize, and that in turn really does endanger its status as a scientific theory. Shaun Gallagher has recently suggested that enactivism is better understood as a philosophy of nature than as a scientific theory. (I think this is correct.) Check out footnote #14 (I think) in the paper in philosophy of perception I sent you the other day.

    Kantian Naturalist: Shaun Gallagher has recently suggested that enactivism is better understood as a philosophy of nature than as a scientific theory. (I think this is correct.) Check out footnote #14 (I think) in the paper in philosophy of perception I sent you the other day.

    Thanks, I’d noticed that, but it is in a book I don’t have access to.
    But in searching to see if there was a copy on the web, I came across this blog post by a neuroscientist who was mentored by Gallagher. He discusses the same issues we have been discussing in various exchanges:
    – how enactivism could fit with neuroscience
    – the relation of enactivism and PP
    – when DST is explanatory and when it is only predictive (Kaplan and Craver have a paper covering this in more detail)

    Gallagher shows up in the comments.

    enactive-bayesians-response-to-the-brain-as-an-enactive-system-by-gallagher-et-al

  29. keiths:

    In Searle’s original paper, the questions (in Chinese) were about a story (in Chinese) that had previously been presented to him. He produced the answers (in Chinese) by following a set of mechanical rules, despite not knowing a word of Chinese himself.

    Nothing about that scenario precludes the possibility that the rules are written so that the Room can respond not only to questions about the story, but also to general questions or statements — including statements like “I smell smoke, and the fire alarm is going off!”

    Right, I actually knew that, but it was not relevant to the point I was making. (Plus I figured I’d leave an opening for you to comment.)

    The point I was making: In my first post, I asked if the guy would physically run, and later amended this to the system physically running if it had actuators. But according to the protocol, the system cannot do anything except provide written responses to questions. So my posts where I talked about some entity taking action broke the rules.

  30. Neil Rickert:

    I sometimes think of arguing “there’s no such thing as meaning.” My point would not be to deny meaning. Rather, it would be to deny the thingness of meaning.

    The kinds of arguments that we see are infested with assumptions that meaning is a thing. You talk of intensional entities. Maybe there are no such things as intensional entities.

    Close enough to Quine for a ringer in philosophical horseshoes.

    The computer works electrically, with units of electrical charge that we call “bits”. But we think of those as expressing truth values. Truth values don’t actually exist, except in our abstract accounts.
    With a computer, we think of the truth values as being carried by those electrical charges (bits). Why not, similarly, think of meaning as being carried by the neural pulses?

    That’s also close enough to a philosopher to score, namely Dennett’s intentional stance. But the meaning he ascribes to neurons is a simplified version of that ascribed to whole persons.

    A different approach to agreeing with you would be from philosophy of language, which does recognize logical constants true and false, as well as meanings, as types of semantic values assigned in analysing natural or artificial languages (although not all of them would include sense/intensions, some would go directly to reference).

    (ETA: If memory serves, semantic values of T/F apply to whole sentences, so syntax comes into the picture for helping to determine the appropriate type of semantic value to use.)

  31. Bruce:

    According to my intuitions, understanding a language requires

    1. Causal interaction with some of the objects that language references.* If my memory of the movie of Helen Keller’s life is accurate, my point is illustrated by the scene where Helen “gets it” after the teacher dips her hand in water and signs “water” on her palm and then repeats that with other objects.

    keiths:
    By that criterion a Swamp Man version of you would not understand English, despite being capable of conversing fluently in the language. Are you comfortable with that?

    Bruce:

    On Swamp Man: at its instant* of creation, yes. “Conversing” takes things beyond that instant and would meet my criteria.

    No, it doesn’t meet your criterion. Suppose Swamp Man materializes next to me and I immediately ask him to name the last four US presidents. He replies, “Bush, Clinton, Bush, Obama.”

    My question is the first utterance he’s ever heard. He hasn’t causally interacted, either directly or indirectly, with those four men or with the US political system. By your criterion, he doesn’t understand the question, he doesn’t understand his own answer, and he doesn’t understand the English language.

  32. Bruce:

    2. Participation in a community of language users. That would include shared experiences and discussion regarding some parts of the world the language is used to refer to. I’d also expect that the entity in question demonstrates that it “knows” it is a separate entity, something part of a community yet separate from it.

    keiths:

    If I develop a private language, known to no one else, and use it when writing in my journal, would you argue that I don’t understand it? That doesn’t make sense to me.

    Bruce:

    On private languages: We’ve discussed them before and I’ve formulated decisive replies to all your previous points.

    However, to do so, I had to invent a private language of my own and did so in a manner which had no dependence on my ability to understand English or Fortran. That language would not mean anything to you or anyone else and further it cannot be translated to some other language, but I understand it perfectly.

    I’m not talking about a Wittgensteinian private language — just a plain ol’ private language. Perfectly effable, perfectly translatable, but a language whose “user community” has never included anyone but me.

  33. KN:

    What Dennett should not do, as I read his underlying stance/pattern distanction, is say (with Quine) that meanings are not real because there aren’t any relative to the physical stance. That position would make Dennett into a physicalist, and that’s not consistent with his constant attempts to navigate between realism and anti-realism.

    keiths:

    Dennett is a physicalist.

    Bruce:

    But I don’t think he is reductionist about meaning (let alone an eliminativist). See Ch 33 of IP [Intuition Pumps].

    Chapter 33 is an argument for the indispensability of the intentional stance, not for the causal power of meanings.

    Dennett makes it clear:

    Computers, as physical systems, must be, at best, syntactic engines, responding directly to physically transducible differences, not meanings. But both A and B have been designed to mirror as closely as possible the imaginary know-it-all, a semantic engine full of understood truths.

  34. KN,

    You might be right, but if you are, I’ve been reading Dennett all wrong & for a long time. In order for Dennett to be a physicalist, he would have to think that the physical stance has ontological priority over the design stance and the intentional stance.

    He does give ontological priority to the physical stance.

    The design stance and intentional stance are just useful approximations, applied when the physical stance is too unwieldy to be practical.

    From Intentional Systems Theory:

    Design-stance predictions are riskier than physical-stance predictions, because of the extra assumptions I have to take on board: that an entity is designed as I suppose it to be, and that it will operate according to that design—that is, it will not malfunction. Designed things are occasionally misdesigned, and sometimes they break…

    An even riskier and swifter stance is the intentional stance, a subspecies of the design stance, in which the designed thing is treated as an agent of sorts, with beliefs and desires and enough rationality to do what it ought to do given those beliefs and desires…

  35. keiths:

    Nothing about that scenario precludes the possibility that the rules are written so that the Room can respond not only to questions about the story, but also to general questions or statements — including statements like “I smell smoke, and the fire alarm is going off!”

    Bruce:

    The point I was making: In my first post, I asked if the guy would physically run, and later amended this to the system physically running if it had actuators. But according to the protocol, the system cannot do anything except provide written responses to questions. So my posts where I talked about some entity taking action broke the rules.

    There are no rules saying that the man can’t run, or pause to have lunch, or take bathroom breaks. But even if there were, why should we be bound by them? This is a thought experiment, and thought experiments are meant to be tweaked, expanded upon, and extended.

    Your objection was:

    Well, if it could not run because it did not have the right actuators, the system should have replied “get me the hell out of here!”. After all, under the system reply, it is the system that understands. How can it understand and yet not act to save itself?

    My point is that it could have run if it had the right sensors and actuators, and it could have replied “get me the hell out of here!”. There is nothing about mechanical rule-following that precludes such responses given the right set of rules.

    In other words, the system reply survives your challenge.

  36. keiths:

    Chapter 33 is an argument for the indispensability of the intentional stance, not for the causal power of meanings.

    Dennett makes it clear:

    Be that as it may, my comment was simply trying to say one can be a physicalist without being a reductionist. I take Dennett’s view that “meaning is ineliminable” as an expression of a type of non-reductionism of explanations involving meaning. In Ch 42, he makes similar comments about biological evolution not being reducible in some sense.

  37. keiths:

    No, it doesn’t meet your criterion. Suppose Swamp Man materializes next to me and I immediately ask him to name the last four US presidents. He replies, “Bush, Clinton, Bush, Obama.”

    My question is the first utterance he’s ever heard. He hasn’t causally interacted, either directly or indirectly, with those four men or with the US political system. By your criterion, he doesn’t understand the question, he doesn’t understand his own answer, and he doesn’t understand the English language.

    Understanding is not a yes/no concept.

    Here is a summary of my view on this exchange:

    First, I am assuming the Kripke/Putnam (KP) view that meaning (and hence understanding) depends partly on causal history and on the context of one’s environment and linguistic community. If you don’t agree with KP, you won’t accept my arguments.

    I think we have been discussing two separate but related issues:
    The first issue: What empirical tests can one use to attribute understanding to some entity, regardless of its internal structure.
    I think the Chinese Room test as Searle originally described it is inadequate because it does not allow us to assess the KP criteria.

    The second issue: given some specification for an entity, when can we predict that it will successfully meet the test I proposed? In particular, what about a being created by a completely random event, when that entity is stipulated to exactly match a human who has the right causal history and embedding in a linguistic community? Will it pass that test for understanding?

    Answering that question comes down to what one is willing to conclude about the nature and predicted behavior of such an entity. I’m reluctant to make empirical predictions based on definitions which rely on random events.

    Now suppose we consider Star Trek transporters. That seems similar to a swampman in some respects: a completely new being specified to be identical to an existing one*. Can we predict that transported JT Kirk still understands 23rd century English? I’d say yes: if transporters are part of standard technology, then we have the empirical evidence to predict JT Kirk’s ability to understand.*

    What if JT Kirk was transported to 23rd century twin earth? I’d say we can predict that JT’ will understand a language, but it won’t be precisely the same language as the TE language, although if he stays there his understanding will eventually become that of the same language.

    Now what about the original random swampman? Without any conversing, won’t he (or “it”?) have thoughts that he does understand language, including memories of learning words and participating in a linguistic community? Two concerns: first, as Monty Python have decisively shown, one can be wrong about one’s language abilities. Second, whether or not such a random entity has memories is open to causal-historical concerns similar to those I mentioned about meaning and understanding.

    —————–
    *I believe one episode had a throwaway line about using “Heisenberg compensators” to get around quantum limitations on cloning and identity; these limitations could be used to question the quantum fluctuation version of swampman.

  38. keiths:

    I’m not talking about a Wittgensteinian private language — just a plain ol’ private language. Perfectly effable, perfectly translatable, but a language whose “user community” has never included anyone but me.

    I admit my post on that was elliptical. So let me be more explicit.

    I don’t think one can invent a private language that one claims to understand without relying on one’s understanding of some non-invented language, and that understanding must rely on the KP criteria mentioned in my previous post.

  39. Bruce,

    First, I am assuming the Kripke/Putnam (KP) view that meaning (and hence understanding) depends partly on causal history and on the context of one’s environment and linguistic community. If you don’t agree with KP, you won’t accept my arguments.

    It isn’t just that I don’t accept your criteria for what constitutes “understanding a language”. I’ve also explained why I don’t accept them by providing examples where a) your criteria are not met, and yet b) it seems perverse to deny that understanding is taking place.

    First, Swampman. At the moment of his creation, Swampman is physically identical to his non-swamp counterpart in every respect. Assuming you are not a dualist, would you agree that every behavior exhibited by Swampman is identical (modulo quantum indeterminism) to the behavior that would have been exhibited by non-Swampman under identical conditions?

    As a physicalist, I think their behaviors would be identical. They start in the same physical state, their environmental “inputs” are identical, and so they will proceed through the same sequence of physical states (again, modulo quantum indeterminism). That means that not only will their behaviors be identical, but also their subjective experiences.
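    In toy form (the transition rule is invented; real physics is incomparably richer, but the logic is the same):

```python
# Two systems with identical state and identical inputs evolve identically.
# How each initial state came about (decades of causal history vs. a lucky
# lightning strike) never enters the transition function.

def transition(state: int, stimulus: int) -> int:
    """A fixed, history-blind update rule standing in for physical law."""
    return (3 * state + stimulus) % 101

swampman = non_swampman = 42        # physically identical at the moment of creation
stimuli = [7, 7, 3, 1, 9]           # identical environmental inputs

for s in stimuli:
    swampman = transition(swampman, s)
    non_swampman = transition(non_swampman, s)
    assert swampman == non_swampman  # the trajectories never diverge

print(swampman, non_swampman)
```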

    If two physically identical beings are conversing identically in English and having the same subjective experiences, how does it make sense to say that one of them understands English and the other doesn’t?

    It’s not like there’s some non-physical residue of causal history that attaches to one but not the other, somehow granting true understanding to the attachee.

  40. keiths:

    As a physicalist, I think their behaviors would be identical.

    One can be a physicalist in some sense while still holding positions like mine on content externalism. I see my views as similar to Dennett’s, as expressed in his Swampman and twin earth sections of IP:
    – It is questionable practice to draw scientific/empirical conclusions from thought experiments like Swampmen and Cow Sharks
    – the normative (not behavioral) aspects of meaning and intentionality depend on context and causal history

    it seems perverse to deny that understanding is taking place.

    Just because I am a content externalist does not mean I am a flasher!

    I suspect that is my last word on the subject for this thread, if you know what I mean.

  41. BruceS: So let me put my point a different way. I don’t think the Chinese room is a useful way of checking for understanding, so I don’t think the system reply is relevant. According to my intuitions, understanding a language requires

    1. Causal interaction with some of the objects that language references.* If my memory of the movie of Helen Keller’s life is accurate, my point is illustrated by the scene where Helen “gets it” after the teaches dips her hand in water and signs “water” on her palm and then repeats that with other objects.

    2. Participation in a community of language users. That would include shared experiences and discussion regarding some parts of the world the language is used to refer to. I’d also expect that the entity in question demonstrates that it “knows” it is a separate entity, something part of a community yet separate from it.

    keiths: I’m not talking about a Wittgensteinian private language — just a plain ol’ private language. Perfectly effable, perfectly translatable, but a language whose “user community” has never included anyone but me.

    This is interesting, I think. Do you make a new version of ubby-dubby (call it “keithy-weithy”) private? I mean, if I were to turn English into Walto-Palto by adding “altie-palto” after every syllable, would it be right for me to say that there’s no “community of users” until somebody has sussed this out? Bruce seems to be following Wittgenstein in saying that intentions don’t matter so much–it’s community rules of use that matter.
    So I think I’d just beg the question here and say that the “privacy” of the language is a function of the extent to which there are no community rules regarding correct usage, and since keiths’ language is not only translatable into English but has actually been derived from it, Voila! Not private.

  42. Bruce,

    One can be a physicalist in some sense while still holding positions like mine on content externalism. I see my views as similar to Dennett’s, as expressed in his Swampman and twin earth sections of IP:
    – It is questionable practice to draw scientific/empirical conclusions from thought experiments like Swampmen and Cow Sharks

    Dennett really dropped the ball on that one. An exchange of ours from a couple of years ago:

    Bruce:

    Dennett in Intuition Pumps dismisses the argument by saying history does matter, and the thought experiment is too divorced from reality to have force.

    keiths:

    I think he’s copping out. Thought experiments needn’t be realistic to be effective. Twin Earth isn’t very realistic, but Dennett certainly acknowledges its importance. He even refers to his “two-bitser” intuition pump as “the poor man’s Twin Earth”!

    Bruce, today:

    – the normative (not behavioral) aspects of meaning and intentionality depend on context and causal history

    I don’t think so. Suppose that two (non-identical) Swampmen poof into existence, both speaking the same heretofore nonexistent language. The linguistic norms of the language are shared between them, but they depend neither on context nor on causal history.

    Also, you didn’t answer my key question:

    If two physically identical beings are conversing identically in English and having the same subjective experiences, how does it make sense to say that one of them understands English and the other doesn’t?

  43. walto: A

    Hi Walt:
    I was looking again at Quine’s Intensions Revisited paper you mentioned. Quine uses some notation I am unfamiliar with.

    I don’t know what Quine meant by A . ⊃ B or A . ⊃ . B. I have not seen that anywhere else. (It does seem Quine uses . for AND elsewhere, but I don’t know if that is relevant).

    Do you happen to know what he uses it to mean?

  44. keiths:

    I’m finished for now with this topic, Keith; but if you want to go another round with someone, SEP has more here and here (last paragraph of 4.1), with arguments both for and against me.

    In particular the argument in the first link from those who deny phenomenal externalism is bothersome for me (since externalism about phenomenality is a vexing position for me, at least it sometimes seems that way).

  45. walto,

    Do you make a new version of ubby-dubby (call it “keithy-weithy”) private? I mean, if I were to turn English into Walto-Palto by adding “altie-palto” after every syllable would it be right for me to say that there’s no “community of users” until somebody has sussed this out?

    Yes, although it obviously wouldn’t take very long for someone to figure it out if they had examples to study. But if I were using the language only to write in my journal, as suggested earlier, and if I were successfully keeping the journal away from prying eyes, then yes, there would be no “community of users” for the duration. It would truly be a private language. If you asked anyone but me what the rules were, they wouldn’t be able to tell you.

    Bruce seems to be following Wittgenstein in saying that intentions don’t matter so much–it’s community rules of use that matter.

    And in my case there are no “community rules of use”, unless you consider me a community of one.

    So I think I’d just beg the question here and say that the “privacy” of the language is a function of the extent to which there are no community rules regarding correct usage, and since keiths’ language is not only translatable into English but has actually been derived from it, Voila! Not private.

    Even if that were true, it would only apply to private languages that are derived from non-private languages. Suppose I build my language de novo?

  46. keiths: Even if that were true, it would only apply to private languages that are derived from non-private languages. Suppose I build my language de novo?

    That’s just what Wittgenstein says can’t be done. Some character in one of the Alice books tells Alice that we can mean anything we want: with words, it’s a matter of who’s to be boss. But Wittgenstein didn’t agree with that. (I don’t think Carroll did either–Alice is certainly nonplussed.) Wittgenstein spends a lot of time on the issue of private rule-making. He comes down against.

  47. walto,

    Wittgenstein spends a lot of time on the issue of private rule-making. He comes down against.

    Wittgenstein was skeptical of linguistic rule-following for public languages as well as private.

    But if you do have an argument against private rule-making, whether from Wittgenstein or anyone else, could you summarize it?
