2,657 thoughts on “Elon Musk Thinks Evolution is Bullshit.”

  1. KN, to walto:

    The criticism of the Cartesian theater has nothing to do with theories of perception per se.

    walto,

    Now that both KN and I are telling you this, don’t you think it’s time for you to go back and read Dennett’s account of the Cartesian Theater, but with a little more care this time?

    You’re getting Dennett wrong, just as you got Descartes wrong during your meltdown yesterday.

  2. KN,

    To that extent, I think keiths is consistent when he subscribes to Dennett on intentionality and consciousness but accepts sense-datum theory.

    I’m not advocating sense data theory; in fact, I’ve been careful to avoid the phrase “sense data” (except to point out that walto is confusing it with “sensory information”).

    I’m simply saying that the brain receives sensory information.

  3. BruceS: One can claim that SM lacks the causal history needed to provide the required norms for intentionality, hence it cannot have beliefs of any sort, hence it cannot have knowledge.

    Just re-browsing the comments. I think this focuses on a point that I’ve been trying to make. Causal history, with regard to human memory for instance, exists in the present in the brain (in the physical brain state). If you allow the logical possibility of (re)-creating identical copies, then if the copy is perfect, the copy has exactly the same brain state as if that causal history happened and there is no way to tell by examining the copy, say by interviewing, whether that memory was “real” or “copied”.

    Other than refuting dualism, still not sure what the point of swampmen and p-zombies is.

  4. walto: I don’t think so. I wouldn’t care about the other one too much, and the other one wouldn’t care too much about me.

    Well, indeed. I suspect we might try to kill each other to get our life back. The fact (heh) is that we would each utterly honestly believe we were the true Alan and that the other was a copy.

    What good would it do YOU if somebody were to create a replica of you at the moment of your death?

    If I try to take the scenario seriously for a moment, my point is that a perfect copy, a physically identical entity, would be me. I don’t think there is any ME other than the physical state of my body at the present moment.

    It might do your family and friends some good (especially if they didn’t realize)–but it wouldn’t be you living longer–just a copy.

    How could they realise? And it’s not just a copy: at the moment of (re)-creation, I am me. (If such thought experiments make sense at all.)

  5. keiths: As I keep pointing out, the laws of physics don’t “care” about history. They just care about the current physical state plus any interactions with the environment.

    Precisely! (That cost a little effort 🙂 )

  6. Alan Fox: Just re-browsing the comments. I think this focuses on a point that I’ve been trying to make. Causal history, with regard to human memory for instance, exists in the present in the brain (in the physical brain state). If you allow the logical possibility of (re)-creating identical copies, then if the copy is perfect, the copy has exactly the same brain state as if that causal history happened and there is no way to tell by examining the copy, say by interviewing, whether that memory was “real” or “copied”.

    Other than refuting dualism, still not sure what the point of swampmen and p-zombies is.

    But more than one causal history can produce the same brain state. Or, if you accept the terms of SM, a given brain state can happen by blind luck.

    That is why brain state alone cannot tell us about the reality of memory.

    That is also why brain state alone cannot tell us what SM means by “water”. Does it mean water or twater? Or nothing at all if we have no reason to specify the context, i.e., earth versus twin earth versus the many other possibilities that would produce the same brain state.

    If SM had a causal history associated with our earth, then it might be appropriate to say we should use the norms of our earth to determine what it means by “water”. But it does not.

    Now Davidson (and I) extend that point to say none of the structure with which it produces language-like behavior has a causal history that can be thought of as learning and using a language, therefore we should question whether any of its predicted behavior would constitute meaningful use of language.

    I admit that is a bigger step. One could accept the water/twater stuff but still say it does inherently understand English because the causal structure it has will produce successful-enough language behavior on either earth or twin earth or all the various other such possibilities.

    BTW, there was a causal history behind the “Tunnel” question, but it would not be apparent if you have not seen the series. In memory of Dr Liddle and her site rules, I say no more.

  7. BruceS: I admit that is a bigger step. One could accept the water/twater stuff but still say it does inherently understand English because the causal structure it has will produce successful-enough language behavior on either earth or twin earth or all the various other such possibilities.

    Bingo! You found me!

  8. BruceS: But more than one causal history can produce the same brain state.

    Possibly, but if one allows the possibility (as a thought experiment) of absolutely identical physical copies of a living human, they will have identical brain states and, thus, identical sets of memories.

    Or, if you accept the terms of SM, a given brain state can happen by blind luck.

    Not sure if there is an agreed set of terms!

  9. BruceS: BTW, there was a causal history behind the “Tunnel” question, but it would not be apparent if you have not seen the series. In memory of Dr Liddle and her site rules, I say no more.

    I can probably persuade a friend to send me downloads (still available on SKY) of the most recent series if that will help decrypt! 🙂 I was impressed by Clémence Poésy in Birdsong. Very intense.

  10. BruceS: Now Davidson (and I) extend that point to say none of the structure with which it produces language-like behavior has a causal history that can be thought of as learning and using a language, therefore we should question whether any of its predicted behavior would constitute meaningful use of language.

    This is my issue with thought experiments. Allow the possibility of perfect copies of complex dynamic living entities and you have to accept there is no way to tell. Where it gets you is another question.

  11. walto: Bingo! You found me!

    Yes, I understood that pretty early in the exchange, sorry if I did not make it clear.

    The exchange started with the Chinese Room. In that context, I don’t think the system as specified by Searle’s original formulation understands, but I do think that a system which created such a rule book by learning and using language in a language community would understand.* I also think that an entity that was given the rule book then used it and updated it for a sufficiently long time would understand.

    That was the starting point of the conversation in this thread.

    The rest of what I say in the thread is meant to stay consistent with that intuition.

    (ETA: I have noticed the issue with the rule book in Searle’s original formulation having a causal history. Probably needs to have been created at random to fully meet SM conditions).

    And I won’t deny there is some intellectual fun to be had from maintaining such an unintuitive position. But I am sincere in thinking it has merit.

    ——————
    * Of course, I don’t think that is even close to a good model for understanding language and the human mind.

    And speaking of rule following: how would Kripkenstein’s rule-following paradox fit in? In particular, would any solution to it apply to SM at its moment of creation?

    Or speaking of behavioral-based attribution of meanings, how would stimulus meaning fit in? And would Quine’s solution to the issues he raises in Ontological Relativity apply to SM?

    Interesting issues that occurred to me but which we should probably save for an intellectually-rainy day.

  12. Alan Fox: This is my issue with thought experiments. Allow the possibility of perfect copies of complex dynamic living entities and you have to accept there is no way to tell. Where it gets you is another question.

    Yes, that is a good point.

  13. Alan Fox: I can probably persuade a friend to send me downloads (still available on SKY) of the most recent series if that will help decrypt! I was impressed by Clémence Poésy in Birdsong. Very intense.

    Only if you are interested in what I might be driving at. But the brief summary of the French detective’s character in Wiki should do.

  14. @ walto

    You asked upthread about earlier TSZ posts and discussions. If you go to your dashboard, click posts, then published posts, you should be able to select categories. “Mind and Brain” finds most on the subject, I think. Lizzie wrote one on consciousness frinstance: Conching

    ETA: Zombie Fred

    quote:

    If a zombie robot (Fred) behaved exactly like a conscious person, to the point of being indistinguishable from a conscious person, Fred would necessarily be as conscious as a conscious person

  15. keiths: So what exactly is this metaphysical tether? How do you know it exists?

    That gets to the nub of the issue so I will respond.
    I’ve said from the start that my position depends on semantic externalism. Brain state alone does not determine meaning. I am not saying that meaning does not supervene on the physical (whatever that is). For Burge scenarios, it supervenes on the brain states of the linguistic community; for Putnam and twin earth scenarios, it supervenes on those brain states and on their physical context. For me supervenience does not imply reducibility to the laws of physics.

    The claim that what a single person means will adjust to fit its context is an extension of that viewpoint. I’ve also maintained that understanding is not yes/no, and being vague about the timeframe for that adjustment is consistent with that position.

    FWIW, I interpret Dennett as agreeing with me (ETA: on my answer to the metaphysical tether, not necessarily my entire position). In IP Ch 29 he raises the horses/shmorses issue without explicitly answering it. Now I take his answer at the end of the chapter, to the question of where our intentionality derives from, to be that it derives from our evolutionary history. By extension, a person’s meaning for “horse” derives from his or her history in a linguistic community and its physical context.

    I take the constraints on meaning he talks about in the next (Crossword) chapter on Quinian indeterminacy to include that linguistic and physical context.

    Consider this passage from him; in particular, how it applies to the semantics of human language.

    There is no way to capture the semantic properties of things (word tokens, diagrams, nerve impulses, brain states) by a micro-reduction. Semantic properties are not just relational but, you might say, super-relational, for the relation a particular vehicle of content, or token, must bear in order to have content is not just a relation it bears to other similar things (e.g., other tokens, or parts of tokens, or sets of tokens, or causes of tokens) but a relation between the token and the whole life and counterfactual life of the organism it serves and that organism’s requirements for survival and its evolutionary ancestry. (Dennett, 1981, as reprinted in The Intentional Stance, 1987, p. 65)

  16. BruceS: I am not saying that meaning does not supervene on the physical (whatever that is).

    I’d suggest the physical is everything that can be known by experience, observation and experiment, everything for which there is evidence of existence – or as I may have remarked – what I call reality. The rest is imaginary. But that doesn’t make one a determinist or reductionist, it just makes one not a dualist.

    ETA or reductionist

  17. BruceS: Yes, I understood that pretty early in the exchange, sorry if I did not make it clear.

    The exchange started with the Chinese Room. In that context, I don’t think the system as specified by Searle’s original formulation understands, but I do think that a system which created such a rule book by learning and using language in a language community would understand.* I also think that an entity that was given the rule book then used it and updated it for a sufficiently long time would understand.

    That was the starting point of the conversation in this thread.

    The rest of what I say in the thread is meant to stay consistent with that intuition.

    (ETA: I have noticed the issue with the rule book in Searle’s original formulation having a causal history. Probably needs to have been created at random to fully meet SM conditions).

    And I won’t deny there is some intellectual fun to be had from maintaining such an unintuitive position. But I am sincere in thinking it has merit.

    ——————
    * Of course, I don’t think that is even close to a good model for understanding language and the human mind.

    And speaking of rule following: how would Kripkenstein’s rule-following paradox fit in? In particular, would any solution to it apply to SM at its moment of creation?

    Or speaking of behavioral-based attribution of meanings, how would stimulus meaning fit in? And would Quine’s solution to the issues he raises in Ontological Relativity apply to SM?

    Interesting issues that occurred to me but which we should probably save for an intellectually-rainy day.

    This is an extremely interesting and insightful post, Bruce. I hadn’t considered the connections between, e.g., the ‘system response’ to Searle’s puzzle and Putnam’s stuff on natural kinds and division of linguistic labor before. If that’s mostly original, I think you should consider trying to publish it.

    I also think it’s interesting where our intuitions converge and diverge here. E.g., I think I’ve been more latitudinarian about what it means to understand English than you have been on this thread, but I never felt much love for the systems response, while you indicate you’re ok with it. We both might have some ‘splainin’ to do if there are contradictions hanging around there. (I note that keiths seems fine with taking the behaviorist pellet in both areas. That easy consistency may be a nice consequence of accepting some of the apparent absurdities and other contradictions of his Dennettian Cartesianism.)

    It’s a very meaty post!

  18. BruceS: And would Quine’s solution to the issues he raises in Ontological Relativity apply to SM?

    As you know (since I pointed it out in my rejoinder to Harman and elsewhere), I think that ‘solution’ is a big mess. It won’t work for SM either, I’m afraid.

  19. Quine wrote:

    …nothing happens in the world, not the flutter of the eyelid, not the flicker of a thought, without some redistribution of microphysical states…

    Quine, W.V. 1981. Theories and Things. Cambridge: Harvard University Press.

  20. Alan Fox:
    Quine wrote:

    Quine, W.V. 1981. Theories and Things. Cambridge: Harvard University Press.

    True dat. And non-controversial. What’s hard is figuring out what, if anything of interest, follows from it. E.g., if you take the Kripke-Putnam view of meaning, you can accept Quine’s claim simply by denying that ‘understanding’ is a ‘happening’.

  21. walto:

    I also think it’s interesting where our intuitions converge and diverge here. E.g., I think I’ve been more latitudinarian about what it means to understand English than you have been on this thread, but I never felt much love for the systems response, while you indicate you’re ok with it.

    It’s a very meaty post!

    Not the systems response, no. That is the one I reject. It says the static system which does not interact with the world and does not update the rule book would understand. I don’t think that.

    The robot response is the one I favor. It says interacting with the world and a community is necessary for understanding. It is meatier because, as SEP points out, that would involve making and eating hamburgers, or at least talking to people about that.

    Dennett also relies on robots acting as independent agents in a community to provide an intuition pump for his views on intentionality. But he does not use that as part of his reply to CR, I believe.

    I do have my name on a paper in a refereed journal. The paper was published in the 80s. I had a rather trivial insight about exponential smoothing and forecasting, which a university prof we were using as an external consultant thought worth publishing. The referees thought it was too trivial to publish, but the consultant, who wrote the paper and simply added my name to the author list, managed to get it by that referee, based on his reputation in the field, I imagine. I cannot remember his name or the paper’s title. (ETA: Just for a laugh, I searched for au:”my name here” on JSTOR. And there it was!)
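
    (For the curious, the general technique: simple exponential smoothing forecasts a series by blending the newest observation with the previous forecast, weighted by a constant alpha. A minimal sketch in Python, with made-up numbers and nothing from the actual paper, which I can no longer locate:)

        # Simple exponential smoothing: each one-step-ahead forecast is a
        # weighted average of the latest observation and the prior forecast:
        #   s_t = alpha * x_t + (1 - alpha) * s_{t-1}
        # Illustrative only; alpha and the data below are invented.
        def exponential_smoothing(series, alpha=0.3):
            if not 0 < alpha <= 1:
                raise ValueError("alpha must be in (0, 1]")
            forecasts = [series[0]]  # seed with the first observation
            for x in series[1:]:
                forecasts.append(alpha * x + (1 - alpha) * forecasts[-1])
            return forecasts

        demand = [100, 120, 90, 110, 130, 95]
        print(exponential_smoothing(demand))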

    ETA: The supervenience stuff is similar to views KN has promoted here as I understood them, though his detailed explanation and motivation differs, I believe. He would most likely be very unhappy with me omitting the body state. Mea culpa on that.

  22. walto: Kripke-Putnam view of meaning

    I think the understanding of the evolution of language and the evolution of the necessary adaptations for language (and the necessity of social behaviour to encourage both) is hugely more interesting than p-zombies.

  23. BruceS: The robot response is the one I favor. It says interacting with the world and a community is necessary for understanding.

    But is it SUFFICIENT?

  24. BruceS: Not the systems response, no. That is the one I reject. It says the static system which does not interact with the world and does not update the rule book would understand. I don’t think that.

    The robot response is the one I favor. It says interacting with the world and a community is necessary for understanding.

    We have different understandings of this.

    I take the Systems Reply to be, in effect: If the system gets the behavior right, it will have understanding and the understanding will be in the system as a whole.

    Looked at that way, the Robot Reply is just the Systems Reply, but with a broader meaning of getting the behavior right.

    I take Searle’s claim to be: even if you get the behavior right, the system will not have understanding.

    My personal view: if you get the behavior right, of course the system will have understanding. However, you will never get the behavior right so this is mostly an argument about angels dancing on heads of pins.

  25. The Turing test suggests that an entity capable of interacting in a language community is sufficient.

    I suspect there are good reasons why there are no obvious examples of language competent robots or programs.

    By competent, I mean capable of interacting in a broad way. It’s pretty easy to emulate a mentally or emotionally handicapped person, and not too difficult to be an encyclopedia, but this is not really human interaction.

  26. Alan Fox: I think the understanding of the evolution of language and the evolution of the necessary adaptations for language (and the necessity of social behaviour to encourage both) is hugely more interesting than p-zombies.

    P-zombies are not part of what is involved in the discussion as I understand it, although it is true that in both the PZ and SM cases we have creatures that share a physical state with some “real” human.

    For Putnam, the issue is meaning which he claims does not supervene on brain state, not even the brain states of a whole linguistic community. But Putnam does not say meaning-properties do not supervene on those properties studied by physics (he’d likely not be happy with the phrasing). However, he does think they cannot be reduced to descriptions in physics or in any natural science for that matter.

    For zombies, the issue is whether qualia supervene in all possible worlds on physical properties as studied by current physics. Chalmers says no. Hence physicalism defined in terms of what current physics studies is false according to him.

  27. petrushka:
    The Turing test suggests that an entity capable of interacting in a language community is sufficient.

    Yes, out of consistency I also reject the Turing test for understanding, unless we specify that the entity under test has a relevant causal history. Not sure if that implies I reject it for human intelligence as well, which I think is how the original Mr T intended it. Maybe he saw them as the same.

  28. Neil Rickert: We have different understandings of this.

    I take the Systems Reply to be, in effect: If the system gets the behavior right, it will have understanding and the understanding will be in the system as a whole.

    Looked at that way, the Robot Reply is just the Systems Reply, but with a broader meaning of getting the behavior right.

    Actually, if the book had a causal history relating to a Chinese-speaking community, I might have to accept that the systems reply works. To fully reflect the SM scenario, you’d have to specify that the book exists purely by blind luck.

    Anyway, good to see you are paying attention. Did you get the H/T?

  29. BruceS: Yes, out of consistency I also reject the Turing test for understanding, unless we specify that the entity under test has a relevant causal history. Not sure if that implies I reject it for human intelligence as well, which I think is how the original Mr T intended it. Maybe he saw them as the same.

    Would you think that, given determinism, the relevant causal history would be enough to drop the Turing Test? I.e., same relevant causal history, same behavior, so no further test is necessary? Or do you take the term “same history” to be consistent with such identical history happening to two entirely different things (say, a human being and a bowl of chalk dust)?

  30. walto: But is it SUFFICIENT?

    Yes, I think so. In other words, I am taking a functionalist view of understanding as long as the causal relations captured in the functionalism have the right causal history. Blind luck won’t do. I realize that hurts one’s intuition. But I think it is needed to explain how understanding and meaning can work for real-life cases.

    Of course, it is a different, empirical question about how “deep” one has to go into the specific biochemical structure of humans in order to reproduce that causal structure.

    And I am not saying that I am a functionalist about phenomenal experience.

  31. Neil Rickert: It seems to me that we ascribe “understanding” based on how well people respond.

    Can you make that view consistent with, e.g., an understanding paralytic?

  32. Bruce,

    That is also why brain state alone cannot tell us what SM means by “water”. Does it mean water or twater? Or nothing at all if we have no reason to specify the context, i.e., earth versus twin earth versus the many other possibilities that would produce the same brain state.

    I would say it means both. Or rather, that it sorta means both, which, oddly enough, is a more precise statement.

    In a true (but impossible) semantic engine, the ‘water’ state would mean either water or twater, but not both. The “metaphysical tether” would be well-defined, and it would attach to twater for the Twin Earthlings, but to water for us.

    But we are syntactic engines — physical systems that operate according to the laws of physics, which do not take meaning into account. The metaphysical tether, if it exists at all, is invisible to physics. There is no fact of the matter about whether a two-bitser’s state means “detected a quarter” versus “detected a quarter balboa”; it’s a syntactic engine that responds equivalently to both. Likewise, there is no fact of the matter about whether our ‘water’ brain state really means water, but not twater, or vice-versa. We respond equivalently to both.
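
    (To make the two-bitser point concrete, here is a toy sketch; the coin figures are close to the real specs but the thresholds are invented. A US quarter and a Panamanian quarter balboa are physically near-identical, so they drive the device into the very same internal state:)

        # Toy "two-bitser": a coin acceptor that tests only physical magnitudes.
        # Nothing in the device fixes whether its accept state "means" quarter
        # or quarter balboa; the physics responds identically to both.
        def accepts(diameter_mm, mass_g):
            return abs(diameter_mm - 24.26) < 0.1 and abs(mass_g - 5.67) < 0.05

        quarter = (24.26, 5.67)         # US quarter
        quarter_balboa = (24.26, 5.67)  # quarter balboa: same physical specs

        print(accepts(*quarter))         # True
        print(accepts(*quarter_balboa))  # True -- same state either way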

    More on this later when I address your response to my post about metaphysical tethers.

  33. walto: Would you think that, given determinism, the relevant causal history would be enough to drop the Turing Test? I.e., same relevant causal history, same behavior, so no further test is necessary? Or do you take “same history” to be consistent with it happening to two entirely different things (say, a human being and a bowl of chalk dust)?

    Based on 60 seconds of thought: given the right causal history, determinism, and the same substrate for interacting with that causal history, we should have enough to predict whether something has understanding of A language.

    If I have created a problem for myself by this position, I suspect you or Keith will let me know!

    Now if we do a switch of linguistic communities, then initially the language I understand would not be the same as that of the new linguistic community.

    Exchanging me and my twin on twin Earth would be an example, as applied in my initial state.

    More fun: remove my brain and add it to a community of envatted brains who speak a vattish “version” of English. Same applies. That is, initially, I understand English, but not vattish.

    This despite the fact that I would predict that envatted me would meet Quine’s conditions for fluid and successful exchanges with the community. So that is another problem for my intuition.

    AFAIK, Quine never talked about SM scenarios, only learning language or translating language in the context of a language community.

    And FWIW, as mentioned in Ch 1 and detailed in Ch 16 of Pursuit of Truth, he moved away from his austere position in W&O, and had to bring empathy and similar evolutionary history into his explanation of the sharing of stimulus conditions for different people as that is involved in stimulus meaning.

  34. Alan Fox:
    BruceS,

    Thanks, Bruce.

    ETA, if I say “Asperger’s” am I getting warm?

    If I said “yes”, would that be specific enough to break the rules?

  35. keiths:
    KN,

    I’m not advocating sense data theory; in fact, I’ve been careful to avoid the phrase “sense data” (except to point out that walto is confusing it with “sensory information”).

    I’m simply saying that the brain receives sensory information.

    It is certainly true that the play of energies across sensory receptors triggers the propagation of neuronal signals across many cortical and subcortical structures, which might be thought of as perturbing the brain’s endogenous dynamics.

    If that’s all you mean by “the brain receives information,” then your claim is a basic scientific truth that hardly anyone would dare to contest.

    However, as I see it, there is a massive gulf between this perfectly acceptable claim and the claim that “any knowledge claim based on the veridicality of our senses is illegitimate, because we can’t know that our senses are veridical.”

    The claim that “we cannot know that our senses are veridical” stands in need of argument that neuroscience alone cannot provide. Indeed, it seems like it has to be an a priori claim, since it would be a bit odd to argue on empirical grounds that we cannot know that the senses are veridical.

    The closest we’ve come to the requisite a priori argument — either here, or in the history of philosophy — is that we can always stipulate an infinite disjunction (‘my senses are veridical unless I’m a brain in a vat, or a sentient AI, or a figment in the mind of God, or . . . ’).

    But this is inadequate to generate the skeptic’s conclusion. All it shows is that the senses are not necessarily veridical, or veridical in every possible world. Indeed, they are not. But it tells us nothing about whether the senses are veridical in this world, the actual world.

    But now notice that conceptual specification alone cannot pick out the actual world from the class of all possible worlds. We need to include indexicals like “here,” “there,” “now,” and “then,” and to anchor those terms we need to be using our senses.

    Two further questions:

    1. Does empirical inquiry tend to support the hypothesis that our senses are generally reliable?

    2. Is it viciously circular to vindicate the senses by means of the senses?

    I take the answer to (1) to be “yes”. Sure, there are optical illusions, hallucinations, pathologies, etc. — all of which are identifiable as such by virtue of being placed against a background of general perceptual reliability. And the thesis of general perceptual reliability is supported by a nice consilience of ecological psychology, neuroscience, and evolutionary theory.

    As for (2), I guess the answer is a shrug of the shoulders — a “so what?” As Hume points out, it’s no less viciously circular to vindicate reason by means of reason than to vindicate the senses by means of the senses.

    On the one hand, there’s no deductively valid proof which yields the reliability of the senses or of reason as its conclusion. On the other hand, when we attend carefully to the structure of our actual epistemic practices in everyday and scientific contexts, where the general reliability of our perceptual systems is a background assumption, we find that we are capable of generating new knowledge. So the idea that the senses are generally reliable is perfectly good by pragmatist lights.

    And, as stressed above, the idea that an animal’s senses as perceptual systems are generally reliable about the structure of its environment is perfectly consistent with the idea that the play of energies across sensory receptors triggers the propagation of neuronal signals across many cortical and subcortical structures. As BruceS already noted, the crucial thing here is to keep distinct the personal and subpersonal levels and to stress that both are equally (if you like) real.

    walto: What I know about Dennett on perception I mostly know from his ‘Quining Qualia’–and nearly all of his complaints can also be made against sense-data. Actually, qualia ARE sense-data in spite of being properties according to early Russell. Dennett is very dismissive of these items, just as he is of Descartes and the other Cartesians who join him in relying on such ‘data’ to doubt the existence of cows.

    Yes, I think that Dennett would probably follow Sellars in thinking of sense-data as one of the empiricist versions of the Myth of the Given.

  36. Bruce,

    Yes, out of consistency I also reject the Turing test for understanding, unless we specify that the entity under test has a relevant causal history.

    Yes, accepting the Turing test would be inconsistent. By your lights, Swampman doesn’t understand English even if he’s capable of passing the most demanding fluency test. Understanding English is not an ability; it’s a particular kind of history.

    Swamp Shakespeare can bowl you over with his plays, but he doesn’t understand English. Swamp Jeff Gordon can win at Talladega, but he doesn’t know how to drive a car, or even what a car is. A swamp surgeon can save your life, but he doesn’t know what a human body is or how to operate on one.

    To which one can only respond, “Seriously, Bruce?”

  37. walto:

    My own inclination is to put my money on the nose of Dretske’s representational horse.

    I believe he relied on evolutionary history in his reply to SM, at least for norms for mental representations. From the SEP article on Teleological Theories of Meaning:

    Dretske (1996) argues the case with another imaginary example. Twin-Tercel, a random replica of his old Tercel, comes about as the result of a freakish storm in a junk yard. It is molecule-for-molecule identical to his old Tercel, except that its “gas-gauge” does not move in relation to the amount of gas in its “tank”. We might be tempted to say that the thing is broken, but Dretske says that there is no basis for saying that it does not work because to say that it does not work implies that it was designed to do something it cannot do and it was not designed to do anything. If we should reform our intuitions in the one case, perhaps we should also reform them in the case of Swampman’s intentionality, he says.

  38. Neil Rickert: It seems to me that we ascribe “understanding” based on how well people respond.

    Is “understanding a language” different from being subject to the ascription of doing so by a language community?

    I tried to motivate an answer of “yes, it is” by the art experts example above; in particular, how their ascription of aesthetic understanding to the creator might change given their knowledge of the causal history of the painting’s creation.

  39. KN,

    It is certainly true that the play of energies across sensory receptors triggers the propagation of neuronal signals across many cortical and subcortical structures, which might be thought of as perturbing the brain’s endogenous dynamics.

    If that’s all you mean by “the brain receives information,” then your claim is a basic scientific truth that hardly anyone would dare to contest.

    Anyone, that is, except for walto. He actually thinks it’s an instance of the Cartesian theater fallacy.
