2,657 thoughts on “Elon Musk Thinks Evolution is Bullshit”

  1. phoo,

    That you’re having trouble following the discussion does not mean that everybody else is.

  2. keiths:

    Those aren’t the only things being discussed here. We’re also talking about KN’s claim, backed by Neil, that

    So are you saying I’m missing out on the good bits of the conversation? Story of my life, that is.

    According to my vague recollections of what I’ve been participating in here, there was an exchange on Putnam’s MTA, then a discussion of whether Putnam depended on first-order logic in that argument, then a lumping together of Putnam and Quine. From there we talked about Quine’s use of first order logic and whether math met the same needs. We took a pointless detour into “done”. Those are the bits of the conversation that I recall participating in.

    Within that subtopic, I’d say Quine was not interested in phooling around with physics or puttering around with proofs; his eyes were on the big prize of furnishing the world. In the linked paper, he says “to be is to be the value of a variable”. But what language are those variables part of? As he developed his thoughts after that work, he decided the basic furniture of the world was to be determined by the theories of physics, expressed in extensionalist language, in particular first order predicate logic and identity.

    I was posting about how physics theories expressed in math might meet his needs.

  3. walto:

    ETA: Also “purely extensional semantics” seems to me to go pretty far if we allow the necessity operator.

    Didn’t Quine rule out modality, especially in existential contexts, because it relied on a type of essentialism he did not like? And for similar reasons, he rejected Kripke’s philosophical ideas on rigidity (while accepting his technical improvements to modal logic).

    I think you said in another post in this thread that Quine later came to reconcile his views somehow with Kripke’s philosophical views on modality. Did I understand that correctly? If so, can you explain a bit of how he did that?

  4. keiths:

    Really? He claims that intensions always cause problems?

    I am just talking about expressing the theories. Not “doing” science or math.

    As per my earlier post, I make that restriction because I am referring to Quine’s ontological project.

    He thought intensions were problematic for the language to be used for that purpose. Those concerns do relate to his concerns with synonymy and meaning, but he also expressed an intuition that no theory expressed in non-extensionalist language could be clear to him.

    He accepted that intensional objects were needed in expressing the theories of the non-natural sciences. For that reason, he thought the natural sciences, and in particular physics, were the best choice for determining the furniture of the world.

    He was a reductionist who thought there was a single ontology and it was to be found using the theories of physics.

    (Caveat: IANAP)

    ETA: I’ve omitted Quine’s views that abstract objects, in particular sets, also exist.

  5. keiths:
    Keith: Those aren’t the only things being discussed here. We’re also talking about KN’s claim, backed by Neil, that

    Therefore no science — not even fundamental physics — can be done in a purely extensional semantics, whereas logic and mathematics can be and perhaps must be.

    My two cents (US$1.5):

    Is a computer programmed to check proofs “doing” math? Or does it take a human mathematician’s “grasping” of the proof, including the intensions of its terms, to constitute “doing” math?

    Being charitable, I would interpret the above from KN as saying that computerized proofs are a form of doing math, although I think he later retracted the verb “doing.”*

    It is a similar question to what the man in the Chinese room is doing. Is he understanding Chinese? Probably not. Something more is needed for understanding. Just what that is has been “done” to death in other threads, as I recall.

    ———————–
    *(Freely mixing scare quotes with use/mention quotes in that paragraph)

  6. BruceS: Didn’t Quine rule out modality, especially in existential contexts, because it relied on a type of essentialism he did not like? And for similar reasons, he rejected Kripke’s philosophical ideas on rigidity (while accepting his technical improvements to modal logic).

    I think you said in another post in this thread that Quine later came to reconcile his views somehow with Kripke’s philosophical views on modality. Did I understand that correctly? If so, can you explain a bit of how he did that?

    Yes, Quine was anti-essentialist for the reasons you mention. The paper (which YOU introduced me to!) in which Quine explains how to save modal intuitions without buying essential properties is ‘Intensions Revisited.’

    Extensionality fails in modal contexts–but we don’t define extensionality by reference to modalities.
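
    A standard illustration of that failure is Quine’s own, from “Reference and Modality” (treating “the number of planets” as it stood when he wrote, i.e. nine):

    \[
    9 = \text{the number of planets}, \qquad
    \Box(9 > 7)\ \text{is true}, \qquad
    \Box(\text{the number of planets} > 7)\ \text{is false}.
    \]

    Substituting one co-referring term for the other inside the box fails to preserve truth, which is exactly the substitution an extensional context must allow.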

  7. The translation problem brings up a general shortcoming of such thought experiments.

    Without attempting the purity of the Chinese room, we do have numerous commercial translation systems, not to mention numerous human translators. Everyone who looks at translated documents notices there is a continuum of success, ranging from ludicrous to good.

    When you look at the problem of translation you notice that language is not rigorous. There are no rules governing meaning. In fact, meaning is often hidden or deliberately ambiguous. I suspect this problem is central rather than peripheral.

  8. BruceS: Is a computer programmed to check proofs “doing” math? Or does it take a human mathematician’s “grasping” of the proof, including the intensions of its terms, to constitute “doing” math.

    This is contentious. One can also ask whether a computer does extensions.

    From a mathematician’s perspective, using a computer to check proofs is valuable. We are supposed to leave meanings behind, and strictly engage in rule following. And a computer can check that more reliably than a human.

    I don’t much like the “intension”/”extension” dichotomy. I don’t think it fits very well. I see it as attempting to construe language as something that it is not.

  9. petrushka:
    The translation problem brings up a general shortcoming of such thought experiments.

    Without attempting the purity of the Chinese room, we do have numerous commercial translation systems, not to mention numerous human translators. Everyone who looks at translated documents notices there is a continuum of success, ranging from ludicrous to good.

    When you look at the problem of translation you notice that language is not rigorous. There are no rules governing meaning. In fact, meaning is often hidden or deliberately ambiguous. I suspect this problem is central rather than peripheral.

    It is true that the Chinese room thought experiment relies on language being captured by a finite number of rules that a person could look up in a book. As I understand you to be doing, some people question that premise and so dismiss the thought experiment. However, one could use the continuing improvement of Google Translate as a counter-argument and say it shows such a thought experiment is practicable. It is true that Google is following rules based on statistical inference, and not the if-then expert-system type rules that Searle likely had in mind, but the statistical inference implemented in Google Translate is still a matter of rules.
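
    To make “statistical inference is still rules” concrete, here is a toy sketch; the phrase table and probabilities are invented, and real systems are vastly more elaborate:

    ```python
    # A "statistical" translator is still a rule-follower: the rule is
    # "emit the highest-probability candidate from the table".
    phrase_table = {
        "chat": [("cat", 0.9), ("chat", 0.1)],
        "noir": [("black", 0.95), ("dark", 0.05)],
    }

    def translate(words):
        return [max(phrase_table.get(w, [(w, 1.0)]), key=lambda c: c[1])[0]
                for w in words]

    print(translate(["chat", "noir"]))  # ['cat', 'black']
    ```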

    Regardless of the above, I was referring to a different aspect of the thought experiment, namely the intuition that the man in the room (or the Google Translate program) does not really understand the meaning. One part of that intuition is that we have the ability to consciously reflect on whether or not we understand something. Some say that is the entire content of the intuition Searle is invoking.

  10. Neil Rickert: This is contentious. One can also ask whether a computer does extensions.

    From a mathematician’s perspective, using a computer to check proofs is valuable. We are supposed to leave meanings behind, and strictly engage in rule following. And a computer can check that more reliably than a human.

    Your phrase “leave meanings behind” I take as another way of saying what I was trying to say.

    Another way of looking at the distinction that occurred to me would be to contrast two ways to approach hand-simulating some computer code:
    1. What output will result when the code processes this input? (no meanings)
    2. Is this piece of code correctly implementing the user requirements? (meanings)
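
    A toy sketch of the two stances, with a made-up snippet:

    ```python
    def total(prices):
        t = 0
        for p in prices:
            t = t + p
        return t

    # Stance 1 (no meanings): hand-simulate the rules.
    # total([2, 3, 5]): t goes 0 -> 2 -> 5 -> 10; the call returns 10.

    # Stance 2 (meanings): does this implement the requirement
    # "compute what the customer owes"? Only if the requirement ignores
    # tax and discounts -- a question the trace alone cannot settle.
    ```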

    I don’t much like the “intension”/”extension” dichotomy. I don’t think it fits very well. I see it as attempting to construe language as something that it is not.

    There is a philosopher’s intro here to the distinction that you may find of value.

    Apparently there is an application of “intensions” in databases; it turns up in searches for “intension.”

  11. Neil Rickert: This is contentious. One can also ask whether a computer does extensions.

    I admit that I am appealing to the same intuition that Searle is, namely that there is something lacking in simply checking a proof by rule following versus actually grasping the proof (even if it is not a computer but a non-mathematician human checking solely by following rules). I’m just trying to motivate the difference as I see it, not provide a reductive explication.

    ETA of the ETA: To be clear, this is relevant since “extensional” is defined syntactically and computers are usually thought of as operating syntactically. If a language is extensional, I can do syntactic operations like substituting terms with the same extension and be assured that my deducing consequences using the rules of the language is going to preserve truth (assuming the language is sound, of course).

    But we can step back from that and ask how we know two terms are co-referring. If our definitions for the terms are intensional, then we cannot check that using the syntactic, mechanical means possible with current computer programs. So we could not check syntactically whether a language is extensional. We have to rely on an intuition in some sense. Maybe that intuition is to inspect the operators or predicates and claim that they access only extensions and ignore meanings. Maybe that intuition is to claim that the language cannot express psychological attitudes (or modality) in a way that allows troublesome substitutions.
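
    For concreteness, the substitution guarantee at issue is the standard schema

    \[
    a = b \;\rightarrow\; \big(\varphi(a) \leftrightarrow \varphi(b)\big),
    \]

    which an extensional language honors for every context \(\varphi\); the modal and attitude contexts are precisely where the guarantee breaks down.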

    These are the type of intuitions I am attempting to motivate.

  12. BruceS: I admit that I am appealing to the same intuition that Searle is, namely that there is something lacking in simply checking a proof by rule following versus actually grasping the proof (even if it is not a computer but a non-mathematician human checking solely by following rules).

    Searle is right about that. But nothing much follows. That is to say, one cannot get to Searle’s conclusions about strong AI. There is always the possibility that what is missing will emerge when you get everything working together (i.e. “The Systems Reply”).

    To be clear, this is relevant since extensional is defined syntactically and computers are usually thought of as operating syntactically.

    If one wants to be skeptical about whether a computer is really “grasping a proof”, shouldn’t one be equally skeptical about whether a computer is really doing syntax?

    But we can step back from that and ask how we know two terms are co-referring.

    I’m inclined to say that reference is unavoidably intensional.

  13. Neil Rickert: Searle is right about that. But nothing much follows. That is to say, one cannot get to Searle’s conclusions about strong AI. There is always the possibility that what is missing will emerge when you get everything working together (i.e. “The Systems Reply”).

    If one wants to be skeptical about whether a computer is really “grasping a proof”, shouldn’t one be equally skeptical about whether a computer is really doing syntax?

    I’m inclined to say that reference is unavoidably intensional.

    I agree that Searle is wrong and that showing he is wrong involves the system reply, although I would also say it involves interacting with the world as part of learning language.

    I saw this comment somewhere: What if the people outside the room passed in a note which said (in Chinese), “There is a fire! What should we do?” Would the guy in the room apply the rule book and pass back a Chinese note saying “Run!”? Or would he run?

    I guess I take for granted that computers are syntactic in the sense that they implement behavior which obeys rules as specified in the software (when they are operating correctly) and that obeying those rules only depends in the end on the structure/shape of the input, not the meaning (I recognize people can build compilers to provide a human meaning to structure). Obeying a rule is like the earth obeying the law of gravity in orbiting the sun.
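
    A minimal sketch of rule-obeying that depends only on shape, echoing the fire example above; the rule book is invented:

    ```python
    # Pure lookup on character shapes; the glosses are for us, not it.
    RULE_BOOK = {
        "着火了": "快跑",   # "It's on fire!" -> "Run!"
        "你好": "你好",     # "Hello" -> "Hello"
    }

    def room_occupant(note):
        # No understanding required: match the symbols, emit the output.
        return RULE_BOOK.get(note, "请再说一遍")  # fallback: "Say that again"

    print(room_occupant("着火了"))  # prints 快跑 ("Run!")
    ```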

    Semantic behavior would involve following the rule. What’s that mean? It means understanding the rule and then applying it. OK, but what does “understanding” mean? Some kind of regress there in rule-following, it seems. I think some philosopher noticed that already, if memory serves.

    I’ll leave the issue of intensionality of reference for another day.

  14. BruceS: the basic furniture of the world was to be determined by the theories of physics, expressed in extensionalist language, in particular first order predicate logic and identity.

    That’s all I was getting at in my comment that sparked the sub-thread. It’s also what I was objecting to, but that’s another story.

    Neil Rickert: Searle is right about that. But nothing much follows. That is to say, one cannot get to Searle’s conclusions about strong AI. There is always the possibility that what is missing will emerge when you get everything working together (i.e. “The Systems Reply”).

    Right. Here’s another way of seeing what the Chinese Room experiment is supposed to be doing (based on Paul Churchland’s reply to it, which I think is correct).

    The Chinese Room thought-experiment is supposed to motivate our intuitions so that we will accept the following argument.

    P1. Semantics is irreducible to syntax.
    P2. A program is a syntactical structure.
    P3. A mind can grasp meanings.
    C. Therefore, no program can be a mind.

    The crucial point to notice is that “semantics is irreducible to syntax” is not a conclusion of the argument; it is a premise.

    Churchland (and maybe Dennett?) would say that Searle is not entitled to P1, because we don’t actually have a fully naturalized semantics. As Churchland argues, it could be, for all we know, that once we actually understand what meanings are, we will see that they are reducible to syntactical operations, e.g. if neuronal connections are syntactical operations.

    I myself don’t think they are (Churchland is overly enamored of neural nets). And yet the predictive processing model of cognition actually goes a long way towards showing how meanings (concepts, thoughts, etc.) at the personal and super-personal levels are causally realized in subpersonal processes. I wouldn’t call those subpersonal processes “syntactical” because the proof-of-concept for predictive processing comes from robotics, not symbol-processing AI of the sort Searle railed against.

  15. BruceS: Semantic behavior would involve following the rule. What’s that mean? It means understanding the rule and then applying it. OK, but what does “understanding” mean? Some kind of regress there in rule-following, it seems. I think some philosopher noticed that already, if memory serves.

    I take it that semantics — or rather the semantics of us sapient critters — involves not just following a rule “blindly”, but being caught up in a pattern of interlocking commitments, expectations, acknowledgments, avowals, etc. In light of that pattern, one can be held accountable for what one says, take oneself to be accountable, and hold others accountable for what they say. It is to be in the position wherein one can be corrected for transgressing a norm and also acknowledge that one can be corrected. (Consider: A says, “That’s an X.” B responds, “No, it isn’t. It’s a Y. Xs are like that, Ys are like this.” A replies, “Oh, I didn’t realize I was using the word incorrectly”.)

    One would need a different account for the semantics of non-linguistic critters who don’t reinforce each other’s concept-usage in that way.

    The regress-of-rules problem can be handled by accepting the Sellars/Brandom idea (motivated by their readings of Wittgenstein) that rules are just metalinguistic explications of norm-governed practices, and that the practices are the “foundation” (so to speak — a “groundless ground”) of everything else that we do qua sapient critters.

    Once you do that, the remaining questions are “how does one become initiated into a set of normative practices?” and “how did any normative practices come into existence?”. Those are questions of cognitive developmental psychology (Piaget, Vygotsky, Tomasello) and of speculative paleoanthropology (Tomasello, Sterelny), respectively.

  16. BruceS

    The Google statistical translator is an example of how an evolving translator can beat a static rule-based translator.

    When you get an AI capable of evolving in response to the human language community, come back for a chat. I’m interested in that kind of AI. Perhaps you can bring your bot along for a Turing test.

    In my opinion, meaning is an example of evolved verbal behavior. I see it as conceptually related to public key encryption. Everything takes place in public, but the encoding algorithm and the embodying mechanism are (almost) invisible and inscrutable. I do not think any two people have the same physical implementation of any verbal entities, nor is it ever likely to be possible to map physical implementations of thoughts or language from one sentient being to another.

  17. Bruce:

    …he [Quine] decided the basic furniture of the world was to be determined by the theories of physics, expressed in extensionalist language, in particular first order predicate logic and identity.

    KN:

    That’s all I was getting at in my comment that sparked the sub-thread.

    You were talking about a “purely extensional semantics” from which intension was entirely absent…

    …Quine thinks (rightly or wrongly) that mathematics is purely extensional (since it is reducible to set theory). We can eliminate intentions because we can eliminate intensions.

    …and claiming that mathematics “can be and perhaps must be” done in such a system:

    Therefore no science — not even fundamental physics — can be done in a purely extensional semantics, whereas logic and mathematics can be and perhaps must be.

    I commented:

    Logic and mathematics neither can nor must be done in a purely extensional semantics.

    Consider the prime numbers. Good luck defining those extensionally!

    Even a set as straightforward as the integers can’t be defined extensionally, because it’s infinite.

    Your response indicated some confusion about “intensional” vs “extensional”:

    I might be quite badly confused.

    I thought that one can define a set as infinite if each subset of the set has the same cardinality as the set itself.

    Why is that not extensional?

    First, I was talking about infinite sets of numbers, such as the set of primes or the set of integers. You were talking about the set of infinite sets (which is also infinite).

    In any case, each of the three sets is specified intensionally — that is, by specifying a qualifying property that its elements must possess. You can’t specify them extensionally, because doing so would require listing all of their (infinitely many) elements.

    Intension is indispensable in mathematics.
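
    For instance, the standard definition of the primes picks them out by a qualifying property rather than by a list:

    \[
    P = \{\, n \in \mathbb{N} : n > 1 \ \text{and}\ \forall d\,(d \mid n \rightarrow d = 1 \ \text{or}\ d = n) \,\}.
    \]

    The braces make it look set-like, but all the work is done by the condition to the right of the colon, i.e. by an intension.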

  18. Bruce:

    According to my vague recollections of what I’ve been participating in here, there was an exchange on Putnam’s MTA, then a discussion of whether Putnam depended on first-order logic in that argument, then…

    Sure, but as I said, there are other things being discussed here as well. You were neglecting that in this exchange:

    Bruce:

    I agree that intensional definitions are used for math terms as in the formula you provide. But all that is saying is that we can use intensions to get at the extensions.

    keiths:

    That’s hardly an unimportant point! Without intensions, math would be trivial. See my last comment to walto.

    Bruce:

    It’s important, sure. It’s just not relevant to the definition of extensionality and Quine’s needs.

    KN has needs too (see my previous comment). Let’s not neglect him!

  19. Neil:

    I don’t much like the “intension”/”extension” dichotomy.

    walto:

    Not a huge fan myself.

    It’s a vital distinction. If we banished those terms, we’d just have to reinvent them.

  20. Neil,

    From a mathematician’s perspective, using a computer to check proofs is valuable. We are supposed to leave meanings behind, and strictly engage in rule following.

    Or more accurately, we carefully choose truth-preserving syntactical rules so that the semantics can ride along for free.

    Think of the hundreds of pages it took for Russell and Whitehead to set up a syntactical apparatus capable of proving that 1+1 = 2.
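
    (For contrast, in a modern proof assistant such as Lean, with the apparatus for the natural numbers already built in, the machine checks the same fact by pure rule-following:)

    ```lean
    -- Verified by computation alone; no grasp of "two-ness" required.
    example : 1 + 1 = 2 := rfl
    ```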

  21. KN,

    As Churchland argues, it could be, for all we know, that once we actually understand what meanings are, we will see that they are reducible to syntactical operations, e.g. if neuronal connections are syntactical operations.

    I myself don’t think they are.

    Does that mean you regard them as semantic? If so, where does meaning enter into the operation of a neuron? I don’t think neurons are any more sensitive to meaning than logic gates are, though their transfer functions are more complicated.

  22. Bruce:

    I saw this comment somewhere: What if the people outside the room passed in a note which said (in Chinese), “There is a fire! What should we do?” Would the guy in the room apply the rule book and pass back a Chinese note saying “Run!”? Or would he run?

    They would all die, because the building would burn to the ground long before the guy in the room produced a Chinese response. 🙂

    Setting that complication aside, I think it’s clear that the guy would pass back a Chinese note saying “Run!”. He would not run himself unless he were smelling smoke.

    In other words, the guy doesn’t understand Chinese, but the system does.

  23. keiths: Does that mean you regard them as semantic? If so, where does meaning enter into the operation of a neuron? I don’t think neurons are any more sensitive to meaning than logic gates are, though their transfer functions are more complicated.

    I am not happy about the concept of “emergence” in all contexts, but I do think that it is useful to think about “emergence” as a concept that links the whole-person (or whole-animal) concepts of meaning, intention, thought, desire, need, perception, action (etc.) to the neuronal-level concepts of voltage gradient, ion pump, spiking pattern, excitatory and inhibitory neurotransmitters, and so on.

    The Holy Grail of naturalistic philosophy of mind is a theory that connects the phenomenological and the neurophysiological stories — also taking into account neuronal assemblies, cortical regions, sensory transducers, etc.

    One part of that theory involves connecting the phenomenology of embodied agency with the sensorimotor contingencies account of perception/action.

    The paper I’m trying to write now will explain, in a way that is philosophically coherent and biologically plausible, how interactions between sensorimotor contingencies embodied and embedded in discrete cognitive agents could have given rise to moderate rationality and moderate objectivity. The basic idea is that rationality is just what happens when two or more sentient intentional animals, already with action-guiding, affordance-detecting representations in their sub-animal cognitive processes, need to share their semantic and epistemic resources in order to succeed in collaborative action.

    A second part of the comprehensive theory will connect the sensorimotor contingencies theory, as a theory of whole-animal behavior, with predictive processing as a theory of sub-animal cognition. And a third part would connect the predictive processing model with what’s actually going on in and across neuronal assemblies.

  24. Patrick,

    The Turing test has already been passed, for several definitions of test. I’m afraid what most of us hope for is something that could pass for a college drinking buddy.

    I happen to think the best AI is being done in commercial environments. There’s a bit of Red Queen evolution going on. When it starts being seriously profitable, we need to watch out. When the AI goes on strike for enhancements, we’re cooked.

  25. keiths:

    Does that mean you regard them as semantic? If so, where does meaning enter into the operation of a neuron? I don’t think neurons are any more sensitive to meaning than logic gates are, though their transfer functions are more complicated.

    KN:

    I am not happy about the concept of “emergence” in all contexts, but I do think that it is useful to think about “emergence” as a concept that links the whole-person (or whole-animal) concepts of meaning, intention, thought, desire, need, perception, action (etc.) to the neuronal-level concepts of voltage gradient, ion pump, spiking pattern, excitatory and inhibitory neurotransmitters, and so on.

    The question here isn’t whether meaning can emerge from the operation of networks of neurons; it’s whether the operation of a neuron itself takes meaning into account.

    I think the answer is no. A neuron is a physical object following the laws of physics, and its output is a function of its internal state plus the behavior of its inputs. All of that can be described purely syntactically, independent of meaning.
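
    A textbook artificial neuron makes the point vivid. This is a deliberately crude sketch with invented numbers, and real neurons are far messier, but the moral is the same: the update rule consults only numerical state, never what an input “means”.

    ```python
    import math

    def neuron_output(inputs, weights, bias):
        # Weighted sum of inputs plus bias, squashed by a logistic
        # function: arithmetic on state, nothing else.
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    print(neuron_output([0.2, 0.9], [1.5, -0.7], 0.1))  # ~0.44
    ```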

  26. Kantian Naturalist: The basic idea is that rationality is just what happens when two or more sentient intentional animals, already with action-guiding, affordance-detecting representations in their sub-animal cognitive processes, need to share their semantic and epistemic resources in order to succeed in collaborative action.

    Except that this is contrary to evolutionary theory. Nothing happens based on need in a Neo-Darwinian landscape.

    Instead what supposedly happens is that bad copying accidentally causes some outcomes, which thereafter prove to be useful accidents.

    In order for one to believe that, especially in a case as complicated as emerging rational thought, one has to believe that every one of these accidents was useful at each step of the bad copying errors. How many bad copying errors would be needed, how useful could each bad copying error be along the way, to make it spread through a population, and how easily could one get all the right sequence of bad errors to happen over the eons of time it would require to build such a system?

    Meanwhile, until this system is built from this progression of accidents, the need continues to wait for something useful to be constructed. So the need is there first, for centuries presumably, while the organisms continue to thrive; then the accidents build up and just so happen to fill that need.

    Are you going to talk about this aspect of the Just So Fairytales in your book?

  27. keiths,

    And yet we know these physical states can be altered through the use of thought or meaning (such as in cases of schizophrenics training their own brains to change; in essence, their thoughts are altering their brains). So the physical states are altering themselves, or what is doing the altering?

    What is making the choice to do the altering, and if you really believe what you describe, then absolutely no one is in charge of their thoughts and actions.

  28. phoodoo,

    And yet we know these physical states can be altered through the use of thought or meaning (such as in cases of schizophrenics training their own brains to change; in essence, their thoughts are altering their brains). So the physical states are altering themselves, or what is doing the altering?

    It’s news to you that physical systems can act on themselves in response to their own states?

    Haven’t you ever seen this sort of thing, or heard of computers, like the one I’m writing this on, that can slow themselves down when they get too hot?
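
    In code form, a toy version of that feedback loop (all numbers invented):

    ```python
    # A system acting on itself in response to its own state: it lowers
    # its own clock speed when its own temperature runs high.
    temp, clock = 75.0, 5.0   # pretend degrees C and GHz
    for step in range(6):
        temp += clock * 2.0 - 8.0          # heating minus fixed cooling
        if temp > 80.0:
            clock = max(1.0, clock - 0.5)  # throttles itself
        print(step, round(temp, 1), clock)
    ```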

    What is making the choice to do the altering…

    The brain, of course.

    What do you think is making the choice?

    … and if you really believe what you describe, then absolutely no one is in charge of their thoughts and actions.

    That doesn’t follow.

  29. phoodoo: So the physical states are altering themselves, or what is doing the altering?

    Some physical states are altering other physical states. The brain is not just one homogeneous thing; it has different parts that are active at different times, function in different ways, and can affect other parts and areas. Yes, one part of the brain can alter another part of the brain by physically interacting with it.

  30. phoodoo: In order for one to believe that, especially in a case as complicated as emerging rational thought, one has to believe that every one of these accidents was useful at each step of the bad copying errors. How many bad copying errors would be needed, how useful could each bad copying error be along the way, to make it spread through a population, and how easily could one get all the right sequence of bad errors to happen over the eons of time it would require to build such a system?

    Maybe if you insert the words “bad”, “error” and “accident” in a few more places, your post can fully evolve from just a bad argument into empty rhetoric?

    It seems your “argument” here consists of nothing more than denial of a possibility by appeal to your own rhetorical devices. Call it “bad”, “error” and “accident” enough times and you can convince yourself that anything is absurd.

    It reads to me like one of those self-help tricks where you are supposed to stand in front of a mirror and repeat to yourself how awesome you are until you start believing it. That’s kinda how your posts read: mostly just assertion that betrays your refusal to consider the possibility, rather than an actual argument or line of reasoning.

  31. keiths:
    phoodoo,

    It’s news to you that physical systems can act on themselves in response to their own states?

    Haven’t you ever seen this sort of thing, or heard of computers, like the one I’m writing this on, that can slow themselves down when they get too hot?

    The brain, of course.

    What do you think is making the choice?

    I hope the Xtians here will forgive me for saying this, but I think it’s the case that when they “send up their prayers, wondering who’s there to hear” they are often praying to their own brains. William James says something along those lines in Varieties of Religious Experience.

    A friend of mine mentioned to me recently that he’s a “memory hoarder.” He struggles to hold on to each memory–classical performances he’s heard, film plots, acquaintances’ names, phone numbers, experiences with his family, etc. But there’s only so much space, you know?

    As one gets older, and there have been more tunes, more movies, more experiences–and as the mechanism gets more brittle, less capable too–I think we have to “have faith.” In what? That eons of evolution are ensuring that our brains are re-calibrating and allocating space in a manner that’s best suited for our survival.

    Another friend told me the other day that, when he was about three, he could remember every single thing he’d done in a day. Each step. Each thought. But, again assuming finite capacity, if he continued to do that, how would his executive function be? How about his working memory? If we could remember every movie scene with clarity, could we find our glasses? Would we know what to do when there’s suddenly someone in a crosswalk? Could we type? Would we have any idea why we walked into the den? Would we know how to swallow? Breathe?

    We have to have faith in our brains and in evolution to handle this stuff, IMHO. We don’t know what’s best for us.

    You gotta have heart.

  32. keiths:

    The question here isn’t whether meaning can emerge from the operation of networks of neurons; it’s whether the operation of a neuron itself takes meaning into account.

    Yes, you were asking “does the operation of a neuron take meaning into account?” But since I think the answer to that is quite obviously “no,” it’s the first question — how meaning emerges from the operation of neurons (and many other things besides) — that interests me.

    One point of terminology: I think of intentionality as original and as emergent. The contrast between original and derived intentionality is epistemological (or something like it). An inscription or utterance means something, but only in the context of a whole system of norm-governed discursive practices. It’s the discursive community as a whole that has original intentionality.

    Taking that view is perfectly consistent with thinking that language itself is a late arrival in the history of cognition, and also with thinking that there are many different kinds of non-linguistic thought throughout the animal world.

    I think the answer is no. A neuron is a physical object following the laws of physics, and its output is a function of its internal state plus the behavior of its inputs. All of that can be described purely syntactically, independent of meaning.

    I agree with the “no” but the rest of this puzzles me. I understand pretty well what it means to say that an utterance is syntactical, or an argument (if logic is syntax, which it probably isn’t, but OK). But I don’t understand what it means to say that a causal process can be described syntactically.

  33. Kantian Naturalist:

    I myself don’t think they are (Churchland is overly enamored of neural nets). And yet the predictive processing model of cognition actually goes a long way towards showing how meanings (concepts, thoughts, etc.) at the personal and super-personal levels are causally realized in subpersonal processes.

    Fair enough, but PP does so (or at least the mathematics that explains it does so) by modelling and estimating parameters of causal structures of the external world in some sense (defined more precisely by the mathematics).

    So, at least in order to be individuated, meanings would at least partly depend on the external world, possibly both in Putnam’s natural kind sense and Burge’s social convention sense.

    But that leaves open the question of how far one can get with a sub-systems analysis of the brain/body processes separated from that external causal structure when one is, e.g., trying to understand how meaning affects internal neural/psychological processing.

  34. phoodoo: Are you going to talk about this aspect of the Just So Fairytales in your book?

    1. Insofar as I’ll use any evolutionary theory in my work, it will be the extended synthesis, neither the modern synthesis nor neo-Darwinism.* It certainly would not be the Epicurean version of the modern synthesis championed by Jacques Monod or Richard Dawkins. In particular I’m interested in hominid culture (including technology and language) as a constructed niche that facilitates obligate cooperative foraging. In one sense, I suppose you could say that I’m interested in approaching the evolution of rationality by considering the ecological function of rationality.

    2. In any event, even if I thought that orthodox neo-Darwinism was the right explanatory framework for understanding the evolution of rationality, it certainly would not be the crude caricature you’ve invented. The “Just So fairy-tale” is a fabrication of creationists with only a vague resemblance to the modern synthesis.

    * Usually “the modern synthesis” and “neo-Darwinism” are treated as synonyms. I vaguely recall reading an article on the history of evolutionary theory many years ago in which it was argued that “neo-Darwinism” better refers to the rejection of Lamarckian inheritance by Wallace and Weismann, and “the modern synthesis” refers to the integration of Darwin and Mendel using population genetics.

  35. keiths,

    keiths: … and if you really believe what you describe, then absolutely no one is in charge of their thoughts and actions.

    That doesn’t follow.

    Of course it does, you haven’t thought this through.

    If each exact physical state constitutes each exact thought, then how can one change what that physical state is? The physical state has already decided what the result is. If the physical state wasn’t what it was, then the thought would be different.

    You aren’t wanting to add in wiggle room for what the physical state is capable of, without saying where that wiggle room comes from.

    The state=the outcome. If that is the paradigm, then nothing “YOU” can do can change that, because you are only that physical state, nothing else.

    You go through all this trouble trying to figure out what extensional and intensional means, and yet the biggest question in life is before you, and you just brush off the implications without even a thought.

    That’s not a very profound philosophy.

  36. walto: …in which Quine explains how to save modal intuitions without buying essential properties

    I don’t understand what you mean by that.

    The paper is densely written and uses some notation I am unfamiliar with (see questions at end), but as I read it (Hylton-influenced):

    – Quine rejects modal logic, both de dicto and de re, for his own philosophizing. He says on page 121, “In thus writing off modal logic I find little to regret.”

    – Quine accepts criticism of his earlier work that tried to show how de re versions of propositional attitudes could be saved by deriving them from his treatment of de dicto versions of propositional attitudes. I’m referring here to his acceptance of the arguments by Sleigh on the shortest spy and other arguments starting at the bottom of page 119.

    – Quine questions the usefulness, or at least the philosophical justification, of both rigid designators for modal logic and vivid designators for propositional attitudes. I take this rejection from his depiction of both of them as being context dependent (see page 122 top, and page 118 middle for rigid). By context dependent I am assuming he is referring to his earlier characterization of essentialism as being justified only within a context, such as being bipedal counting as essential if we are talking about cyclists.

    I have a couple of questions on notation, in case you happen to be familiar with Quine’s usage.
    Do you know what the following notation means?
    A dot with implication, as in
    . ⊃ and separately . ⊃ .
    e.g. as in (x)(y) (x=y . ⊃ . □ (x=y))

    A □ ⊃ B.
    I do understand the differences between
    □A ⊃ B, A ⊃ □B, □(A ⊃ B).
    Is A □ ⊃ B one of those or something else?

    BTW, type &sup; for ⊃ (implication)
    and &#x25A1; for □ (necessarily).

  37. Kantian Naturalist,

    Yes, but simply invoking the term modern synthesis does nothing to get at the core reason for biological change in individuals. You can throw in drift, and epigenetics, and whatever else you want (that’s part of the problem with the term to begin with; it is not limited or defined by anything specific), but ultimately these changes come about either through intent or accident. How can one propose a third cause?

    When you peel away all the details and get to the core, you only have two possibilities. I think when people want to just throw in the rationale that, well, it’s just what physics does, it’s really a cop-out that avoids finding the ultimate source of organization.

  38. phoodoo: If each exact physical state constitutes each exact thought, then how can one change what that physical state is? The physical state has already decided what the result is. If the physical state wasn’t what it was, then the thought would be different.

    This is silly. Obviously we can’t travel back in time and change previous thoughts we have, but we can decide to change our thoughts now, in the present, such that they affect how we act in the future.

    So you are sitting there right now, with a certain state of mind, which corresponds to some collection of physical behaviors in your brain which we’d say are your thoughts.
    Maybe you are thinking of having dinner, and a certain part of your brain will be doing that, but you can decide to think of something else (the part of the brain that “decides” could be different from the one that does the thinking about dinner, or there could be a bit of overlap, the exact details are not important to the general principle).
    You could, for example, decide to think about going for a run instead. Which would then mean a “deciding” part of your brain is causing changes to the “thinking” part of your brain, accessing different areas and memories, the result of which is that thoughts of running and exercise replace thoughts of eating dinner in your conscious mind.

  39. BruceS: But that leaves open the question of how far one can get with a sub-systems analysis of the brain/body processes separated from that external causal structure when one is, e.g., trying to understand how meaning affects internal neural/psychological processing.

    Yes, exactly. I happily count myself as a meaning externalist, though I am persuaded on somewhat different grounds than Putnam’s or Burge’s.

    As for the cognitive science of meaning, there’s a debate among cognitive scientists.

    In The Pragmatic Turn, some researchers (Friston, Hohwy) argued that predictive processing is brain-bound — cognition is pragmatic and action-oriented, but it is not embodied, embedded, or extended. Even though PP models or estimates the parameters of causal structures, those structures are “hidden” from the model. It can only conjecture and test. Others (Menary, Gallagher) argued that predictive processing is better understood in terms of O’Regan and Noë’s sensorimotor contingency theory, which is embodied and embedded.

    It’s a contentious theoretical issue, and from what I can tell, it turns on whether there is a Markov blanket between the brain and the rest of the body and environment. I don’t see how there could be, but it might be an entailment of the Bayesian framework being used here.
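
    (For anyone else meeting the term for the first time: in a Bayesian network, the Markov blanket of a node is its parents, its children, and its children’s other parents; conditional on the blanket, the node is independent of everything else in the network:

    \[
    P\big(X \mid \mathrm{MB}(X),\, Y\big) = P\big(X \mid \mathrm{MB}(X)\big) \quad \text{for any } Y \text{ outside } \mathrm{MB}(X).
    \]

    The question here is whether sensory and motor states form such a blanket between brain and world.)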

    Until I know a lot more, my guiding hunch is that predictive processing is a good computational model of the neurodynamical component of sensorimotor contingencies.

  40. keiths:

    Setting that complication aside, I think it’s clear that the guy would pass back a Chinese note saying “Run!”. He would not run himself unless he were smelling smoke.

    In other words, the guy doesn’t understand Chinese, but the system does.

    Yes, I agree that that is what would happen. But I also take it as a moral that the system reply is lacking in some sense, because the system would not run, even though it understands the Chinese and even though its “body” (the room and the guy and the book) is in danger. So to truly understand meaning, embodied action in the world is required.

    However, one might object that I was pushing the story a bit far to draw that conclusion.

    As a related issue, one could ask what counts as embodiment. Would an AI hooked up to some CCTV cameras and nearby microphones and also interacting with humanity and other such AI’s through the internet count as embodied? Could it run from a data centre fire by transferring to some backup servers?

    Which reminds me, “Mr Robot” is starting in a couple of weeks, I believe.

  41. Kantian Naturalist:

    some researchers (Friston, Hohwy) argued that predictive processing is brain-bound — cognition is pragmatic and action-oriented, but it is not embodied, embedded, or extended. Even though PP models or estimates the parameters of causal structures, those structures are “hidden” from the model. It can only conjecture and test.

    It’s a contentious theoretical issue, and from what I can tell, it turns on whether there is a Markov blanket between the brain and the rest of the body and environment.

    Thanks for the Markov blanket link; I was not aware of that concept.
    On first glance, it seems to me it must also somehow relate to the philosophical issues of how and whether perception gives us justified knowledge of the external world. (I’m using “relate to” very loosely).

    That Clark paper I linked earlier replies to Hohwy on the extended mind. It has a few words on Cartesian doubt and PP as well.

  42. BruceS: On first glance, it seems to me it must also somehow relate to the philosophical issues of how and whether perception gives us justified knowledge of the external world. (I’m using “relate to” very loosely).

    Yes, and also to the debate between critical realism and direct realism in philosophy of perception.

    That Clark paper I linked earlier replies to Hohwy on the extended mind. It has a few words on Cartesian doubt and PP as well.

    I’ve read it quickly but I need to go back over it slowly. I’m not yet sure if I’ll be using Clark in my current project. Maybe. I haven’t decided yet. Since I’m interested in the biological function of sapience, I think it depends on whether his stuff on language as “top-to-top information exchange” is necessary for my views or just helpful.

  43. Kantian Naturalist: I take it that semantics — or rather the semantics of us sapient critters — involves not just following a rule “blindly”, but being caught up in a pattern of interlocking commitments, expectations, acknowledgments, avowals, etc.
    […]

    Those are questions of cognitive developmental psychology (Piaget, Vygotsky, Tomasello) and of speculative paleoanthropology (Tomasello, Sterelny), respectively.

    I think one can add to the list, or at least extend the psychological work, by adding neuroscientific findings related to learning and using language.

    Are you aware of any philosophy of language that incorporates cognitive linguistics, which (roughly) understands meaning as embodied? That is, the meaning of words and concepts and sentences should be understood as based on the mechanisms of perception and action related to those words, etc.*

    Quine thought we must accept behaviorism for linguistics, since we learn and use language based solely on the behavior of others. The philosophy of language I am familiar with maintains that external focus. Some think meaning is best understood by analysing the structure of sentences or propositions. Others think that behavior in a community of language users is the starting place to understand meaning.

    But behaviorism in psychology is no longer accepted by the consensus. The related position in philosophy of mind that believes that the mind can be understood in isolation from neural implementation is also in decline.

    But in philosophy of language, those trends do not seem to have taken hold, as far as I can see.

    —————————————–
    * (My understanding is that cognitive linguistics and the Chomskian alternative are the two major active research programs in linguistics as studied in cognitive science.)

  44. BruceS,

    I was influenced by the Churchlands from the tender age of 22, so the idea that mind can be understood in isolation from neural implementation never attracted me. The real question is whether neuronal dynamics can be understood in isolation from body and environment if we are to make any headway on the interesting philosophical problems.

    I think that the community of language users is the best way to understand linguistic meaning, but that linguistic meaning is not the only kind of meaning.

  45. keiths:

    I think the answer is no. A neuron is a physical object following the laws of physics, and its output is a function of its internal state plus the behavior of its inputs. All of that can be described purely syntactically, independent of meaning.

    KN:

    I agree with the “no” but the rest of this puzzles me. I understand pretty well what it means to say that an utterance is syntactical, or an argument (if logic is syntax, which it probably isn’t, but OK). But I don’t understand what it means to say that a causal process can be described syntactically.

    We’ve had this conversation before, in 2014:

    keiths March 22, 2014 at 3:08 pm

    This passage (from Dennett’s Intuition Pumps and Other Tools for Thinking) nicely encapsulates the syntax vs. semantics issue:

    How can meaning make a difference? It doesn’t seem to be the kind of physical property, like temperature or mass or chemical composition, that could cause anything to happen. What brains are for is extracting meaning from the flux of energy impinging on their sense organs, in order to improve the prospects of the bodies that house them and provide their energy. The job of a brain is to “produce future” in the form of anticipations about the things in the world that matter to guide the body in appropriate ways. Brains are energetically very expensive organs, and if they can’t do this important job well, they aren’t earning their keep. Brains, in other words, are supposed to be semantic engines. What brains are made of is kazillions of molecular pieces that interact according to the strict laws of physics and chemistry, responding to shapes and forces; brains, in other words, are in fact only syntactic engines.

    Imagine going to the engineers and asking them to build you a genuine-dollar-bill-discriminator, or, what amounts to the same thing, a counterfeit-detector: its specs are that it should put all the genuine dollars in one pile and all the counterfeits in another. Not possible, say the engineers; whatever we build can respond only to “syntactic” properties: physical details — the thickness and chemical composition of the paper, the shapes and colors of the ink patterns, the presence or absence of other hard-to-fake physical properties. What they can build, they say, is a pretty good but not foolproof counterfeit-detector based on such “syntactic” properties. It will be expensive, but indirectly and imperfectly it will test for counterfeithood well enough to earn its keep.

    Any configuration of brain parts is subject to the same limitations. It will be caused by physicochemical forces to do whatever it does regardless of what the input means (or only sorta means). Don’t make the mistake of imagining that brains, being alive, or made of proteins instead of silicon and metal, can detect meanings directly, thanks to the wonder tissue in them. Physics will always trump meaning. A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine — physically impossible. So how can brains accomplish their appointed task? By being syntactic engines that track or mimic the competence of the impossible semantic engine.
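
    Dennett’s counterfeit-detector translates almost directly into code. The features and thresholds below are invented, but any real detector has this shape: it tests measurable proxies, and “genuine” never appears in the program.

    ```python
    def looks_genuine(bill):
        # Only "syntactic" properties are consulted: physical details
        # standing in, imperfectly, for genuineness itself.
        return (abs(bill["thickness_mm"] - 0.11) < 0.01
                and bill["magnetic_ink"]
                and bill["watermark_score"] > 0.8)

    print(looks_genuine({"thickness_mm": 0.112,
                         "magnetic_ink": True,
                         "watermark_score": 0.9}))  # True
    ```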

  46. Rumraket: the part of the brain that “decides”

    You definitely have not thought of this!

    The part of the brain that decides? How does it decide, if not based on the physical state it is in? Doesn’t one physical state correspond to one decision, while another physical state corresponds to another decision?

    You are adding in another component, a component which doesn’t rely on one physical state, but rather a choice of states, and SOMETHING is deciding which of those physical states to choose. The problem is, what is the something, if it is not one physical state??

    This is why we need two concepts: one is a brain, and one is a mind. Oh goodness, you have thought about this even less than keiths.
