Obscurantism

The subject of obscure writing came up on another thread, and with Steven Pinker’s new book on writing coming out next week, now is a good time for a thread on the topic.

Obscure writing has its place (Finnegans Wake and The Sound and the Fury, for example), but it is usually annoying and often completely unnecessary. Here’s a funny clip in which John Searle laments the prevalence of obscurantism among continental philosophers:

John Searle – Foucault and Bourdieu on continental obscurantism

When is obscure prose appropriate or useful? When is it annoying or harmful? Who are the worst offenders? Feel free to share examples of annoyingly obscure prose.

408 thoughts on “Obscurantism”

  1. Bruce,

    Based on my reading of that, I think applying “sorta” to neurons is compatible with the agency Dennett attributes to them. Having a “sorta desire” is a sorta want.

    Again, we’re not disagreeing about whether the intentional stance can be applied to neurons. You, Dennett and I all agree that it can.

    The disagreement is over how it can be applied to neurons. You said, incorrectly paraphrasing Dennett:

    …that one should take the intentional stance towards neurons, because neurons want to join coalitions with other neurons to “push” their point of view (ie their source of activation) towards “brain fame” = consciousness.

    I see that as an overreach, because neurons aren’t concerned with coalitions, “brain fame”, or consciousness.

    I go with Dennett’s more modest formulation:

    At the cell level, the individual neurons are more exploratory in their behavior, poking around in search of better connections, changing their patterns of firing as a function of their recent experience.

  2. Neil,

    A flip-flop is in a stable state. The electrical flows are what keep it in that stable state. From the point of view of the stable process, the electrical flows can be reasonably said to be meaningful, though not in a conscious sense.

    The flop remains in its current state because the flows and voltages are there, not because they mean anything. Physics doesn’t care what (or whether) they mean.

    The flop output might mean “cache request outstanding”, “CPU throttled”, “interrupt pending”, or thousands of other possibilities. None of it matters to the physics.
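    That indifference of the dynamics to any assigned meaning can be sketched in a toy simulation (Python; the idealized SR latch below is my own illustration, not real electronics):

```python
# Toy SR latch: the update rule consults only input and output
# levels (0/1), never what the stored bit is taken to mean.
def sr_latch_step(q, s, r):
    """One update of an idealized set-reset latch.

    q:    current output (0 or 1)
    s, r: set and reset inputs (0 or 1)
    """
    if s and not r:
        return 1      # driven high
    if r and not s:
        return 0      # driven low
    return q          # inputs idle: feedback holds the current state

q = sr_latch_step(0, s=1, r=0)    # set the latch
assert q == 1
q = sr_latch_step(q, s=0, r=0)    # inputs released: the state persists
assert q == 1

# Any of these labels can be attached to q without touching the dynamics:
labels = ["cache request outstanding", "CPU throttled", "interrupt pending"]
```

    The function runs identically whichever label we attach to its output, which is the sense in which the physics “doesn’t care”.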

  3. Bruce, to walto:

    I am not sure if you plan to pursue this right now, but if you do, I’d be interested in your thoughts on a disagreement between Dennett and Dretske.

    In that page 7 paragraph I noted, Dretske states he agrees with Millikan but that Dennett does not agree with him (Dretske). That confused me since Dennett seems to be in close agreement with Millikan.

    Bruce,

    I think the main difference between Millikan and Dennett is that for Millikan, the causal history is all-important; it determines the true meaning of a representation. So for her, the word (or concept) ‘water’ in the mind of an Earthling truly means H2O and only H2O. Transport the Earthling to Twin Earth and ‘water’ still means H2O only, not the functionally equivalent local liquid XYZ.

    For Dennett, the meaning depends on the context. Transport the Earthling to Twin Earth and ‘water’ changes its meaning. It now refers to the local liquid. Likewise, transport the two-bitser to Panama and state Q now means “saw a quarter-balboa” instead of “saw a quarter”.

    However, that makes it all the more mystifying that Dennett chooses to punt on the Swamp Man question. He argues (correctly, I think) that the two-bitser’s Q state can mean “saw a quarter-balboa”, despite the fact that there is no causal connection between quarter-balboas and the two-bitser’s design. It seems that he could just as easily argue that the Swamp Man’s concept of “dog” means the same thing as everyone else’s, despite not being causally connected to real dogs.

  4. keiths:

    I think the main difference between Millikan and Dennett is that for Millikan, the causal history is all-important; it determines the true meaning […]

    For Dennett, the meaning depends on the context.

    Keith:

    I understand Dennett somewhat differently.

    Based on my initial reading of the paper I linked in the post to Walt, as well as the Quinian crossword puzzle chapter in IP, I think Dennett believes meaning is indeterminate in principle. We can usually, however, determine that there is only a single possible meaning by imposing the applicable, simultaneous constraints, such as the evolutionary history of an agent in its original environment. I then read him as saying that those constraints don’t apply in the twin water thought experiment, so that meaning to the transported person is best thought of as indeterminate.

    On the other hand, Dennett says Dretske still thinks there is an answer to the meaning question on twin earth (either “water” means the same thing to the transported person as it does on earth, or it does not). Dennett says that implies that Dretske does believe in original intentionality.

    Since Walt has access to Dretske’s full book and I do not, I wondered if Walt read Dretske the way Dennett does.

    I have not read enough Millikan to comment on her position, which I suspect is quite subtle. I do think her concept of “proper function” relates to those constraints that Dennett mentions.

    My understanding of the history of the relation between her ideas and Dennett’s is that while she asked for Dennett’s input on her initial papers, it is now she who is doing the heavy philosophical lifting on these issues. Dennett now follows her lead. Hence I was surprised that Dretske, at least in the brief passage I can access, implies that he can agree with Millikan and not Dennett.

  5. I read the “two” papers (most of one of them is included verbatim in the other), though I skimmed the last half-dozen pages of the longer one. Dennett covers a tremendous amount of ground at a dizzying pace.

    He responds to my threshold concern that the whole discussion is a matter of the genetic fallacy in his very brief discussion of Kripke toward the end. My thought was that “derived from” shifts from “deducible from”–when we talk about semantics being derivable from syntax–to “is a causal descendant of” when he discusses original and secondary intentionality, evolution, etc. And, it seemed to me that Dretske makes the same error.

    That Dennett is aware of this complaint becomes clear first in his discussion of Fodor, and, more pointedly, in his discussion of Kripke. But he goes so fast that I don’t have a clear sense of what to think about the matter. I’d probably have to read one of his books to get a more step-by-step treatment of the issues from him.

    When I get fairly deep into these kinds of quagmires, I tend to get increasingly mysterian about the whole matter, but I can sometimes fight my way out with the help of nearby shmorse.

  6. Neil Rickert: At the level of physics, the operation of the computer is semantic. It operates on electrical charges or currents according to their real world properties. So that’s completely semantic.

    I’m unclear on how or even if representations fit into your use of the concepts of meaning and semantics.

    For example, there is a natural meaning in how smoke means fire and one could say smoke represents fire. Similarly, there is a meaning we impose on representations captured by word symbols.

  7. keiths:
    You’re both misunderstanding the meaning of ‘syntactic’ in the context of these discussions.

    Keith:
    I raised the points as a teaser for the CR link where other concerns with Searle’s argument are discussed.

    In particular, two issues:
    1. Focusing on syntax can tempt one to ignore the fact that computers are physical entities executing a process. Physically, they don’t follow rules at all; they obey the laws of physics.

    2. For computers, we design/impose the rule following interpretation on the entity operating according to physics. But in doing so to explain intentionality/meaning, we have to think carefully about the entity that is germane to the issue. Yes, the CPU follows rules. But the meaning is not in the CPU. It is in the virtual entity created from the CPU, the rules, and the process of following them. Similarly, meaning is not in the brain. It is in the mind/person and its causal interactions with an environment.

    That is my understanding of the systems reply to Searle.

    More at the CR link. I won’t attempt to do more to repeat what is already there.

  8. walto:

    He responds to my threshold concern that the whole discussion is a matter of the genetic fallacy in his very brief discussion of Kripke toward the end. My thought was that “derived from” shifts from “deducible from”–

    I don’t follow your concerns in detail, but I have the impression that whereas you want to deduce or possibly derive meaning from syntax, Dennett et al want to explain how meaning/intentionality arise from the evolution of an agent and its causal interactions with the environment.

    “Derive” versus “explain”. Formal logic versus scientific, ampliative explanation. Maybe the issue is that you have a different goal than Dennett does.

  9. BruceS: I don’t follow your concerns in detail, but I have the impression that whereas you want to deduce or possibly derive meaning from syntax, Dennett et al want to explain how meaning/intentionality arise from the evolution of an agent and its causal interactions with the environment.

    “Derive” versus “explain”. Formal logic versus scientific, ampliative explanation. Maybe the issue is that you have a different goal than Dennett does.

    Right.

    I think, though, that Dennett argues that it is a mistake to claim that those two can be kept separate, and, I take it, Dretske agrees with him about that (and Kripke doesn’t). But that’s where Dennett goes too quickly for me. I like to see arguments in a numbered premise format so I can see if I think they are actually sound. I don’t want to say that Dennett hand-waves, because I don’t want to suggest he’s actually done anything wrong, but he certainly flies. His differences with Kripke and Fodor are important: if we want to assess who we think is right (if anybody), I think we need to tread slowly. Dennett dispenses with them both in a couple of pages.

  10. BruceS: I’m unclear on how or even if representations fit into your use of the concepts of meaning and semantics.

    I’m going by Wittgenstein’s “meaning is use”, interpreted rather broadly.

    If a stable process is using signals to maintain its stability, I take that to be semantic (or proto-semantic, if you prefer). Its meaning might only be about internal states. My view is that meaning starts with self and expands outward to eventually involve world states that impact internal states.

  11. Neil Rickert: I’m going by Wittgenstein’s “meaning is use”, interpreted rather broadly.

    If a stable process is using signals to maintain its stability, I take that to be semantic (or proto-semantic, if you prefer). Its meaning might only be about internal states. My view is that meaning starts with self and expands outward to eventually involve world states that impact internal states.

    Would anyone care to interpret this for me?

  12. Neil is, ironically, defining ‘meaning’ so broadly as to render it meaningless. To him, everything is meaningful because it means itself.

    I’ll stick with the standard definitions, which are far more useful.

  13. Bruce,

    Based on my initial reading of the paper I linked in the post to Walt, as well as the Quinian crossword puzzle chapter in IP, I think Dennett believes meaning is indeterminate in principle. We can usually, however, determine that there is only a single possible meaning by imposing the applicable, simultaneous constraints…

    That’s why I say that to Dennett, meaning depends on context. If a representation picks out one and only one referent in the world, then we can safely say that the referent is meant by the representation. Dennett’s point is that there is no single true meaning. Other referents are possible in principle (XYZ on Twin Earth) or in reality (quarter-balboas in Panama).

    …such as the evolutionary history of an agent in its original environment.

    That idea is from Millikan, not Dennett. Millikan sees the evolutionary history as fixing the true meaning of an otherwise indeterminate representation. Dennett has no problem with representations that shift their meanings, as the two-bitser’s Q-state does when it goes from meaning “saw a quarter” in the US to “saw a quarter-balboa” in Panama. The fact that the two-bitser was designed to recognize quarters, not quarter-balboas, does not mean that the true meaning of the Q-state is “recognized a quarter”, in his view. There is no true meaning; just a couple of meanings that work, depending on context.
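    The two-bitser point can be made concrete in a few lines (Python; the weight test and figures are my stand-ins for the device’s actual mechanism):

```python
# The device's physical test is fixed; the "meaning" of its Q state is a
# context-relative label. The weight check is an illustrative stand-in.
TARGET_WEIGHT_G = 5.67   # US quarters and Panamanian quarter-balboas are,
                         # per the thought experiment, physically alike

def two_bitser(coin_weight_g):
    """Return True (enter state Q) iff the coin passes the physical test."""
    return abs(coin_weight_g - TARGET_WEIGHT_G) < 0.05

# Same state, different gloss depending on where the device is installed:
MEANING_OF_Q = {
    "US": "saw a quarter",
    "Panama": "saw a quarter-balboa",
}

assert two_bitser(5.67)      # Q is entered by quarter and quarter-balboa alike
assert not two_bitser(7.0)   # a slug is rejected in either country
```

    Nothing in `two_bitser` picks out one entry of `MEANING_OF_Q` as the true one; on Dennett’s view, that is the whole point.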

  14. keiths, to Neil and Bruce:

    You’re both misunderstanding the meaning of ‘syntactic’ in the context of these discussions.

    Bruce:

    I raised the points as a teaser for the CR link where other concerns with Searle’s argument are discussed.

    In particular, two issues:
    1. Focusing on syntax can tempt one to ignore the fact that computers are physical entities executing a process. Physically, they don’t follow rules at all; they obey the laws of physics.

    The physical level is syntactic, by the philosophers’ meaning of the term in these discussions. That’s why Dennett can say things like this:

    Brains, in other words, are supposed to be semantic engines. What brains are made of is kazillions of molecular pieces that interact according to the strict laws of chemistry and physics, responding to shapes and forces; brains, in other words, are in fact only syntactic engines.

    In these discussions, it’s best to think of “syntactic” as meaning “operating without regard to meaning at the current level or above.” Don’t think of it as strictly involving the manipulation of symbols according to formal rules — that’s true in other contexts, but not here.

    Physics is syntactic because (contra Neil) it operates without regard to any assigned meanings (at any level).

    The human in the Chinese Room operates syntactically with respect to higher levels, because the meanings of the Chinese symbols don’t factor into his manipulations of them. He’s just following the rules.

    Looking downward, however, we can of course see that the human is operating semantically in interpreting the rules as they are expressed in English.

    Searle’s claim is that we cannot build semantics at a higher level from pure syntax at a lower level. Whether semantics exists at still lower levels is irrelevant. The human understands the meaning of English words, but that doesn’t mean that his manipulation of the Chinese symbols depends on the meaning of those symbols.
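    That sense of “syntactic” — manipulation keyed only to the shapes of symbols — can be caricatured in a few lines (the tiny rule table below is invented; a real Room would need vastly more rules):

```python
# A caricature of the Chinese Room: lookup keyed only to the *shape*
# (spelling) of the input string. The rule table is invented.
RULE_BOOK = {
    "你好": "你好，你好吗？",
    "你好吗": "我很好，谢谢。",
}

def room(symbols: str) -> str:
    """Apply the rule book to an input string.

    Nothing here depends on what the strings mean; an operator who
    understood no Chinese could execute this by hand.
    """
    return RULE_BOOK.get(symbols, "请再说一遍。")
```

    Whether the system comprising the operator plus the rule book thereby understands anything is exactly what the systems reply and Searle dispute.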

  15. keiths:
    keiths, to Neil and Bruce:

    Bruce:

    The physical level is syntactic, by the philosophers’ meaning of the term in these discussions. That’s why Dennett can say things like this:

    In these discussions, it’s best to think of “syntactic” as meaning “operating without regard to meaning at the current level or above.” Don’t think of it as strictly involving the manipulation of symbols according to formal rules — that’s true in other contexts, but not here.

    Physics is syntactic because (contra Neil) it operates without regard to any assigned meanings (at any level).

    The human in the Chinese Room operates syntactically with respect to higher levels, because the meanings of the Chinese symbols don’t factor into his manipulations of them. He’s just following the rules.

    Looking downward, however, we can of course see that the human is operating semantically in interpreting the rules as they are expressed in English.

    Searle’s claim is that we cannot build semantics at a higher level from pure syntax at a lower level. Whether semantics exists at still lower levels is irrelevant. The human understands the meaning of English words, but that doesn’t mean that his manipulation of the Chinese symbols depends on the meaning of those symbols.

    In some ways this thread mirrors the one about whether consciousness exists. As you point out, Dennett says,

    Brains, in other words, are supposed to be semantic engines. What brains are made of is kazillions of molecular pieces that interact according to the strict laws of chemistry and physics, responding to shapes and forces; brains, in other words, are in fact only syntactic engines.

    Many people (including Dennett) would say that, well, SOMETHING is a semantic engine. Kripke adds that we know what we are thinking about with a fairly high degree of certainty, even if we know almost nothing about that thing. Similarly, the question about whether we’re conscious seems almost as silly and paradoxical as Moore’s “It’s raining but I don’t believe it.”

    I think one of the baffling things about Dennett’s view is his claim that there’s no fact of the matter regarding what someone means. Even if meaning were context dependent, we’d normally say there is a fact of the matter–it just depends on the context. In context A the fact is that F; in context B the fact is that G. But Dennett seems to weave in the sorites problem (fuzziness), so that nothing clearly means anything (except, I guess, his papers).

    I don’t know if you know Putnam’s cherry/cat paradox: it has a similar effect on me.

  16. walto: Right.

    I don’t want to say that Dennett hand-waves, because I don’t want to suggest he’s actually done anything wrong, but he certainly flies. His differences with Kripke and Fodor are important: if we want to assess who we think is right (if anybody), I think we need to tread slowly. Dennett dispenses with them both in a couple of pages.

    I have that trouble with a lot of the primary literature in philosophy because it assumes the reader is already familiar with the basic positions and the published arguments about them. I am sure Dennett assumes the same about his past books on the issues in this paper.

    If you are interested, Millikan has a more detailed review of her position (which I believe Dennett would basically agree with) in this chapter she wrote for a recent Oxford Handbook (PDF).

    The first few pages are a nice summary of background and motivation for the approach. Then the going gets a bit tougher (ie I have not read past the first few pages).

    No numbered premises, though, I am afraid. There is a numbered diagram. But it is not in the pdf….

    I am not clear on why you referenced the genetic fallacy in your previous post, but the Millikan article does try to explain in more detail the role of evolutionary and learning history.

  17. Neil Rickert: I’m going by Wittgenstein’s “meaning is use”, interpreted rather broadly.

    If a stable process is using signals to maintain its stability, I take that to be semantic (or proto-semantic, if you prefer). Its meaning might only be about internal states. My view is that meaning starts with self and expands outward to eventually involve world states that impact internal states.

    Thanks Neil.

    If you have internal stability, does that imply the need for feedback control mechanisms to maintain that stability? I mention this because such mechanisms will (may?) include a representation.

    For example, I believe we have an internal representation in the brain that is the feeling of thirst; that representation reflects the internal changes that result from the need for water and causes us to drink, which closes the feedback control loop.

    My point is that it is the representation that led to the action, and so it was part of the feedback control mechanism.
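    A toy version of that loop (Python; the variable names and constants are mine, and “thirst” here is just an error signal standing in for the representation):

```python
# Minimal homeostatic loop: an internal variable re-presents the water
# deficit, and it is that variable which drives the corrective action.
def step(level, setpoint=1.0, gain=0.5, loss=0.05):
    """One cycle: sense the deficit, drink in proportion, lose some water."""
    thirst = max(0.0, setpoint - level)   # the internal representation
    level += gain * thirst                # the representation causes drinking
    level -= loss                         # ongoing loss re-opens the loop
    return level, thirst

level = 0.4                               # start dehydrated
for _ in range(10):
    level, thirst = step(level)
assert abs(level - 0.9) < 0.01            # the loop settles near equilibrium
```

    Delete `thirst` and wire `level` straight to the action and you have direct coupling with the environment; keep it and the loop runs through something that stands in for the deficit — which is the distinction at issue.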

  18. walto: Even if meaning were context dependent, we’d normally say there is a fact of the matter–it just depends on the context. In context A the fact is that F; in context B the fact is that G. But Dennett seems to weave in the sorites problem (fuzziness), so that nothing clearly means anything (except, I guess, his papers).

    I don’t know if you know Putnam’s cherry/cat paradox: it has a similar effect on me.

    As I am sure you know, Dennett is following Quine with his indeterminacy of interpretation.

    In Intuition Pumps, Dennett makes his reasoning a bit clearer, I think. He says that indeterminacy is “negligible in practice” because there are so many independent constraints to be satisfied that only one interpretation can fit all of them.

    Here constraints include the evolutionary history and nature of our brains/bodies, our personal histories, our embedding in a linguistic community (ch 30).

    I am not sure if that is what you had in mind by “context”.

    So as I read him, if someone is transported from earth to twin earth, then the meaning of the transported person’s use of “water”, at least right after the transport, becomes indeterminate because the constraints argument does not work for this environment.

    Thanks for the cherry/cat reference. I had not heard of it, but of course Mr. Google has. The paper I found explaining it made use of possible worlds, permutations of objects in them, and isomorphisms. My mathematical background helps me understand that stuff without too much head-hurting. Or at least that math background could help, if it were not so stale…

    By the way, why do people always pick on cats as the stooges for their paradoxes?

  19. BruceS: I am not clear on why you referenced the genetic fallacy in your previous post

    The traditional question about whether reference/meaning can be derived from syntax isn’t a causal question. It’s about how one can break out of what appears to be a closed loop and get to an actual item in the world. But the discussion shifts to which intentionality is “derived” and which original–and suddenly we are talking about causes. That looks like a classic example of the genetic fallacy.

    However, as I said, Dennett is aware of that complaint and responds to it toward the end of the paper when he talks about Kripke and Fodor. I assume Dretske (who also might be accused of that fallacy) would make a similar response. But Dennett goes so fast there. (I haven’t read any more Dretske.)

    Thanks for the Millikan link. I’ve never read a word of her stuff.

  20. walto: I think one of the baffling things about Dennett’s view is his claim that there’s no fact of the matter regarding what someone means.

    Dennett is surely correct about that.

    Facts are objective things. Meaning is subjective. Dennett’s statement would seem to follow.

  21. BruceS: If you have internal stability, does that imply the need for feedback control mechanisms to maintain that stability?

    Very likely, yes.

    I mention this because such mechanism will (may?) include a representation

    I’ll disagree.

    It seems reasonable to say that feedback is representational. But when you say “a representation” you are implying that representation has to come in discrete units. And I disagree with that.

  22. Neil Rickert:
    It seems reasonable to say that feedback is representational. But when you say “a representation” you are implying that representation has to come in discrete units. And I disagree with that.

    I don’t think a representation has to come in discrete units. It can be analog, such as the (old-fashioned?) thermostats which represented room temperature in the length of metal strips.

    Whether the brain uses analog (eg frequency of spiking) or digital (eg pattern of neural interconnections) or both types of representation, is an open question still, I believe.

    Just to be clear, I think that some feedback mechanisms involve a representation of some other (re-presented) things. But not all of them. Organisms with no or primitive nervous systems/brains might use ongoing, direct interaction with the environment, rather than representing it.

  23. Neil Rickert: Dennett is surely correct about that.

    Facts are objective things. Meaning is subjective. Dennett’s statement would seem to follow.

    I don’t see that it follows. Say that tastes in foods are subjective; that doesn’t mean that there’s no fact of the matter of whether I like okra or not.

  24. walto: The traditional question about whether reference/meaning can be derived from syntax isn’t a causal question. It’s about how one can break out of what appears to be a closed loop and get to an actual item in the world. But the discussion shifts to which intentionality is “derived” and which original–and suddenly we are talking about causes. That looks like a classic example of the genetic fallacy.

    Is “derived” being used in two senses? One sense is logical: deriving Pythagoras’s theorem from the axioms of Euclidean geometry. Another sense is a word symbol deriving its intentionality from humans’ (purported) original intentionality. Those two seem different to me.

    For that logical sense of derive, asking to derive semantics from syntax seems pointless to me. It is like someone writing down biochemical laws, equations, and blueprints in a book and asking a chemist to (logically) derive life from the book.

    But what a chemist would do to show in a scientifically acceptable way how life is explained by biochemistry would be to build a mechanism which chemically* interacted with the world, according to the blueprints and laws in the book, and claim it was alive.

    —————————
    * I stuck that weasel word “chemically” in to sidestep arguments about whether a computer simulation of those laws and blueprints in some virtual world is alive.

  25. walto: Say that tastes in foods are subjective, that doesn’t mean that there’s no fact of the matter of whether I like okra or not.

    You’ve switched from “taste” to “like” there. There is something objective about what people like, because we see their preferences as public behavior.

  26. Bruce,

    By the way, why do people always pick on cats as the stooges for their paradoxes?

    There’s nothing paradoxical about dogs. What you see is what you get.

    Cats retain an air of mystery.

  27. Neil Rickert: You’ve switched from “taste” to “like” there. There is something objective about what people like, because we see their preferences as public behavior.

    But there is nothing objective about what people mean? Their understandings aren’t expressed in their behaviors?

  28. BruceS: Is “derived” being used in two senses? One sense is logical: deriving Pythagoras’s theorem from the axioms of Euclidean geometry. Another sense is a word symbol deriving its intentionality from humans’ (purported) original intentionality. Those two seem different to me.

    Eggs Ackley! That’s the genetic fallacy claim–that “derive” is being equivocated upon. The fact that semantics somehow “emerges” from physics would seem to be irrelevant to Searle’s CR argument.

    But Dennett claims (and I think Millikan and Dretske would agree) that it really isn’t irrelevant–that’s the confusing part.

  29. walto: But there is nothing objective about what people mean? Their understandings aren’t expressed in their behaviors?

    We know roughly what people mean. We cannot make it precise. That’s the slipperiness of “meaning.”

    I might say: “When John says red he means red.” Doesn’t that belong in the tautology thread?

  30. walto: Eggs Ackley! That’s the genetic fallacy claim–that “derive” is being equivocated upon. The fact that semantics somehow “emerges” from physics would seem to be irrelevant to Searle’s CR argument.

    Searle’s thought experiment with a person simulating a computer to interact with the world is not about syntax, which is simply static structural rules. He has already included rule following and interchange with the world. Hence it is appropriate to start with that scenario in the reply.

    Does rule following behavior which causally interacts with the world produce semantics? The systems reply says it does — within the overall system, not the person following the rules. Searle tries to avoid that by saying that everything could be memorized by that person, but that really changes nothing — there is still a virtual entity in the memorizer’s head which understands Chinese.

    I agree the person would not be aware of those meanings. But then again, we are not aware of most of the mental processes which allow us to understand meaning.

    He later says that a computer could just be splotches on the wall which people interpret as doing computation. But as detailed in the SEP, rule following behavior has to involve causality and in particular counter-factuals, so that type of reply does not help his case.

    What Millikan and Dennett are doing is extending the argument from computers following rules we design to people following rules designed by evolution. And they address subtler issues such as misrepresentation.

    I don’t see how “derived” is a useful term to apply. What Searle’s thought experiment is about is explaining how rule-following behavior which causally interacts with the world produces semantics.

    As I also mentioned in another post, sometimes Searle says a key issue is awareness. But I think that is separate from semantics.

  31. BruceS: Searle’s thought experiment with a person simulating a computer to interact with the world is not about syntax

    I disagree. I think it’s precisely about syntax. The guy in the room memorizes all the rules for intersubstitution of one Chinese expression for another. He may also learn all the rules of grammar. But….does he understand any Chinese? Searle’s question is about getting to reference, to the outside world, from substitution rules of language alone. Similar (but earlier) fights have been about whether ‘”Chicago” means Chicago’ is an empirical statement or not, whether designation rules are strictly formal.

    ETA: As there are some big time experts on tautology in a nearby thread, we probably should ask them this question about designation rules. No doubt they’d also hit this one out of whatever park they’re inhabiting.

  32. Neil Rickert: We know roughly what people mean. We cannot make it precise. That’s the slipperiness of “meaning.”

    I might say: “When John says red he means red.” Doesn’t that belong in the tautology thread?

    Here’s how this has gone so far.

    You said that meaning is subjective, so that we may infer that Dennett is right–that there’s no fact of the matter regarding what somebody means.

    I replied that I didn’t think that followed, since tastes are subjective, but we can’t infer from that that there’s no fact of the matter regarding people’s tastes.

    You then said There is something objective about what people like, because we see their preferences as public behavior.

    And to that I replied that we may also learn what people mean from their public behavior.

    Now you say: We know roughly what people mean. We cannot make it precise. That’s the slipperiness of “meaning.”

    Again, I see no disanalogy between meaning and tastes. In any case the question isn’t whether other people can understand what you mean, it’s whether there’s a fact of the matter regarding what you mean.

  33. BruceS: What Searle’s thought experiment is about is explaining how rule-following behavior which causally interacts with the world produces semantics.

    What matters here is that they are SYNTACTICAL rules only that are being learned and followed. If the rules included designation rules (assuming they aren’t thought to be analytic–see above), then there would be no mystery to meaning and an easy answer to the question “Does the guy understand Chinese?” (He would.)

  34. walto: You then said There is something objective about what people like, because we see their preferences as public behavior.

    Yes, but is there a fact of the matter as to what I mean by “like”?

    The distinction between what I like, and what I mean by “like” is more or less the distinction between reference and meaning.

    And to that I replied that we may also learn what people mean from their public behavior.

    Public behavior demonstrates reference, which is not the same as meaning.

  35. BruceS: Does rule-following behavior which causally interacts with the world produce semantics? The systems reply says it does — within the overall system, not the person following the rules.

    Keep in mind that Searle rejected the Systems Reply. Searle’s response, as best I recall, was that if the system as a whole had semantics, that would not be on account of the computation alone (the rule following behavior).

    However, this does remind me of something. At one time, some students at Northwestern U. put a radio near a computer to pick up the electrical noise. Then they ran a program such that the noise became music. I guess that’s rule-following behavior that causally interacts with the world.

  36. Neil Rickert: Yes, but is there a fact of the matter as to what I mean by “like”?

    The distinction between what I like, and what I mean by “like” is more or less the distinction between reference and meaning.

    Public behavior demonstrates reference, which is not the same as meaning.

    I agree with Quine that public behavior doesn’t ‘demonstrate’ reference either.

  37. walto,

    Many people (including Dennett) would say that, well, SOMETHING is a semantic engine.

    Dennett says (and I agree) that we are “sorta” semantic engines — good enough for government work — but not genuine semantic engines, which are impossible:

    Any configuration of brain parts is subject to the same limitations. It will be caused by physiochemical forces to do whatever it does regardless of what the input means (or only sorta means). Don’t make the mistake of imagining that brains, being alive, or made of proteins instead of silicon and metal, can detect meanings directly, thanks to the wonder tissue in them. Physics will always trump meaning. A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine — physically impossible. So how can brains accomplish their appointed task? By being syntactic engines that track or mimic the competence of the impossible semantic engine.

    walto:

    I think one of the baffling things about Dennett’s view is his claim that there’s no fact of the matter regarding what someone means. Even if meaning were context dependent, we’d normally say there is a fact of the matter–it just depends on the context. In context A the fact is that F; in context B the fact is that G. But Dennett seems to weave in the sorites problem (fuzziness), so that nothing clearly means anything (except, I guess, his papers).

    I take Dennett to be saying that while a representation may pick out one referent in the actual world, that doesn’t mean that the referent is the “one true meaning” of the representation. When I refer to “Elvis Presley”, I’m indicating the guy who lived in Graceland, appeared on the Ed Sullivan show, etc. There is (was) only one such guy in our world. However, there are other possible men, who never existed, who could fit my concept of “Elvis Presley”. (That’s one of the points I was trying to make in the God and Identity thread.)

    Here’s Dennett, from chapter 30 of Intuition Pumps:

    The reason we don’t have indeterminacy of radical translation is not because, as a matter of metaphysical fact, there are “real meanings” in there, in the head (what Quine called the “museum myth” of meaning, his chief target). The reason we don’t have indeterminacy in the actual world is that with so many independent constraints to satisfy, the cryptographer’s maxim assures us that it is a vanishingly small worry. When indeterminacy threatens in the real world, it is always just more “behavioral” or “dispositional” facts — more of the same — that save the day for a determinate reading, not some mysterious “causal power” or “intrinsic semanticity”. Intentional interpretation almost always arrives in the limit at a single interpretation, but in the imaginable catastrophic case in which dual interpretations survived all tests, there would be no deeper facts to settle which was “right”. Facts do settle interpretations, but it is always “shallow” facts that do the job.

  38. keiths: A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine — physically impossible. So how can brains accomplish their appointed task? By being syntactic engines that track or mimic the competence of the impossible semantic engine.

    I wonder at this — why can’t there be something that responds directly to meanings? Because that’s incompatible with metaphysical naturalism?

    I can see how someone might reason as follows:

    (1) there is no room for intensional entities in the world described in a wholly extensional language;
    (2) a wholly extensional language is sufficient for natural science;
    (3) so there is no room for intensional entities in a scientific description of the world;
    (4) a genuine semantic engine would be responsive to intensional entities without any causal mediation;
    (5) so there is no room for genuine semantic engines in a scientific description of the world;
    (6) although there is room for mimics of semantic engines.

    I find (4) questionable; it’s not clear why there couldn’t be genuine semantic engines that are directly responsive to meanings by way of causal mediation. (To see how the “directly” and “mediation” are not in conflict, reflect on Davidson’s distinction between sensations as epistemic intermediaries and causal intermediaries — although to my knowledge that distinction, with regard to sensations, was first developed by Roy Wood Sellars.)

    However, I find (2) entirely questionable here. It is probably true that fundamental physics can be conducted in an extensional language (at any rate I can’t think of any reason why it couldn’t be), but biology cannot be. I say that because biological concepts such as “function” and “goal” are intensional locutions. To insist that scientific descriptions must be extensional is to deny the possibility of a science of life, or put otherwise, it is to insist, on completely a priori semantic grounds, that biology must be reducible to physics.

    I find it intolerable to insist that our epistemology be constrained by our semantics in that fashion. Our epistemology should be constrained by successful science, not by Quinean anxieties about intensional entities! We have no reason to believe that successful intertheoretic reduction of biology to physics is anywhere on the horizon, and while it might perhaps come about, it will come about through the hard work of philosophy of science and not through a dogmatic, a priori aversion to intensional discourse.

  39. Kantian Naturalist: I can see how someone might reason as follows:

    (1) there is no room for intensional entities in the world described in a wholly extensional language;
    (2) a wholly extensional language is sufficient for natural science;
    (3) so there is no room for intensional entities in a scientific description of the world;
    (4) a genuine semantic engine would be responsive to intensional entities without any causal mediation;
    (5) so there is no room for genuine semantic engines in a scientific description of the world;
    (6) although there is room for mimics of semantic engines.

    I see (2) as highly doubtful, even for physics.

    This reminds me of a poster on usenet, who complained about intension. His suggestion was that we should only use FOPC (first order predicate calculus) as our language. I challenged him to write his next usenet post in FOPC. He never did.

  40. keiths:
    walto,

    Dennett says (and I agree) that we are “sorta” semantic engines — good enough for government work — but not genuine semantic engines, which are impossible:

    walto:

    I take Dennett to be saying that while a representation may pick out one referent in the actual world, that doesn’t mean that the referent is the “one true meaning” of the representation. When I refer to “Elvis Presley”, I’m indicating the guy who lived in Graceland, appeared on the Ed Sullivan show, etc. There is (was) only one such guy in our world. However, there are other possible men, who never existed, who could fit my concept of “Elvis Presley”. (That’s one of the points I was trying to make in the God and Identity thread.)

    Here’s Dennett, from chapter 30 of Intuition Pumps:

    Thanks for the quotes. I guess where I mostly differ from Dennett is that I’d say that the basic datum that needs to be explained is our semantical enginehood (whatever, precisely, that consists in). Any denial of that, or claim that it’s only “sorta” is, to me, akin to the question on the other thread of whether we’re conscious (something whose denial is a lot like Moore’s “It’s raining but I don’t believe it.” paradox).

    What I mean by “horse” may be context-dependent or even completely indeterminate, but I have to mean something (though maybe not much) even for that to be the case. Whatever that is, is what requires explanation, not proposed elimination.

  41. KN,

    I wonder at this — why can’t there be something that responds directly to meanings?

    I suppose it’s logically possible, but no one has ever discovered such a thing. What could it be? What mechanism could possibly bridge the gap between the semantic and the physical? The laws of physics are meaning-independent. Meaning has no influence over them.

    To argue that nonphysical meanings have physical effects is as implausible as arguing that nonphysical souls have physical effects. There’s no evidence for it, and not even a glimmer of a possible mechanism.

    Take anything that appears to respond to meanings and drill down. By the time you get to the level of physics, all apparent meaning has vanished, and all that’s left is pure syntax — particles, fields and forces all operating according to the (non-semantic) laws of nature.

    Because that’s incompatible with metaphysical naturalism?

    No. The same problems arise even if you assume dualism. Suppose there’s a nonphysical realm. How does it communicate with the physical? Why do we never see exceptions to the laws of physics, caused by the influence of nonphysical events?

    Also, the Dennett-style view of intentionality doesn’t require us to assume metaphysical naturalism. It just depends on the fact that the physical world appears to be causally closed, which can be true even if metaphysical naturalism is not.

  42. walto,

    Thanks for the quotes. I guess where I mostly differ from Dennett is that I’d say that the basic datum that needs to be explained is our semantical enginehood (whatever, precisely, that consists in). Any denial of that, or claim that it’s only “sorta” is, to me, akin to the question on the other thread of whether we’re conscious (something whose denial is a lot like Moore’s “It’s raining but I don’t believe it.” paradox).

    What I mean by “horse” may be context-dependent or even completely indeterminate, but I have to mean something (though maybe not much) even for that to be the case. Whatever that is, is what requires explanation, not proposed elimination.

    There’s a big difference between what Graziano is doing and what Dennett is doing. Graziano is actually denying that we have subjective experiences. Dennett is not denying that we experience ourselves as true semantic engines; he’s merely denying that we are true semantic engines. To deny the veridicality of an experience is far less extreme than denying the existence of the experience. (I doubt that many people deny the existence of near-death experiences. It’s just that some of us doubt that they mean what many of the experiencers take them to mean.)

    He’s also not denying that our representations often pick out unique referents in the world. All he’s denying is that there is a single, true meaning — “intrinsic semanticity” — that metaphysically tethers a representation to its one true referent.

  43. keiths: Graziano is actually denying that we have subjective experiences.

    Is it as strong as that? I get the impression that he thinks concepts such as “consciousness” are as useful in elucidating how brains work as I think the concept “quale” is (which is not very).

  44. Again, it’s precisely the “sorta” semantic engines that we are that need to be explained. That’s the data. Whether or not there are other, perhaps better, semantic engines around, whether or not ours is faulty in some respects compared to Dennett’s conception of how an ideal engine might be expected to churn out successful meanings, strikes me as only mildly interesting, and mostly off topic. I take whatever it is that we do with respect to meaning to be the paradigm case here. That’s what needs explaining, because that’s how we understand what meaning and reference IS. If AI produces machines that do something different though in many respects “better”–the mystery will continue.

    Re the Graziano, my comparison may well be flawed: I confess that I haven’t read it. The notion seems ridiculous and confused to me. Before I read that I want to get to the bottom of the disappearance (and perhaps non-existence in the first place!) of certain socks.

  45. Neil Rickert: Keep in mind that Searle rejected the Systems Reply. Searle’s response, as best I recall, was that if the system as a whole had semantics, that would not be on account of the computation alone (the rule following behavior).

    However, this does remind me of something. At one time, some students at Northwestern U. put a radio near a computer to pick up the electrical noise. Then they ran a program such that the noise became music. I guess that’s rule-following behavior that causally interacts with the world.

    Well, yes, Searle rejected it. That does not mean it is wrong! It may be at least part of the way to understanding intentionality.

    I am not sure whether the radio stuff was a joke, but my point about rule-following behavior was in the context of my overall response to Walt’s “derive semantics from syntax”. The point was that the CR experiment as originally described was not about deriving semantics from syntax; it was about Searle postulating a rule-following human, saying it would not understand Chinese, and saying (mistakenly) that therefore nothing about the room understood Chinese.

    You don’t have to go by memory on this. There is a great summary at the SEP link I posted. Most of my points are taken from there, where they are explained in more detail.

  46. walto: What matters here is that they are SYNTACTICAL rules only that are being learned and followed. If the rules included designation rules (assuming they aren’t thought to be analytic–see above), then there would be no mystery to meaning and an easy answer to the question “Does the guy understand Chinese?” (He would.)

    I never claimed the guy understands Chinese. That is Searle’s mistake according to the systems reply. A bit more on this at my above reply to Neil.

    Another way to look at it. Our minds understand. Our brains do not. Mind is a process “running” on the brain, to put it very crudely. It is not just the structure of the brain and the biochemical rules it obeys.

    In the CR, the guy is just the brain part of that analogy.

    ETA: I have not read all the posts to see if this point has been made already, but I also wanted to say that the reason the word “engine” in the Dennett stuff is important is the need to consider the ongoing processing of the brain or person in the room.

  47. Alan Fox: Is it as strong as that? I get the impression that he thinks concepts such as “consciousness” are as useful in elucidating how brains work as I think the concept “quale” is (which is not very).

    I have not kept up with the other thread, but if you read his book as I have, he does not claim there is no such thing as consciousness or subjectivity.

    He is trying to explain the basic fact of awareness. He explicitly states he is not addressing the contents of awareness, e.g. qualia. He also spends a whole chapter explaining how subjective awareness arises in his model. (ETA: More exactly, why awareness of our own attention is more “vivid” than our awareness of other people’s.)

    I posted a bit about the book early in that other thread.

    The Times article is a popularization by him, probably edited even further for space. Further, it was likely worded with a controversial tone to attract readership. I agree it may be open to misinterpretation because of those factors.

  48. Kantian Naturalist: Our epistemology should be constrained by successful science, not by Quinean anxieties about intensional entities!

    I see you have brought “intensionality” into the conversation (ie with an s) which will only serve to confuse philosophical duffers like me.

    I am going to assume it is close enough for this conversation to say that “intensional” entities are those entities that “respond to meaning”, to use your other phrase.

    First point: I understand Millikan’s project as biological, not reduction to physics. How can the existence of intensional entities be explained by biological evolution? Are you claiming that life requires intensionality from the start, that is, that the first forms of life were intensional entities?

    Second point: I think that on even days of the week, Dennett would say that people are intensional entities. On odd days, he would say they act “as if” they were intensional entities. My point is that based on L&R’s analysis of his “Real Patterns” he is not clear on whether the science of the personal level, which presumably would involve intentionality, refers to anything that is a “real” pattern.

    But if the patterns are real, then would that not meet you part way, at least, in providing the real intensional entities? With that in mind, his personal/subpersonal analysis and his “sorta” operator are a philosophical start at the intertheoretic reduction you mention.

  49. I find the phrase “mind is a process running on the brain” unhelpful. I’m not aware of any phrase or metaphor that would be helpful.
