Obscurantism

The subject of obscure writing came up on another thread, and with Steven Pinker’s new book on writing coming out next week, now is a good time for a thread on the topic.

Obscure writing has its place (Finnegans Wake and The Sound and the Fury, for example), but it is usually annoying and often completely unnecessary. Here’s a funny clip in which John Searle laments the prevalence of obscurantism among continental philosophers:

John Searle – Foucault and Bourdieu on continental obscurantism

When is obscure prose appropriate or useful? When is it annoying or harmful? Who are the worst offenders? Feel free to share examples of annoyingly obscure prose.

408 thoughts on “Obscurantism”

  1. petrushka: I would argue that we have nothing equivalent to Newton’s laws with regard to how brains work.

    Science is systematic. You probably cannot systematize brains because they are all different.

  2. petrushka:

    I have compared the problem to that of OOL. We know a lot of chemistry, but we can’t make life from first principles. Nor can we make AI.

    I do agree human brains and how they contribute to human beings are harder subjects of study than are physics or chemistry. Most physicists and chemists would agree, I’d venture.

    But are you saying that we cannot now make AI or that we cannot ever make AI? The first is obvious; the second needs evidence and argument if that is what you are claiming.

    “A lot can happen between now and never”

    (– guess the G of T source)

  3. walto:
    Thanks. I have those two Dretske books, though I haven’t read them. I do know the “Experience as Representation” paper very well (and like it a lot)–I even considered including it in my Hall book–but it’s not really relevant to this particular issue, I don’t think.

    Now I’ve lost the plot. I thought the issue was just about why natural-law type meanings were needed to get the evolution of mental representation started, which I’ll admit is not rocket science (ie Newton’s laws are not involved). And there are more subtle issues involved than just relying on natural law to have representation vehicles vary with their content through causation.

    Wasn’t Dretske the source of a related microorganism representation example (magnetic particles in microorganisms representing the direction of oxygen)? I’m going by memory on that. It might work better as an example than the chemical gradient, I think, because of the internalized, causally-varying representation.

    Now I admit that a few grains of magnetic material is not face-recognition, but we all have to start somewhere.

    What issue am I missing that you think the exchange with Keith is about?

  4. BruceS: Now I’ve lost the plot. I thought the issue was just about why natural-law type meanings were needed to get the evolution of mental representation started, which I’ll admit is not rocket science (ie Newton’s laws are not involved). And there are more subtle issues involved than just relying on natural law to have representation vehicles vary with their content through causation.

    Wasn’t Dretske the source of a related microorganism representation example (magnetic particles in microorganisms representing the direction of oxygen)? I’m going by memory on that. It might work better as an example than the chemical gradient, I think, because of the internalized, causally-varying representation.

    Now I admit that a few grains of magnetic material is not face-recognition, but we all have to start somewhere.

    What issue am I missing that you think the exchange with Keith is about?

    I’m not sure what you’re asking here. As I said, I haven’t read those books so they may indeed discuss the teleological/evolutionary meaning stuff that I take keiths to be talking about above. But that paper about representation is not on that subject and is consistent with the view that semantics is not derivable from syntax.

  5. walto,

    So, suppose we want to respond to Searle’s Chinese Room argument, according to which one can never get to semantics from syntax, because, on his view, there’s what amounts to an impregnable barrier.

    He likely wouldn’t object to these, call them, “proto-meanings” that bacteria use and that can be replicated by machines. There’s no question that THOSE are entirely syntax. There’s also no question that our own full-blooded meanings evolved from something like those utilized by the bacteria. Hence…

    Interesting. Is that the strategy?

    Yes. The ‘impregnable barrier’ isn’t so impregnable after all. Semantics can emerge from pure syntax, and indeed must have done so during the process of evolution.

  6. keiths:

    I’m comfortable with that idea, and I’m even comfortable saying that neurons ‘decide’ to fire under certain conditions, but to say that neurons

    …want to join coalitions with other neurons to “push” their point of view (ie their source of activation) towards “brain fame” = consciousness.

    …is going way too far. A neuron knows nothing of coalitions or consciousness or “brain fame”.

    Bruce:

    Yet that is one way of describing a common working theory of neuroscientists to help explain consciousness or at least the way awareness and attention work together. Granted that it is just a way of thinking about it, not the detailed model that the scientists test.

    My objection isn’t to the theory per se, but to the way you are describing it. To say that a neuron “decides to fire” or “wants to fire” under certain circumstances seems like an acceptable application of the intentional stance, but to say that it “wants to join coalitions”, etc., doesn’t.

    And to be fair to Dennett, he doesn’t quite say what you are attributing to him. He says that neurons “form coalitions”, not that they “want to join coalitions”, and the difference is important.
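
    To make the “decides to fire” gloss concrete, here is a minimal sketch (illustrative only; the model and all parameters are invented for this example, not taken from the discussion) of a leaky integrate-and-fire neuron. The “decision” is nothing more than a threshold test on accumulated input, which is why the intentional language seems harmless at this level:

        # Illustrative sketch: a leaky integrate-and-fire neuron.
        # The "decision to fire" is a bare threshold comparison; the
        # neuron "knows" nothing about coalitions or consciousness.
        class LIFNeuron:
            def __init__(self, threshold=1.0, leak=0.9):
                self.potential = 0.0        # membrane potential (arbitrary units)
                self.threshold = threshold  # firing threshold
                self.leak = leak            # fraction of potential retained per step

            def step(self, input_current):
                """Integrate input; 'decide' to fire if the threshold is crossed."""
                self.potential = self.potential * self.leak + input_current
                if self.potential >= self.threshold:
                    self.potential = 0.0    # reset after the spike
                    return True             # the 'decision': a threshold test
                return False

        neuron = LIFNeuron()
        print([neuron.step(i) for i in [0.3, 0.4, 0.5, 0.1, 0.9]])
        # -> [False, False, True, False, False]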

  7. Bruce,

    Teleosemantics — roughly, meaning has arisen naturally from the representation mechanism evolving within an agent to serve some other consumer within the agent and being selected by evolution because it increases fitness of the agent. I think Keith’s point is that such a thing would not be possible if there was no natural way to form somewhat reliable causative links between content and its representation in some vehicle in the agent.

    That’s right. If the representation isn’t causally tied to its referent, then any “stands for” relation that obtains is purely coincidental and likely cannot be systematically exploited by evolution.

    The Swamp Man stuff that I bring up as a part of a joke from time to time is an allusion to a common counter argument to this position. The teleosemantics position is that history matters: things have meaning because of the evolutionary history of the mechanism.

    I actually don’t think that the history matters, as long as the “stands for” criterion is satisfied. The meaning of the word “dog” is the same to the Swamp Man and his predecessor, even though there is no causal link between actual dogs and the Swamp Man’s “dog” concept.

    It’s just that the “stands for” criterion is overwhelmingly unlikely to be met if there isn’t a causal link. In any realistic evolutionary scenario, there will be such a link, and it is the (possibly less than perfect) reliability of that link that is being exploited by evolution.

  8. keiths:
    walto,

    Yes. The ‘impregnable barrier’ isn’t so impregnable after all. Semantics can emerge from pure syntax, and indeed must have done so during the process of evolution.

    Thanks. I like that strategy. It’s important to restrict the meaning of “emerge” there, because I’m guessing Searle, and many of those convinced by his argument, will have no problem agreeing that our ability to cogitate emerged from other things at some point during the evolutionary journey.

    But maybe that can be accomplished. I will look at the Dretske books Bruce mentioned. Do you have other suggestions?

  9. walto: I’m not sure what you’re asking here. As I said, I haven’t read those books so they may indeed discuss the teleological/evolutionary meaning stuff that I take keiths to be talking about above. But that paper about representation is not on that subject and is consistent with the view that semantics is not derivable from syntax.

    Thanks, that makes sense.

    I have not read the book either, and was going more by the assumption that naturalization (as per the title) could only work if you invoked evolution somehow. I think Millikan (and Dennett) might do that more explicitly, at least from looking at various summaries of Dretske’s book online.

    ETA: “Derivable” is a slippery word. If one means derivable by deduction, it seems right to me to say that you cannot logically derive meaning from structure. But if the explanation involves “functioning for an agent to achieve its goals in the world” then maybe that adds something not available from formal deduction.

    ETA 2: And to finish my point properly, that type of functioning would “emerge” from evolution, assuming it increased the fitness of the organism.

  10. keiths:

    I actually don’t think that the history matters, as long as the “stands for” criterion is satisfied. The meaning of the word “dog” is the same to the Swamp Man and his predecessor, even though there is no causal link between actual dogs and the Swamp Man’s “dog” concept.

    Keith:
    I think the philosophical subtleties start to bite when you add the restriction that the explanation of how representations work must also explain why we make mistakes. We think we see a dog in dim light but it is actually a wolf. If a representation is based on a reliable causal natural law (like tree rings representing age), then why would it fail at times?

    I cannot do justice to all the philosophical arguments back and forth, but there are many philosophers who think the swamp man example combined with those philosophical nuances is a serious challenge to the Millikan/Dennett stuff I outlined.

    The SEP article I linked goes into the details, if you are interested.

    Dennett in Intuition Pumps dismisses the argument by saying history does matter, and the thought experiment is too divorced from reality to have force. Dretske talks about a “swamp-photo” in the paper of his I linked and suggests it is not a real photo. Similarly, a swamp man version of me might think he was me, but he would not be. Anyway, enough of that merry-go-round for now.

    ETA: Wanted to mention that the swamp man argument probably does not work against a Star Trek transporter functioning normally, since there is still a causal history from Captain Kirk on the transporter deck to Captain Kirk on the planet’s surface. But if the transporter fails and we end up with two Captain Kirks, one still on the deck and one on the planet, then maybe one at least is a swamp man analog? Anyway, I think that example came up long ago in a thread far, far away…

  11. keiths:
    To say that a neuron “decides to fire” or “wants to fire” under certain circumstances seems like an acceptable application of the intentional stance
    […]

    And to be fair to Dennett, he doesn’t quite say what you are attributing to him. He says that neurons “form coalitions”, not that they “want to join coalitions”, and the difference is important.

    Keith:

    To be consistent with Dennett, I should have said “sorta want”, not simply “want”.

    But as I read the whole chapter, Dennett does claim the intentional stance can be applied to neurons; that is, that the neurons’ behavior can be successfully modeled and predicted by assuming they have a limited form of agency.

    (ETA: Is saying neurons “want to fire” a valid way of applying the intentional stance to neurons? I think it might be mixing the neuron and sub-neuron level).

    Whether Dennett is an instrumentalist about that or whether he believes that the success of the model means the limited agency is a real property of neurons — well that is a different discussion.

  12. keiths:

    Yes. The ‘impregnable barrier’ isn’t so impregnable after all. Semantics can emerge from pure syntax, and indeed must have done so during the process of evolution.

    walto:

    Thanks. I like that strategy. It’s important to restrict the meaning of “emerge” there, because I’m guessing Searle, and many of those convinced by his argument, will have no problem agreeing that our ability to cogitate emerged from other things at some point during the evolutionary journey.

    True. I should stress that I mean weak emergence, and that the result does not qualify as what Searle would call “original intentionality.”

    But maybe that can be accomplished. I will look at the Dretske books Bruce mentioned. Do you have other suggestions?

    I’ve found Dennett’s “two-bitser” thought experiment to be very useful. The original essay is here, and the concept shows up in his later works including Intuition Pumps.

  13. walto:
    Thanks.

    FWIW, I found an online precis of the Dretske 95 book. Among the excerpts is page 7, para 2, where he explicitly states his acceptance of the Millikan et al approach and the role of evolution. However, he also says “more detail in chapter 5,” which I don’t have access to.

    (The excerpt is a pdf image so I cannot post it here but you have the book….)

    Now back to the regular show on the tautology that disproves 150 years of science. It’s amazing to me how this topic (and a few like it) can bring back the crowds. Although the posts on the history of the term are interesting.

  14. Perhaps irrelevant, but my 18-month-old grandson is visiting. Our cat is in hiding, but the kid sees the food bowl and says, “kitty.” The pet water fountain is also kitty. He does this with other words, like mama.

    I raised two children and don’t recall seeing this before.

  15. BruceS: FWIW, I found an online precis of the Dretske 95 book. Among the excerpts is page 7, para 2, where he explicitly states his acceptance of the Millikan et al approach and the role of evolution. However, he also says “more detail in chapter 5,” which I don’t have access to.

    Which of the two Dretske books are you referring to above?

  16. walto: Which of the two Dretske books are you referring to above?

    Naturalizing the Mind, which the precis excerpts (PDF) say is from 97 but which was 1995 according to the SEP. Maybe the paperback is 1997.

  17. Thx. I really like Dretske, but my view is based only on a bunch of his papers: I’ve never read any of his books, in spite of owning them all. He died just as I was finishing my Hall book, and I ended up not trying to contact the publisher of “Experience as Representation” for inclusion, but I think it would have made the book considerably better. His paper is maybe the clearest statement–though in a slightly more extreme form–of the representationism that I believe Hall is responsible for first expressing comprehensively (though I understand a couple of Medieval philosophers may have said something in that ballpark).

    I think I’m always going to regret that lacuna.

  18. A delayed response to this.

    walto: So, suppose we want to respond to Searle’s Chinese Room argument, according to which one can never get to semantics from syntax, because, on his view, there’s what amounts to an impregnable barrier.

    Does Searle really talk of “an impregnable barrier”?

    I’ve been taking Searle’s argument to be that syntax and semantics are very different kinds of things, that there’s something like a category mistake (in Ryle’s terminology) involved in the idea of going from syntax to semantics. I wouldn’t think that “impregnable barrier” was the right way of describing that.

    keiths: By ‘precognitive meaning’ I mean meaning in the absence of thought, e.g. when a bacterium interprets a chemical gradient as meaning that a food source can be found by swimming in a certain direction (aka ‘chemotaxis’).

    That seems about right to me.

    I think walto was responding to this when he said (of Searle):

    He likely wouldn’t object to these, call them, “proto-meanings” that bacteria use and that can be replicated by machines. There’s no question that THOSE are entirely syntax.

    I’m not sure where walto gets that idea. I do not see anything syntactic about the behavior of bacteria.

  19. Neil Rickert,

    I don’t know, actually. Could we not simulate the activity of a bacterium?

    BTW, I do think Searle would accept the notion of an “impregnable barrier” between syntax and semantics. You likely remember that he says a Chinese city is no closer than a Chinese room: no piling on of syntax ever gets you to semantics on his view. I guess another way of saying that is that they are different “categories.” That comes to much the same thing, I think.

  20. As far as I understand Searle (which is not very far), the “you can’t get to semantics from syntax” is not a conclusion of his, but a premise.

    The Chinese Room thought-experiment is designed to get us to accept the intuitiveness of that premise, rather than an argument which yields that assertion as a conclusion. If Searle has any actual arguments that give us “you can’t get to semantics from syntax” as a conclusion, I’m not aware of them.

    Granted, I’m inclined to share his intuition — but that’s hardly a reason to think it’s correct. For all we know, it could be that a correct theory of semantics would show that it is constructible from syntax after all. If that theory turns out to be “counter-intuitive,” so much the worse for our “intuitions”!

  21. Neil,

    Does Searle really talk of “an impregnable barrier”?

    Not in those exact words, but he does say this:

    Syntax by itself is neither constitutive of nor sufficient for semantics.

  22. Bruce,

    But as I read the whole chapter, Dennett does claim the intentional stance can be applied to neurons; that is, that the neurons’ behavior can be successfully modeled and predicted by assuming they have a limited form of agency.

    Yes, but at a different level than the one you were suggesting.

    You wrote:

    I think Dennett now says that one should take the intentional stance towards neurons, because neurons want to join coalitions with other neurons to “push” their point of view (ie their source of activation) towards “brain fame” = consciousness.

    Dennett agrees that neurons form coalitions, but he doesn’t say that they “want” or “sorta want” to join coalitions. His application of the intentional stance to neurons is more modest:

    At the cell level, the individual neurons are more exploratory in their behavior, poking around in search of better connections, changing their patterns of firing as a function of their recent experience.

    He explicitly states that they are ignorant of what goes on at the higher levels:

    …you might think of them as nerve cells in jail cells, myopically engaged in mass projects of which they have no inkling, but ever eager to improve their lot by changing their policies. At higher levels, the myopia begins to dissipate, as groups of cells — tracts, columns, ganglia, “nuclei” — take on specialized roles that are sensitive to ever-wider conditions, including conditions in the external world.

    It makes sense. A neuron doesn’t “know” (or “sorta know”) that it’s part of a ganglion any more than a logic gate “knows” that it’s part of an ALU.
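
    The logic-gate analogy can be made literal. In this sketch (illustrative only, not drawn from Dennett or the thread), the NAND function is defined purely locally; the half adder exists only in how the gates are wired together, and nothing in the gate refers to it:

        # Illustrative sketch: a gate 'knows' nothing of the circuit it is part of.
        def nand(a: int, b: int) -> int:
            """A NAND gate, defined with no reference to adders or ALUs."""
            return 0 if (a and b) else 1

        def half_adder(a: int, b: int):
            """A half adder built entirely from NAND gates. Its higher-level
            role exists only in the wiring, not in any gate."""
            n1 = nand(a, b)
            s = nand(nand(a, n1), nand(b, n1))  # XOR from NANDs -> sum bit
            c = nand(n1, n1)                    # AND from NANDs -> carry bit
            return s, c

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, half_adder(a, b))
        # (0,0)->(0,0)  (0,1)->(1,0)  (1,0)->(1,0)  (1,1)->(0,1)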

  23. keiths: Not in those exact words, but he does say this:

    Syntax by itself is neither constitutive of nor sufficient for semantics.

    Yes, I agree he says that. But I think “impregnable barrier” gives a wrong impression of his view. “Impregnable barrier” suggests two things side by side, but separated by a barrier. But I think he is instead saying that the two concepts are orthogonal (or something like that).

  24. keiths:

    I actually don’t think that the history matters, as long as the “stands for” criterion is satisfied. The meaning of the word “dog” is the same to the Swamp Man and his predecessor, even though there is no causal link between actual dogs and the Swamp Man’s “dog” concept.

    Bruce:

    I think the philosophical subtleties start to bite when you add the restriction that the explanation of how representations work must also explain why we make mistakes. We think we see a dog in dim light but it is actually a wolf. If a representation is based on a reliable causal natural law (like tree rings representing age), then why would it fail at times?

    The causal relationship doesn’t always have to be reliable (or even to exist at all — consider unicorns). It’s just that evolution will favor it when it is.

    We can establish an artificial chemical gradient in a petri dish and watch as the bacteria swim toward the nonexistent food. There is a “stands for” relation — the gradient stands for the food — but the food isn’t real. It’s an illusion created by the experimenter.

    I cannot do justice to all the philosophical arguments back and forth, but there are many philosophers who think the swamp man example combined with those philosophical nuances is a serious challenge to the Millikan/Dennett stuff I outlined.

    Since I don’t insist on a causal linkage between representation and referent, I think I’m off the hook.

    Dennett in Intuition Pumps dismisses the argument by saying history does matter, and the thought experiment is too divorced from reality to have force.

    I think he’s copping out. Thought experiments needn’t be realistic to be effective. Twin Earth isn’t very realistic, but Dennett certainly acknowledges its importance. He even refers to his “two-bitser” intuition pump as “the poor man’s Twin Earth”!
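
    A toy version of the petri-dish setup above (a sketch only; the gradient function and parameters are invented) shows how the run-and-tumble rule responds to the gradient rather than to the food, so an artificial gradient fools it just as well:

        import random

        # Illustrative sketch: run-and-tumble chemotaxis. The rule consults
        # only the chemical gradient; whether food actually exists plays no
        # role, so an experimenter's artificial gradient works just as well.
        def concentration(x: float) -> float:
            """Attractant concentration, rising with x. The experimenter
            controls this -- there need be no food at all."""
            return x

        def step(x: float, heading: int):
            """Run if concentration rose; otherwise tumble to a random heading."""
            new_x = x + 0.1 * heading
            if concentration(new_x) > concentration(x):
                return new_x, heading               # run: things are improving
            return new_x, random.choice([-1, 1])    # tumble: try a new direction

        x, heading = 0.0, random.choice([-1, 1])
        for _ in range(200):
            x, heading = step(x, heading)
        print(f"final position: {x:.1f}")  # drifts up-gradient, food or no food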

  25. Neil,

    Yes, I agree he says that. But I think “impregnable barrier” gives a wrong impression of his view. “Impregnable barrier” suggests two things side by side, but separated by a barrier. But I think he is instead saying that the two concepts are orthogonal (or something like that).

    Shrug. I knew what walto was getting at, and now you do too.

  26. As for how Searle thinks that original intentionality can arise without being reducible to syntax, I don’t know. I’ve ordered his book and I’ll let you all know what I learn.

    I do remember reading somewhere that he thinks that desires, like hunger, are intrinsically about their objects, but if so, I don’t know how he justifies this.

  27. walto:
    Thx. I really like Dretske, but my view is based only on a bunch of his papers: I…

    I am not sure if you plan to pursue this right now, but if you do, I’d be interested in your thoughts on a disagreement between Dennett and Dretske.

    In that page 7 paragraph I noted, Dretske states he agrees with Millikan but that Dennett does not agree with him (Dretske). That confused me since Dennett seems to be in close agreement with Millikan.

    Dretske cites Dennett’s Evolution, Error and Intentionality (a book chapter, per Dretske’s citation) as the source of the disagreement with Dennett.

    As I read it, Dennett says there that Dretske believes in original intentionality and that is their source of disagreement.

    If you do decide to look further at the Dretske stuff, I’d be interested in your thoughts on this issue. Is Dennett right about Dretske’s views on intentionality?

  28. keiths:

    The causal relationship doesn’t always have to be reliable (or even to exist at all — consider unicorns). It’s just that evolution will favor it when it is.

    Keith:
    I agree that evolution is important.

    It’s the details of explaining how it works, how semantics “emerges”, that need to be addressed. As part of doing so, one has to explain why mental representations can be unreliable but still increase fitness somehow. One also needs to address the disjunction problem.

    I am not saying these are unsolvable issues, only that the arguments these issues raise need to be confronted.

  29. keiths:

    A neuron doesn’t “know” (or “sorta know”) that it’s part of a ganglion any more than a logic gate “knows” that it’s part of an ALU.

    Keith:
    I’ve always found the term “sorta” to be a bit vague — maybe Dennett needs it to be that way to work in all the circumstances he uses it.

    On page 97 of Intuition Pumps he says
    “We use the intentional stance to keep track of the beliefs and desires (or ‘beliefs’ and ‘desires’ or sorta beliefs and sorta desires) of the (sorta-) rational agents at every level, from the simplest bacterium through all the discriminating, signaling, comparing, remembering circuits that comprise the brains of animals from starfish to astronomers.”

    Based on my reading of that, I think applying “sorta” to neurons is compatible with the agency Dennett attributes to them. Having a “sorta desire” is a sorta want.

    I do agree the subpersonal level does not “know” about the personal level and that is true for all sublevels of the subpersonal. I understand that to be a key point of his model. You do have to stay within the level when applying the intentional stance.

    How do you understand “sorta”?

  30. BruceS: As part of doing so, one has to explain why mental representations can be unreliable but still increase fitness somehow.

    If fitness is defined by reproduction, beer goggles can increase fitness.

  31. Kantian Naturalist:

    The Chinese Room thought-experiment is designed to get us to accept the intuitiveness of that premise,

    I have not read all the papers, but if the history outlined in the SEP CR article is accurate, Searle switches the intuition he is appealing to. In later papers, it is claimed that

    Searle links intentionality to awareness of intentionality, in that intentional states are at least potentially conscious.

    My suspicion is that many people reviewing the CR experiment mix “knowing meaning” with the “awareness that one knows meaning”. The two are different and I think one can have the first without the second (eg in animals). Or even the second without the first (temporarily anyway, as in — I know I know your name, just give me a moment…)

  32. BruceS: It’s the details of explaining how it works, how semantics “emerges”, that need to be addressed.

    I’m inclined to say that semantics doesn’t emerge. It is there from the get-go. Or, if you like, it is an aspect of homeostasis.

    It is syntax that emerges. In its own way, syntax depends on semantics. That is it depends on a very narrowly constrained semantics of syntax.

    Hmm, perhaps it’s because I’m a mathematician that I see it that way.

    From my point of view, a computer does syntax only in the sense of derived intentionality. We take the computer operations to be syntactic, but they are really electromagnetic and the idea that they are syntactic is an interpretation that we find it useful to impose on the computer.

  33. I don’t understand how you are using either “syntax” or “derived,” Neil. The question that Searle and his critics seem to me to be interested in is whether all the formation rules for some language can produce a single reference or designation rule. Can one who knows no Chinese, e.g., come to understand what a single Chinese word means, by memorizing a Chinese dictionary or are the interchange rules provided by a dictionary a kind of “closed loop” that don’t get you a single smidge of meaning?

  34. BruceS: I am not sure if you plan to pursue this right now, but if you do, I’d be interested in your thoughts on a disagreement between Dennett and Dretske.

    In that page 7 paragraph I noted, Dretske states he agrees with Millikan but that Dennett does not agree with him (Dretske). That confused me since Dennett seems to be in close agreement with Millikan.

    Dretske cites Dennett’s Evolution, Error and Intentionality (a book chapter, per Dretske’s citation) as the source of the disagreement with Dennett.

    As I read it, Dennett says there that Dretske believes in original intentionality and that is their source of disagreement.

    If you do decide to look further at the Dretske stuff, I’d be interested in your thoughts on this issue. Is Dennett right about Dretske’s views on intentionality?

    I’ll try to read the Dennett/Haugeland paper as well as “Evolution, Error, and Intentionality” over the next few days and give my impression. I glanced at the Dretske book and there are only a couple of brief references to Dennett there, so I don’t know if I’ll be able to get a sense of exactly how he responds to Dennett’s critique. But if I can suss it out, I’ll report on that too.

  35. I’ll also try to catch up on the papers being discussed here, since I actually do find these issues about semantics and syntax central to my concerns. I’ve been a bit busy lately with finishing the book (!), grading, and going back on the tenure-track job market.

  36. walto: … come to understand what a single Chinese word means, by memorizing a Chinese dictionary or are the interchange rules provided by a dictionary a kind of “closed loop” that don’t get you a single smidge of meaning?

    I am not sure of the exact parameters of the CR, but I think it has to go beyond memorizing a dictionary, or even solely memorizing anything.

    Why can’t the questions include personal questions? For example, how big is your family? Where did you go to school? When did you lose your virginity?

    So there would have to be a fully constructed person embedded somehow in that “dictionary”.

    Further, why can’t the questions refer to past conversation? Like, “Did my question about losing your virginity bother you? How so?”

    Once you allow for that, I think you start to see the reasoning behind the systems reply. Although I also think an agent needs the ability to act in its own interests in the world to fully justify attributing semantics to it.

  37. I don’t understand your post, Bruce. I take it that the Chinese Room story is about someone who is given the definitions of every Chinese word in Chinese and manages to learn them all. So he’s basically got an entire Chinese dictionary in his head. We may also give him all the rules of Chinese grammar so he can put the words he “knows” into WFFs.

    Then the question comes, does such a person actually understand any Chinese at all? Searle says NO: one can’t get any semantics (meanings) no matter how much syntax one has mastered.

    So, I don’t understand what you are saying above about personal questions.

  38. Kantian Naturalist:
    I’ll also try to catch up on the papers being discussed here, since I actually do find these issues about semantics and syntax central to my concerns. I’ve been a bit busy lately with finishing the book (!), grading, and going back on the tenure-track job market.

    Good luck on your job hunt, KN.

    I came across this PhD thesis, Beyond Folk Psychology, which may interest you. The author tries to reply to objections of modern phenomenological philosophers who don’t accept psychology approaches to understanding social interaction. He uses his version of Dennett’s personal/subpersonal division to help do so.

    I’ve only read the first couple of chapters, which are introductory and would likely be of little interest to you. But possibly the remainder might be, since the thesis seems to cover very roughly similar ground to some of the topics you have mentioned in this forum that interest you.

    BTW, good reply to WJM in the other thread on the tautology in evolution. That’s the best approach to dealing with him, I agree. I learn things from Joe F and Steve S when they take time to reply to the ID supporters in that thread, but I am mystified by their motivations in repeatedly doing so, especially in TSZ where the audience is so limited and all of them have no doubt made up their minds.

  39. walto:
    I don’t understand your post, Bruce. I take it that the Chinese Room story is about someone who is given the definitions of every Chinese word in Chinese and manages to learn them all.

    It’s been a while since I read the paper, but I thought the person in the room had to answer questions posed in Chinese and reply in Chinese as well. So he as a person could not understand the questions or the answers. (ETA: “Learn” might be taken to imply he understands the meaning of words, so I would avoid that word.)

    Hence he would not be able to use his personal experience to answer the questions. Instead, valid answers would have to be based solely on him slavishly manipulating symbols.

    Further, a static dictionary/book of rules would not work since the questions could refer to the past history of this particular series of questions.

    The system reply to Searle says that understanding resides in the virtual entity consisting of the person in the room, the book of rules, and the actions in using the rules to reply to questions. (But I am not sure how the experiment as I recall it addresses the need for dynamic memory of the conversation.)

    BTW, there are similar concerns with couching the argument in terms of syntax versus semantics. Syntax is static, abstract structure. But computers are dynamic, causal mechanisms.

    Further, as Neil points out, we supply the ones and zeros and the syntactic rules that computers are supposedly following. In fact, computers are electronic machines that operate according to the rules of physics. If you think of syntax as inert rules, then that alone is not an appropriate characterization of a computer. There’s lots more in this vein in the SEP article on the Chinese Room.
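
    A minimal sketch of the point about dynamic memory (nothing here comes from Searle’s paper; the pinyin strings are invented stand-ins for Chinese symbols): a stateless lookup table gives the same answer to the same question forever, so questions about the conversation itself can only be handled by the system of rules plus transcript:

        # Illustrative sketch: why a static rule book is not enough. A pure
        # lookup table is stateless; answering "what did I just ask?" needs
        # the system (rules + memory), the intuition behind the systems reply.
        RULE_BOOK = {                 # toy stand-in for the room's rule book
            "ni hao": "ni hao",
            "ni ji sui": "san shi sui",
        }

        def static_room(question: str) -> str:
            """Stateless symbol shuffling: same question, same answer, always."""
            return RULE_BOOK.get(question, "ting bu dong")

        class StatefulRoom:
            """The same rules plus a transcript -- the 'system', not the person."""
            def __init__(self):
                self.transcript = []

            def reply(self, question: str) -> str:
                if question == "wo gang cai wen le shen me":  # "what did I just ask?"
                    answer = self.transcript[-1] if self.transcript else "shen me dou mei wen"
                else:
                    answer = RULE_BOOK.get(question, "ting bu dong")
                self.transcript.append(question)
                return answer

        room = StatefulRoom()
        room.reply("ni hao")
        print(room.reply("wo gang cai wen le shen me"))  # -> "ni hao"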

  40. walto: I don’t understand how you are using either “syntax” or “derived,” Neil.

    Syntax: formal expressions that are required to strictly follow rules of form.

    As for why I say the computer has only derived intentionality with respect to syntax, that’s because the computer knows nothing of formal structure and does not understand the rules of form. It produces what we see as formal structure and follows what we see as rules of form only because its mechanisms do not permit it to do otherwise.

  41. Neil Rickert: Syntax: formal expressions that are required to strictly follow rules of form.

    I just mean the rules of form (or, I guess the ‘formal structures’ if those are rules of form). Not sure what your def means.

  42. Neil:

    It is syntax that emerges. In its own way, syntax depends on semantics. That is it depends on a very narrowly constrained semantics of syntax.

    Hmm, perhaps it’s because I’m a mathematician that I see it that way.

    I don’t think it’s because you’re a mathematician. I think it’s because you’re Neil. 🙂

    Neil:

    From my point of view, a computer does syntax only in the sense of derived intentionality. We take the computer operations to be syntactic, but they are really electromagnetic and the idea that they are syntactic is an interpretation that we find it useful to impose on the computer.

    Bruce:

    Further, as Neil points out, we supply the ones and zeros and the syntactic rules that computers are supposedly following. In fact, computers are electronic machines that operate according to the rules of physics. If you think of syntax as inert rules, then that alone is not an appropriate characterization of a computer. There’s lots more in this vein in the SEP article on the Chinese Room.

    Neil and Bruce,

    You’re both misunderstanding the meaning of ‘syntactic’ in the context of these discussions. The level of 1s and 0s is not the syntactic level of a computer. Physics is.

    That can be confusing, because in other contexts the manipulation of 1s and 0s according to formal rules would count as ‘syntax’, but here, the distinction is between systems that take meaning into account (aka ‘semantic engines’) versus those that don’t (‘syntactic engines’).

    Here’s Dennett (from chapter 31 of Intuition Pumps):

    How can meaning make a difference? It doesn’t seem to be the kind of physical property, like temperature or mass or chemical composition, that could cause anything to happen. What brains are for is extracting meaning from the flux of energy impinging on their sense organs, in order to improve the prospects of the bodies that house them and provide their energy. The job of a brain is to “produce future” in the form of anticipations about the things in the world that matter to guide the body in appropriate ways. Brains are energetically very expensive organs, and if they can’t do this important job well, they aren’t earning their keep. Brains, in other words, are supposed to be semantic engines. What brains are made of is kazillions of molecular pieces that interact according to the strict laws of chemistry and physics, responding to shapes and forces; brains, in other words, are in fact only syntactic engines.

    Imagine going to the engineers and asking them to build you a genuine-dollar-bill-discriminator, or, what amounts to the same thing, a counterfeit-detector: its specs are that it should put all the genuine dollars in one pile and all the counterfeits in another. Not possible, say the engineers; whatever we build can respond only to “syntactic” properties: physical details — the thickness and chemical composition of the paper, the shapes and colors of the ink patterns, the presence or absence of other hard-to-fake physical properties. What they can build, they say, is a pretty good but not foolproof counterfeit-detector based on such “syntactic” properties. It will be expensive, but indirectly and imperfectly it will test for counterfeithood well enough to earn its keep.

    Any configuration of brain parts is subject to the same limitations. It will be caused by physiochemical forces to do whatever it does regardless of what the input means (or only sorta means). Don’t make the mistake of imagining that brains, being alive, or made of proteins instead of silicon and metal, can detect meanings directly, thanks to the wonder tissue in them. Physics will always trump meaning. A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine — physically impossible. So how can brains accomplish their appointed task? By being syntactic engines that track or mimic the competence of the impossible semantic engine.
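
    Dennett’s counterfeit-detector can be sketched in a few lines (all features and thresholds here are invented). The test consults only measurable “syntactic” properties, and thereby tracks, imperfectly, the semantic property of being genuine; physically identical bills necessarily get identical verdicts:

        # Illustrative sketch of the counterfeit-detector: the classifier can
        # consult only physical ('syntactic') properties; genuineness (a
        # semantic, historical property) is tracked indirectly at best.
        def looks_genuine(bill: dict) -> bool:
            """Decide from measurable features alone; thresholds are invented."""
            return (
                0.10 < bill["thickness_mm"] < 0.12      # paper thickness
                and bill["magnetic_ink"]                # ink signature present
                and bill["watermark_contrast"] > 0.6    # watermark visibility
            )

        real = {"thickness_mm": 0.11, "magnetic_ink": True, "watermark_contrast": 0.8}
        fake = dict(real)  # a physically perfect forgery
        print(looks_genuine(real), looks_genuine(fake))  # -> True True
        # Physics, not meaning, drives the verdict: a good enough forgery passes.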

  43. keiths: You’re both misunderstanding the meaning of ‘syntactic’ in the context of these discussions. The level of 1s and 0s is not the syntactic level of a computer. Physics is.

    At the level of physics, the operation of the computer is semantic. It operates on electrical charges or currents according to their real world properties. So that’s completely semantic.

    What we have done, in designing computers, is harness those natural semantic actions so that they represent the syntactic actions of computation.

    What the brain does is harness the naturally semantic actions of biochemistry, so that they represent other kinds of semantics about other parts of the world.

  44. Neil,

    At the level of physics, the operation of the computer is semantic. It operates on electrical charges or currents according to their real world properties. So that’s completely semantic.

    Not by any definition of ‘semantic’ I’ve ever seen (other than your idiosyncratic definition, of course).

    The computer operates on charges and currents, of course, but the operation depends in no way on the meanings (if any) assigned to those charges and currents. It’s not semantic.

    If you don’t believe me, consult the dictionaries.

  45. keiths: Not by any definition of ‘semantic’ I’ve ever seen (other than your idiosyncratic definition, of course).

    I perhaps should have used “proto-semantic”, but it gets tedious having to do that all the time. The implications of the context should be sufficient.

  46. Neil,

    “Proto-semantic” doesn’t work any better. The word you’re looking for is “syntactic”. 🙂

  47. keiths: “Proto-semantic” doesn’t work any better. The word you’re looking for is “syntactic”.

    A flip-flop is in a stable state. The electrical flows are what keep it in that stable state. From the point of view of the stable process, the electrical flows can be reasonably said to be meaningful, though not in a conscious sense.

    The signal that triggers a change of state is admittedly different, a kind of external interference. So I don’t suggest anything semantic about that.
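
    For concreteness, here is a sketch of a latch along these lines (a cross-coupled NOR latch is used as the illustration; it is not necessarily the exact circuit Neil has in mind). The feedback between the two gates is what holds the stable state, and the set/reset pulses are the external interference that flips it:

        # Illustrative sketch: an SR latch from two cross-coupled NOR gates.
        # Feedback between the gates holds the stable state; set/reset pulses
        # are the 'external interference' that changes it.
        def nor(a: int, b: int) -> int:
            return 0 if (a or b) else 1

        def settle(s: int, r: int, q: int, nq: int, iters: int = 4):
            """Iterate the cross-coupled gates until the feedback settles."""
            for _ in range(iters):
                q, nq = nor(r, nq), nor(s, q)
            return q, nq

        q, nq = 0, 1                  # start in the 'reset' stable state
        q, nq = settle(1, 0, q, nq)   # apply a set pulse
        print(q, nq)                  # -> 1 0
        q, nq = settle(0, 0, q, nq)   # inputs removed: feedback holds the state
        print(q, nq)                  # -> 1 0  (the latch 'remembers')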
