The subject of obscure writing came up on another thread, and with Steven Pinker’s new book on writing coming out next week, now is a good time for a thread on the topic.
Obscure writing has its place (Finnegans Wake and The Sound and the Fury, for example), but it is usually annoying and often completely unnecessary. Here’s a funny clip in which John Searle laments the prevalence of obscurantism among continental philosophers:
John Searle – Foucault and Bourdieu on continental obscurantism
When is obscure prose appropriate or useful? When is it annoying or harmful? Who are the worst offenders? Feel free to share examples of annoyingly obscure prose.
Science is systematic. You probably cannot systematize brains because they are all different.
I do agree human brains and how they contribute to human beings are harder subjects of study than are physics or chemistry. Most physicists and chemists would agree, I’d venture.
But are you saying that we cannot now make AI or that we cannot ever make AI? The first is obvious; the second needs evidence and argument if that is what you are claiming.
“A lot can happen between now and never”
(– guess the G of T source)
Now I’ve lost the plot. I thought the issue was just about why natural-law-type meanings were needed to get the evolution of mental representation started, which I’ll admit is not rocket science (i.e., Newton’s laws are not involved). And there are more subtle issues involved than just relying on natural law to have representation vehicles vary with their content through causation.
Wasn’t Dretske the source of a related microorganism representation example (magnetic particles in microorganisms representing the direction of oxygen-free water)? I’m going by memory on that. It might work better as an example than the chemical gradient, I think, because of the internalized, causally-varying representation.
Now I admit that a few grains of magnetic material is not face-recognition, but we all have to start somewhere.
What issue am I missing that you think the exchange with Keith is about?
I’m not sure what you’re asking here. As I said, I haven’t read those books, so they may indeed discuss the teleological/evolutionary meaning stuff that I take keiths to be talking about above. But that paper about representation is not on that subject and is consistent with the view that semantics is not derivable from syntax.
walto,
Yes. The ‘impregnable barrier’ isn’t so impregnable after all. Semantics can emerge from pure syntax, and indeed must have done so during the process of evolution.
keiths:
Bruce:
My objection isn’t to the theory per se, but to the way you are describing it. To say that a neuron “decides to fire” or “wants to fire” under certain circumstances seems like an acceptable application of the intentional stance, but to say that it “wants to join coalitions”, etc., doesn’t.
And to be fair to Dennett, he doesn’t quite say what you are attributing to him. He says that neurons “form coalitions”, not that they “want to join coalitions”, and the difference is important.
Bruce,
That’s right. If the representation isn’t causally tied to its referent, then any “stands for” relation that obtains is purely coincidental and likely cannot be systematically exploited by evolution.
I actually don’t think that the history matters, as long as the “stands for” criterion is satisfied. The meaning of the word “dog” is the same to the Swamp Man and his predecessor, even though there is no causal link between actual dogs and the Swamp Man’s “dog” concept.
It’s just that the “stands for” criterion is overwhelmingly unlikely to be met if there isn’t a causal link. In any realistic evolutionary scenario, there will be such a link, and it is the (possibly less than perfect) reliability of that link that is being exploited by evolution.
Thanks. I like that strategy. It’s important to restrict the meaning of “emerge” there, because I’m guessing Searle, and many of those convinced by his argument, will have no problem agreeing that our ability to cogitate emerged from other things at some point during the evolutionary journey.
But maybe that can be accomplished. I will look at the Dretske books Bruce mentioned. Do you have other suggestions?
Thanks, that makes sense.
I have not read the book either, and was going more by the assumption that naturalization (as per the title) could only work if you invoked evolution somehow. I think Millikan (and Dennett) might do that more explicitly, at least from looking at various summaries of Dretske’s book online.
ETA: “Derivable” is a slippery word. If one means derivable by deduction, it seems right to me to say that you cannot logically derive meaning from structure. But if the explanation involves “functioning for an agent to achieve its goals in the world” then maybe that adds something not available from formal deduction.
ETA 2: And to finish my point properly, that type of functioning would “emerge” from evolution, assuming it increased the fitness of the organism.
Keith:
I think the philosophical subtleties start to bite when you add the restriction that the explanation of how representations work must also explain why we make mistakes. We think we see a dog in dim light but it is actually a wolf. If a representation is based on a reliable causal natural law (like tree rings representing age), then why would it fail at times?
I cannot do justice to all the philosophical arguments back and forth, but there are many philosophers who think the swamp man example combined with those philosophical nuances is a serious challenge to the Millikan/Dennett stuff I outlined.
The SEP article I linked goes into the details, if you are interested.
Dennett in Intuition Pumps dismisses the argument by saying history does matter, and the thought experiment is too divorced from reality to have force. Dretske talks about a “swamp-photo” in the paper of his I linked and suggests it is not a real photo. Similarly, a swamp man version of me might think he was me, but he would not be. Anyway, enough of that merry-go-round for now.
ETA: Wanted to mention that the swamp man argument probably does not work against a Star Trek transporter functioning normally, since there is still a causal history from Captain Kirk on the transporter deck to Captain Kirk on the planet’s surface. But if the transporter fails and we end up with two Captain Kirks, one still on the deck and one on the planet, then maybe at least one is a swamp man analog? Anyway, I think that example came up long ago in a thread far, far away…
Keith:
To be consistent with Dennett, I should have said “sorta want”, not simply “want”.
But as I read the whole chapter, Dennett does claim the intentional stance can be applied to neurons; that is, that the neurons’ behavior can be successfully modeled and predicted by assuming they have a limited form of agency.
(ETA: Is saying neurons “want to fire” a valid way of applying the intentional stance to neurons? I think it might be mixing the neuron and sub-neuron level).
Whether Dennett is an instrumentalist about that or whether he believes that the success of the model means the limited agency is a real property of neurons — well that is a different discussion.
keiths:
walto:
True. I should stress that I mean weak emergence, and that the result does not qualify as what Searle would call “original intentionality.”
I’ve found Dennett’s “two-bitser” thought experiment to be very useful. The original essay is here, and the concept shows up in his later works including Intuition Pumps.
Thanks.
FWIW, I found an online precis of the Dretske 95 book. Among the excerpts is page 7, paragraph 2, where he explicitly states his acceptance of the Millikan et al. approach and the role of evolution. However, he also says “more detail in chapter 5”, which I don’t have access to.
(The excerpt is a pdf image so I cannot post it here but you have the book….)
Now back to the regular show on the tautology that disproves 150 years of science. It’s amazing to me how this topic (and a few like it) can bring back the crowds. Although the posts on the history of the term are interesting.
Perhaps irrelevant, but my 18-month-old grandson is visiting. Our cat is in hiding, but the kid sees the food bowl and says, “kitty.” The pet water fountain is also kitty. He does this with other words, like mama.
I raised two children and don’t recall seeing this before.
Which of the two Dretske books are you referring to above?
Naturalizing the Mind, which the precis excerpts (PDF) say is ’97 but which was 1995 according to the SEP. Maybe the paperback is 1997.
Thx. I really like Dretske, but my view is based only on a bunch of his papers: I’ve never read any of his books, in spite of owning them all. He died just as I was finishing my Hall book, and I ended up not trying to contact the publisher of “Experience as Representation” for inclusion, but I think it would have made the book considerably better. His paper is maybe the clearest statement–though in a slightly more extreme form–of the representationism that I believe Hall is responsible for first expressing comprehensively (though I understand a couple of Medieval philosophers may have said something in that ballpark).
I think I’m always going to regret that lacuna.
A delayed response to this.
Does Searle really talk of “an impregnable barrier”?
I’ve been taking Searle’s argument to be that syntax and semantics are very different kinds of things, that there’s something like a category mistake (in Ryle’s terminology) involved in the idea of going from syntax to semantics. I wouldn’t think that “impregnable barrier” was the right way of describing that.
That seems about right to me.
I think walto was responding to this when he said (of Searle):
I’m not sure where walto gets that idea. I do not see anything syntactic about the behavior of bacteria.
Neil Rickert,
I don’t know, actually. Could we not simulate the activity of a bacterium?
BTW, I do think Searle would accept the notion of an “impregnable barrier” between syntax and semantics. You likely remember that he says a Chinese city is no closer than a Chinese room: no piling on of syntax ever gets you to semantics on his view. I guess another way of saying that is that they are different “categories.” That comes to much the same thing, I think.
As far as I understand Searle (which is not very far), the “you can’t get to semantics from syntax” is not a conclusion of his, but a premise.
The Chinese Room thought-experiment is designed to get us to accept the intuitiveness of that premise, rather than an argument which yields that assertion as a conclusion. If Searle has any actual arguments that give us “you can’t get to semantics from syntax” as a conclusion, I’m not aware of them.
Granted, I’m inclined to share his intuition — but that’s hardly a reason to think it’s correct. For all we know, it could be that a correct theory of semantics would show that it is constructible from syntax after all. If that theory turns out to be “counter-intuitive,” so much the worse for our “intuitions”!
Neil,
Not in those exact words, but he does say this:
Bruce,
Yes, but at a different level than the one you were suggesting.
You wrote:
Dennett agrees that neurons form coalitions, but he doesn’t say that they “want” or “sorta want” to join coalitions. His application of the intentional stance to neurons is more modest:
He explicitly states that they are ignorant of what goes on at the higher levels:
It makes sense. A neuron doesn’t “know” (or “sorta know”) that it’s part of a ganglion any more than a logic gate “knows” that it’s part of an ALU.
Yes, I agree he says that. But I think “impenetrable barrier” gives a wrong impression of his view. “Impenetrable barrier” suggests two things side by side, but separated by a barrier. But I think he is instead saying that the two concepts are orthogonal (or something like that).
keiths:
Bruce:
The causal relationship doesn’t always have to be reliable (or even to exist at all — consider unicorns). It’s just that evolution will favor it when it is.
We can establish an artificial chemical gradient in a petri dish and watch as the bacteria swim toward the nonexistent food. There is a “stands for” relation — the gradient stands for the food — but the food isn’t real. It’s an illusion created by the experimenter.
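The petri-dish scenario can be sketched in a few lines of code (purely illustrative; the gradient functions and the one-dimensional step rule are invented for this sketch, not a model of real chemotaxis):

```python
def chemotaxis_step(position, concentration, step=1.0):
    """Move toward higher concentration (run-and-tumble collapsed into
    simple gradient ascent). The bacterium 'reads' only the local
    gradient; it has no access to whether food actually exists."""
    left = concentration(position - step)
    right = concentration(position + step)
    return position + step if right > left else position - step

# Real food at x = 100 produces a gradient; the experimenter can
# produce an identical gradient with no food at all.
real_gradient = lambda x: -abs(x - 100)   # peak caused by actual food
fake_gradient = lambda x: -abs(x - 100)   # same peak, food absent

pos_real, pos_fake = 0.0, 0.0
for _ in range(200):
    pos_real = chemotaxis_step(pos_real, real_gradient)
    pos_fake = chemotaxis_step(pos_fake, fake_gradient)

# The trajectories are identical: the gradient "stands for" food
# whether or not any food exists.
print(pos_real, pos_fake)  # both end at the peak, x = 100
```

The point of the sketch is that nothing in the mechanism distinguishes the veridical case from the illusory one; the “stands for” relation is carried entirely by the gradient.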
Since I don’t insist on a causal linkage between representation and referent, I think I’m off the hook.
I think he’s copping out. Thought experiments needn’t be realistic to be effective. Twin Earth isn’t very realistic, but Dennett certainly acknowledges its importance. He even refers to his “two-bitser” intuition pump as “the poor man’s Twin Earth”!
Neil,
Shrug. I knew what walto was getting at, and now you do too.
As for how Searle thinks that original intentionality can arise without being reducible to syntax, I don’t know. I’ve ordered his book and I’ll let you all know what I learn.
I do remember reading somewhere that he thinks that desires, like hunger, are intrinsically about their objects, but if so, I don’t know how he justifies this.
I am not sure if you plan to pursue this right now, but if you do, I’d be interested in your thoughts on a disagreement between Dennett and Dretske.
In that page 7 paragraph I noted, Dretske states he agrees with Millikan but that Dennett does not agree with him (Dretske). That confused me since Dennett seems to be in close agreement with Millikan.
Dretske cites Dennett’s Evolution, Error and Intentionality (a book chapter, per Dretske’s citation) as the source of the disagreement with Dennett.
As I read it, Dennett says there that Dretske believes in original intentionality and that is their source of disagreement.
If you do decide to look further at the Dretske stuff, I’d be interested in your thoughts on this issue. Is Dennett right about Dretske’s views on intentionality?
Keith:
I agree that evolution is important.
It’s the details of explaining how it works, how semantics “emerges”, that need to be addressed. As part of doing so, one has to explain why mental representations can be unreliable but still increase fitness somehow. One also needs to address the disjunction problem.
I am not saying these are unsolvable issues, only that the arguments these issues raise need to be confronted.
Keith:
I’ve always found the term “sorta” to be a bit vague — maybe Dennett needs it to be that way to work in all the circumstances he uses it.
On page 97 of Intuition Pumps he says
“We use the intentional stance to keep track of the beliefs and desires (or ‘beliefs’ and ‘desires’ or sorta beliefs and sorta desires) of the (sorta-) rational agents at every level, from the simplest bacterium through all the discriminating, signaling, comparing, remembering circuits that comprise the brains of animals from starfish to astronomers.”
Based on my reading of that, I think applying “sorta” to neurons is compatible with the agency Dennett attributes to them. Having a “sorta desire” is a sorta want.
I do agree the subpersonal level does not “know” about the personal level and that is true for all sublevels of the subpersonal. I understand that to be a key point of his model. You do have to stay within the level when applying the intentional stance.
How do you understand “sorta”?
If fitness is defined by reproduction, beer goggles can increase fitness.
I have not read all the papers, but if the history outlined in the SEP CR article is accurate, Searle switches the intuition he is appealing to. In later papers, it is claimed that
My suspicion is that many people reviewing the CR experiment mix “knowing meaning” with the “awareness that one knows meaning”. The two are different and I think one can have the first without the second (eg in animals). Or even the second without the first (temporarily anyway, as in — I know I know your name, just give me a moment…)
I’m inclined to say that semantics doesn’t emerge. It is there from the get-go. Or, if you like, it is an aspect of homeostasis.
It is syntax that emerges. In its own way, syntax depends on semantics. That is, it depends on a very narrowly constrained semantics of syntax.
Hmm, perhaps it’s because I’m a mathematician that I see it that way.
From my point of view, a computer does syntax only in the sense of derived intentionality. We take the computer operations to be syntactic, but they are really electromagnetic and the idea that they are syntactic is an interpretation that we find it useful to impose on the computer.
I don’t understand how you are using either “syntax” or “derived,” Neil. The question that Searle and his critics seem to me to be interested in is whether all the formation rules for some language can produce a single reference or designation rule. Can one who knows no Chinese, e.g., come to understand what a single Chinese word means, by memorizing a Chinese dictionary or are the interchange rules provided by a dictionary a kind of “closed loop” that don’t get you a single smidge of meaning?
I’ll try to read the Dennett/Haugeland paper as well as “Evolution, Error, and Intentionality” over the next few days and give my impression. I glanced at the Dretske book and there are only a couple of brief references to Dennett there, so I don’t know if I’ll be able to get a sense of exactly how he responds to Dennett’s critique. But if I can suss it out, I’ll report on that too.
I’ll also try to catch up on the papers being discussed here, since I actually do find these issues about semantics and syntax central to my concerns. I’ve been a bit busy lately with finishing the book (!), grading, and going back on the tenure-track job market.
Uh-oh. I want to warn everyone in advance that I can’t read nearly as fast as KN.
🙁
I am not sure of the exact parameters of the CR, but I think it has to go beyond memorizing a dictionary, or even solely memorizing anything.
Why can’t the questions include personal questions? For example, how big is your family? Where did you go to school? When did you lose your virginity?
So there would have to be a fully constructed person embedded somehow in that “dictionary”.
Further, why can’t the questions refer to past conversation? Like, “Did my question about losing your virginity bother you? How so?”
Once you allow for that, I think you start to see the reasoning behind the systems reply. Although I also think an agent needs the ability to act in its own interests in the world to fully justify attributing semantics to it.
I don’t understand your post, Bruce. I take it that the Chinese Room story is about someone who is given the definitions of every Chinese word in Chinese and manages to learn them all. So he’s basically got an entire Chinese dictionary in his head. We may also give him all the rules of Chinese grammar so he can put the words he “knows” into WFFs.
Then the question comes, does such a person actually understand any Chinese at all? Searle says NO: one can’t get any semantics (meanings) no matter how much syntax one has mastered.
So, I don’t understand what you are saying above about personal questions.
Good luck on your job hunt, KN.
I came across this PhD thesis, Beyond Folk Psychology, that may interest you. The author tries to reply to objections of modern phenomenological philosophers who don’t accept psychological approaches to understanding social interaction. He uses his version of Dennett’s personal/subpersonal division to help do so.
I’ve only read the first couple of chapters, which are introductory and would likely be of little interest to you. But possibly the remainder might be, since the thesis seems to cover very roughly similar ground to some of the topics you have mentioned in this forum that interest you.
BTW, good reply to WJM in the other thread on the tautology in evolution. That’s the best approach to dealing with him, I agree. I learn things from Joe F and Steve S when they take time to reply to the ID supporters in that thread, but I am mystified by their motivations in repeatedly doing so, especially in TSZ where the audience is so limited and all of them have no doubt made up their minds.
It’s been a while since I read the paper, but I thought the person in the room had to answer questions posed in Chinese and reply in Chinese as well. So he as a person could not understand the questions or the answers. (ETA: “Learn” might be taken to imply he understands the meaning of words, so I would avoid that word.)
Hence he would not be able to use his personal experience to answer the questions. Instead, valid answers would have to be based solely on him slavishly manipulating symbols.
Further, a static dictionary/book of rules would not work since the questions could refer to the past history of this particular series of questions.
The system reply to Searle says that understanding resides in the virtual entity consisting of the person in the room, the book of rules, and the actions in using the rules to reply to questions. (But I am not sure how the experiment as I recall it address the need for dynamic memory of the conversation.)
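The dynamic-memory worry can be made concrete with a toy symbol manipulator (the rules and tokens below are hypothetical stand-ins, not anything from Searle’s paper): it keeps a transcript of past inputs and can “answer” questions about the conversation’s history purely by copying uninterpreted tokens around.

```python
class Room:
    """A toy Chinese-Room-style responder: a rulebook plus a
    transcript (the dynamic memory a static rulebook lacks).
    It answers solely by pattern-matching on uninterpreted tokens."""
    def __init__(self, rules):
        self.rules = rules        # list of (pattern, response template)
        self.transcript = []      # past input strings, verbatim

    def respond(self, symbols):
        self.transcript.append(symbols)
        for pattern, template in self.rules:
            if pattern in symbols:
                # {PREV} splices in the previous input, letting the
                # system refer back to the conversation without
                # 'understanding' a single token.
                prev = self.transcript[-2] if len(self.transcript) > 1 else ""
                return template.replace("{PREV}", prev)
        return "???"

rules = [("Q1", "A1"), ("REPEAT", "you said: {PREV}")]
room = Room(rules)
print(room.respond("Q1 xyz"))      # -> "A1"
print(room.respond("REPEAT now"))  # -> "you said: Q1 xyz"
```

The systems reply would locate any understanding in the whole ensemble (rules plus transcript plus the matching process), not in the rule-follower alone.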
BTW, there are similar concerns with couching the argument in terms of syntax versus semantics. Syntax is static, abstract structure. But computers are dynamic, causal mechanisms.
Further, as Neil points out, we supply the ones and zeros and the syntactic rules that computers are supposedly following. In fact, computers are electronic machines that operate according to the rules of physics. If you think of syntax as inert rules, then that alone is not an appropriate characterization of a computer. There’s lots more in this vein in the SEP article on the Chinese Room.
Syntax: formal expressions that are required to strictly follow rules of form.
As for why I say the computer has only derived intentionality with respect to syntax, that’s because the computer knows nothing of formal structure and does not understand the rules of form. It produces what we see as formal structure and follows what we see as rules of form only because its mechanisms do not permit it to do otherwise.
I just mean the rules of form (or, I guess the ‘formal structures’ if those are rules of form). Not sure what your def means.
For those that have not seen the cartoon, here is Dan Dennett’s favorite reply to Searle’s argument.
Neil:
I don’t think it’s because you’re a mathematician. I think it’s because you’re Neil. 🙂
Neil:
Bruce:
Neil and Bruce,
You’re both misunderstanding the meaning of ‘syntactic’ in the context of these discussions. The level of 1s and 0s is not the syntactic level of a computer. Physics is.
That can be confusing, because in other contexts the manipulation of 1s and 0s according to formal rules would count as ‘syntax’, but here, the distinction is between systems that take meaning into account (aka ‘semantic engines’) versus those that don’t (‘syntactic engines’).
Here’s Dennett (from chapter 31 of Intuition Pumps):
At the level of physics, the operation of the computer is semantic. It operates on electrical charges or currents according to their real world properties. So that’s completely semantic.
What we have done, in designing computers, is harness those natural semantic actions so that they represent the syntactic actions of computation.
What the brain does is harness the naturally semantic actions of biochemistry, so that they represent other kinds of semantics about other parts of the world.
Neil,
Not by any definition of ‘semantic’ I’ve ever seen (other than your idiosyncratic definition, of course).
The computer operates on charges and currents, of course, but the operation depends in no way on the meanings (if any) assigned to those charges and currents. It’s not semantic.
If you don’t believe me, consult the dictionaries.
I perhaps should have used “proto-semantic”, but it gets tedious having to do that all the time. The implications of the context should be sufficient.
Neil,
“Proto-semantic” doesn’t work any better. The word you’re looking for is “syntactic”. 🙂
A flip-flop is in a stable state. The electrical flows are what keep it in that stable state. From the point of view of the stable process, the electrical flows can be reasonably said to be meaningful, though not in a conscious sense.
The signal that triggers a change of state is admittedly different, a kind of external interference. So I don’t suggest anything semantic about that.
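The flip-flop’s stable state can be sketched as a pair of cross-coupled NOR gates (a standard SR latch, simplified here for illustration): the update rule depends only on signal levels, never on what those levels are taken to mean.

```python
def nor(a, b):
    """NOR gate on 0/1 levels."""
    return 0 if (a or b) else 1

def sr_latch(s, r, q, qbar, iterations=4):
    """Iterate the cross-coupled NOR equations until the latch
    settles. Nothing in the update consults an interpretation of
    the levels; the stability is purely a fact about the dynamics."""
    for _ in range(iterations):
        q, qbar = nor(r, qbar), nor(s, q)
    return q, qbar

# Pulse Set, then release both inputs: the latch holds state 1.
q, qbar = sr_latch(s=1, r=0, q=0, qbar=0)
q, qbar = sr_latch(s=0, r=0, q=q, qbar=qbar)
print(q, qbar)  # -> 1 0
```

Whether one calls the settled state “meaningful” or merely “stable” is exactly the point in dispute above; the code itself is neutral between the two readings.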