The ‘Hard Problem’ of Intentionality

I’m starting a new thread to discuss what I call “the hard problem of intentionality”: what is intentionality, and to what extent can intentionality be reconciled with “naturalism” (however narrowly or loosely construed)?

Here’s my most recent attempt to address these issues:

McDowell writes:

Consider this passage from Dennett, Consciousness Explained, p. 41: “Dualism, the idea that the brain cannot be a thinking thing so a thinking thing cannot be a brain, is tempting for a variety of reasons, but we must resist temptation . . . Somehow the brain must be the mind”. But a brain cannot be a thinking thing (it is, as Dennett himself remarks, just a syntactic engine). Dualism resides not in the perfectly correct thought that a brain is not a thinking thing, but in postulating some thing immaterial to be the thinking thing that the brain is not, instead of realizing that the thinking thing is the rational animal. Dennett can be comfortable with the thought that the brain must be the mind, in combination with his own awareness that the brain is just a syntactic engine, only because he thinks that in the sense in which the brain is not really a thinking thing, nothing is: the status of possessor of intentional states is conferred by adoption of the intentional stance towards it, and that is no more correct for animals than for brains, or indeed thermostats. But this is a gratuitous addition to the real insight embodied in the invocation of the intentional stance. Rational animals genuinely are “semantic engines”. (“Naturalism in Philosophy of Mind,” 2004)

Elsewhere McDowell has implied that non-rational animals are also semantic engines, and I think this is a view he ought to endorse more forthrightly and boldly than he has. But brains are, of course, syntactic engines.

So it seems quite clear to me that one of the following has to be the case:

(1) neurocomputational processes (‘syntax’) are necessary and sufficient for intentional content (‘semantics’) [Churchland];
(2) intentional content is a convenient fiction for re-describing what can also be described as neurocomputational processes [Dennett] (in which case there really aren’t minds at all; here one could easily push on Dennett’s views to motivate eliminativism);
(3) neurocomputational processes are necessary but not sufficient for intentional content; the brain is merely a syntactic engine, whereas the rational animal is a semantic engine; the rational animal, and not the brain, is the thinking thing; the brain of a rational animal is not the rational animal, since it is a part of the whole and not the whole [McDowell].

I find myself strongly attracted to all three views, actually, but I think that (3) is slightly preferable to (1) and (2). My worry with (1) is that I don’t find Churchland’s response to Searle entirely persuasive (even though I find Searle’s own views completely unhelpful). Is syntax necessary and sufficient for semantics? Searle takes it for granted that this is obviously and intuitively false. In response, Churchland says, “maybe it’s true! we’ll have to see how the cognitive neuroscience turns out — maybe it’s our intuition that’s false!”. Well, sure. But unless I’m missing something really important, we’re not yet at a point in our understanding of the brain where we can understand how semantics emerges from syntax.

My objection to (2) is quite different — I think that the concept of intentionality plays far too central a role in our ordinary self-understanding for us to throw it under the bus as a mere convenient fiction. Of course, our ordinary self-understanding is hardly sacrosanct; we will have to revise it in the future in light of new scientific discoveries, just as we have in the past. But there is a limit to how much revision is conceivable, because if we jettison the very concept of rational agency, we will lose our grip on our ability to understand what science itself is and why it is worth doing. Our ability to do science at all, and to make sense of what we are doing when we do science, presupposes the notion of rational agency, hence intentionality, and abandoning that concept due to modern science would effectively mean that science has shown that we do not know what science is. That would be a fascinating step in the evolution of consciousness, but I’m not sure it’s one I’m prepared to take.

So that leaves (3), or something like it, as the contender: we must, on the one hand, retain the sane view that we (and other animals) are semantic engines, bearers of intentional content; on the other hand, we must accept that our brains are syntactic engines, running parallel neurocomputational processes. This entails that the mind is not the brain after all, but also that rejecting mind-brain identity offers no succor to dualism.

Neil Rickert’s response is here, followed by Petrushka’s here.


334 thoughts on “The ‘Hard Problem’ of Intentionality”

  1. Blas,

    Yes? You have evidence that something doesn’t exist? Are you on the list for the Nobel?

    No counterargument, eh?

  2. The so-called “causal interaction” problem is, I think, an insurmountable objection to dualism.

    The dualist says, “in order to explain mental phenomena, we should posit an immaterial entity as the cause of those phenomena.” Now, there’s nothing wrong with positing entities in order to explain phenomena — on the contrary, that’s the very hallmark of scientific theorizing. But now things go off the rails for the dualist very quickly because, to return to a refrain I’ve often repeated here, not all posits are created equal.

    For a posit to be a good one, the model in which the posited entity occurs must also account for our cognitive access to that entity. If there were immaterial souls, how could we know that there are?

    Dualists usually appeal to introspection, but this is a non-starter — introspection at best allows us to describe accurately the mental phenomena that need to be explained; in other words, introspection gives us the explanandum but not the explanans. (Descartes’s confusion on this essential point has wreaked much havoc in the history of philosophy.)

    So, if not introspection, then we need some third-person, objective account of immaterial souls. But this is where the causal interaction problem kicks in with a deadly vengeance. For the entire account turns on causal traffic between the material and the immaterial, and yet we cannot explain how anything material could causally affect anything immaterial, or conversely.

    We understand pretty well what causation is when it comes to one physical thing causally affecting another physical thing, but how can anything physical causally affect anything non-physical — something that has no spatial properties, no mass, no volume, that cannot be described by any laws of physics? Yet the dualist is committed to asserting that the most basic cognitive acts — perceiving and acting — involve precisely this causal influence of the material on the immaterial. And this means that the dualist is committed to holding that the most basic cognitive acts, the epistemological foundation of all empirical knowledge, are completely unintelligible and essentially magical.

    It should be clear by now that since we cannot even coherently conceive of what the causal influence of the material on the immaterial could be like, we cannot conceive of how the posited immaterial entity could be empirically detected.

    And since introspection is useless — providing us with a description of the explanandum but offering us no explanans, no way of telling us whether our mental phenomena are caused by something immaterial or material — dualism cannot be vindicated in either first-person or third-person terms. It can’t be vindicated as either subjective knowledge or as objective knowledge. It’s just a myth.

  3. This passage (from Dennett’s Intuition Pumps and Other Tools for Thinking) nicely encapsulates the syntax vs. semantics issue:

    How can meaning make a difference? It doesn’t seem to be the kind of physical property, like temperature or mass or chemical composition, that could cause anything to happen. What brains are for is extracting meaning from the flux of energy impinging on their sense organs, in order to improve the prospects of the bodies that house them and provide their energy. The job of a brain is to “produce future” in the form of anticipations about the things in the world that matter to guide the body in appropriate ways. Brains are energetically very expensive organs, and if they can’t do this important job well, they aren’t earning their keep. Brains, in other words, are supposed to be semantic engines. What brains are made of is kazillions of molecular pieces that interact according to the strict laws of physics and chemistry, responding to shapes and forces; brains, in other words, are in fact only syntactic engines.

    Imagine going to the engineers and asking them to build you a genuine-dollar-bill-discriminator, or, what amounts to the same thing, a counterfeit-detector: its specs are that it should put all the genuine dollars in one pile and all the counterfeits in another. Not possible, say the engineers; whatever we build can respond only to “syntactic” properties: physical details — the thickness and chemical composition of the paper, the shapes and colors of the ink patterns, the presence or absence of other hard-to-fake physical properties. What they can build, they say, is a pretty good but not foolproof counterfeit-detector based on such “syntactic” properties. It will be expensive, but indirectly and imperfectly it will test for counterfeithood well enough to earn its keep.

    Any configuration of brain parts is subject to the same limitations. It will be caused by physicochemical forces to do whatever it does regardless of what the input means (or only sorta means). Don’t make the mistake of imagining that brains, being alive, or made of proteins instead of silicon and metal, can detect meanings directly, thanks to the wonder tissue in them. Physics will always trump meaning. A genuine semantic engine, responding directly to meanings, is like a perpetual motion machine — physically impossible. So how can brains accomplish their appointed task? By being syntactic engines that track or mimic the competence of the impossible semantic engine.
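
    To make the engineers’ point concrete, here is a minimal sketch (the features and thresholds are my own inventions for illustration, not Dennett’s). Nothing in it responds to genuineness itself, only to physical correlates of genuineness, so a physically perfect fake necessarily passes:

        from dataclasses import dataclass

        @dataclass
        class Bill:
            paper_thickness_mm: float   # a measurable physical property
            ink_pattern_score: float    # 0.0 to 1.0 similarity to a reference scan
            has_security_thread: bool   # another detectable physical feature

        def looks_genuine(bill: Bill) -> bool:
            """Approximate genuineness by testing physical proxies only."""
            return (0.10 <= bill.paper_thickness_mm <= 0.12
                    and bill.ink_pattern_score > 0.95
                    and bill.has_security_thread)

        real_bill = Bill(0.11, 0.97, True)
        perfect_fake = Bill(0.11, 0.97, True)  # physically indistinguishable
        print(looks_genuine(real_bill), looks_genuine(perfect_fake))  # True True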

  4. keiths:
    Alan,

    We’ve moved on, and the discussion has proceeded quite nicely despite leaving Shallit’s questions unanswered, which kind of proves my point. :-)

    Ignoring Professor Shallit’s comment would have been a more eloquent non-response.

    As to “moving on”, I’m not really seeing that, though that may be due to deafness to the music of the spheres on my part. I am still sceptical about the utility of talking about “intentionality” or “about-ness” as if it named some property, phenomenon or entailment that was in some way coherent or, in the interim, apparently useful.

    ETA too much usefulness

  5. keiths,

    Thanks for that passage! I really should get that book!

    At least it helps clarify what Dennett means by “syntactical” in this context. It’s probably not the best term he could have chosen — the important thing is that brains, being themselves physical, can only reliably detect physical properties and features. And so, since “meaning” doesn’t appear to be anything physical, brains can’t detect (or generate) it.

    But, in light of McDowell’s criticism of Dennett, it becomes clear that Dennett is relying on some suppressed premises. Fully explicated, his view is more vulnerable. This is not quite Dennett’s own view, but a view that’s an amalgam of what he says and what Alex Rosenberg says — in any event, it’s a useful foil for me:

    (1) a semantic engine is anything that bears or possesses intentional content;
    (2) intentional content must be propositional content;
    (3) if there are any semantic engines in nature, they must be brains;
    (4) but neurophysiological processes do not encode information in sentential or propositional form;
    (5) therefore brains cannot have intentional content;
    (6) therefore brains cannot be semantic engines;
    (7) therefore there are no semantic engines in nature.

    — although, Dennett insists, there are semantic engine mimics, and brains are very good at doing that!

    I think that premises (1) and (4) are exactly right, but that both (2) and (3) are false. I think that the semantic engines in nature are “the higher animals”, all of whom have intentional content, and only a few of whom have propositional content. Non-propositional intentional content lies in the whole complicated spatially and temporally extended dynamic of organism-environment interactions — not in any one part of it, as premise (3) requires — and propositional intentional content is similarly spatially and temporally extended, though not across the organism/environment interaction but rather across the linguistic community.

  6. Alan,

    Verbal communication is useful because it means something. Words are about other things. They have intentionality.

    How is that useless or incoherent?

  7. keiths,

    keiths:
    Alan,

    Verbal communication is useful because it means something. Words are about other things. They have intentionality.

    How is that useless or incoherent?

    Only in the redundancy of “they have intentionality”. What does that statement add?

  8. Kantian Naturalist: [quoting Daniel Dennett] (1) a semantic engine is anything that bears or possesses intentional content;

    So presumably, intentionality is something possessed by a semantic engine? And brains are or are not semantic engines? Are insect brains very simple semantic engines or are we restricting ourselves to human attributes?

  9. davehooke: “Intentionality” is a noun. Kind of useful for discussion.

    Well, that’s the point at issue. How can it be useful to talk about “intentionality” as “about-ness” without attempting to clarify what you mean when you use the term? I am not trying to be obtuse and I don’t think I am utterly stupid.

  10. Alan,

    Only in the redundancy of “they have intentionality”. What does that statement add?

    Nothing. If it did add something, then the statements wouldn’t be equivalent.

    In the same way, “it is transparent” adds nothing to the idea “you can see clearly through it”. It isn’t supposed to add anything, but the word “transparent” is nevertheless quite useful.

  11. “Intentionality” isn’t an explanation of what words have that make them about something — that would be a ‘dormitive virtue’-style explanation, i.e. a non-explanation — intentionality is just a re-description of what we are already saying when we say that words are about things (and other words). But it is a useful re-description, once you get used to the lingo.

    Alan Fox: So presumably, intentionality is something possessed by a semantic engine? And brains are or are not semantic engines? Are insect brains very simple semantic engines or are we restricting ourselves to human attributes?

    Right, I’m using the concept of “intentionality” to explicate the concept of “semantic engine” — something counts as a semantic engine if it has intentional content.

    The other question — are brains semantic engines? — is the very thing we’ve been arguing about! Keiths and I have been defending Dennett’s claim that they are not; Neil has been arguing that brains are semantic engines after all.

    I just don’t know what to say about insects. I feel more comfortable saying that frogs are very simple semantic engines.

  12. keiths: This passage (from Dennett’s Intuition Pumps and Other Tools for Thinking) nicely encapsulates the syntax vs. semantics issue:

    I’m inclined to say that it is nonsense — or perhaps I should say that it is in Dennett speak rather than in English.

    What brains are for is extracting meaning from the flux of energy impinging on their sense organs, …

    There is no meaning in that flux of energy, so there is no meaning to extract.

    …, in order to improve the prospects of the bodies that house them and provide their energy.

    The meaning is already there in having prospects. Meaning comes from within, not from without.

    Moving along

    Not possible, say the engineers; whatever we build can respond only to “syntactic” properties: physical details — the thickness and chemical composition of the paper, the shapes and colors of the ink patterns, the presence or absence of other hard-to-fake physical properties.

    That’s a misuse of “syntactic”. The appropriate word there would have been “physical”, not “syntactic”. I guess that’s why you misunderstood my earlier comments about syntax.

  13. keiths:
    Alan,

    Nothing. If it did add something, then the statements wouldn’t be equivalent.

    In the same way, “it is transparent” adds nothing to the idea “you can see clearly through glass”. It isn’t supposed to add anything, but the word “transparent” is nevertheless quite useful.

    Words come and go depending on their usefulness in context. “Phlogiston” is a lovely word, for example.

    All well and good but I must still be missing something. Is the point just trivial? Philosophical intentionality is an attempt to categorize something, is it?

  14. Kantian Naturalist: This is not quite Dennett’s own view, but a view that’s an amalgam of what he says and what Alex Rosenberg says — in any event, it’s a useful foil for me:

    (1) a semantic engine is anything that bears or possesses intentional content;
    (2) intentional content must be propositional content;
    (3) if there are any semantic engines in nature, they must be brains;
    (4) but neurophysiological processes do not encode information in sentential or propositional form;
    (5) therefore brains cannot have intentional content;
    (6) therefore brains cannot be semantic engines;
    (7) therefore there are no semantic engines in nature.

    Why must intentional content be propositional? I see the argument as failing at that point. If I enjoy the beauty of nature, is that not intentional content? Yet I don’t see that it is propositional.

    I’m also doubtful of (3). And (4) seems to miss the point. Meaning is not something that is encoded into propositions. While Wittgenstein’s “meaning is use” might not be perfect, it does get at the point that there isn’t an encoding.

  15. I think part of my problem is remembered frustration over what semiosis is and isn’t.

    Coming back to syntax and semantics, which to me are linguistic terms referring to the structure and meaning of words in a sentence. Obviously there is broader usage here. Are we attempting to discuss brain function and how neuron firing and synapse forming and breaking could result in the gamut of human thought?

  16. Alan,

    All well and good but I must still be missing something. Is the point just trivial? Philosophical intentionality is an attempt to categorize something, is it?

    The choice of the word ‘intentionality’ is unimportant. The question is how we can explain intentionality if minds are purely physical. How to get semantics from mere syntax, in other words.

  17. Alan Fox: Philosophical intentionality is an attempt to categorize something, is it?

    “Intentionality” is useful for discussing matters that philosophers want to discuss. It may be that you have no interest in discussing them. It is not a scientist’s term.

  18. Alan,

    Coming back to syntax and semantics which to me are linguistic terms referring to structure and meaning of words in a sentence. Obviously there is broader usage here. Are we attempting to discuss brain function and how neuron firing and synapse forming and breaking could result in the gamut of human thought?

    Yes. More generally, we’re talking about how any physical system can come to be about something else. How does a pattern of neural firings, or the state of a computer, come to mean anything? A logic gate doesn’t know what its inputs or outputs mean, and neither does a neuron. They both work purely mechanically. How is it that a massive arrangement of them can transcend mere syntax and ascend to semantics?
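
    Here is a minimal sketch of what “purely mechanically” means (a toy example of my own, not a model of real neurons): both functions below map inputs to outputs with no reference to what, if anything, the signals are about.

        def nand(a: int, b: int) -> int:
            """A logic gate: maps input bits to an output bit, mechanically."""
            return 0 if (a and b) else 1

        def toy_neuron(inputs, weights, threshold):
            """A caricature of a neuron: fire iff the weighted input sum clears a threshold."""
            return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

        # Whether these numbers encode anything about the world makes
        # no difference whatsoever to the computation:
        print(nand(1, 1))                                # 0
        print(toy_neuron([0.9, 0.2], [1.0, -0.5], 0.5))  # 1 (0.9 - 0.1 >= 0.5)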

  19. Kantian Naturalist: Keiths and I have been defending Dennett’s claim that they are not; Neil has been arguing that brains are semantic engines after all.

    I just don’t know what to say about insects. I feel more comfortable saying that frogs are very simple semantic engines.

    That strongly suggests an evolutionary development pathway from simpler nervous systems to more complex. But if you agree with Dennett, then a frog is a simple syntactic engine, no?

  20. keiths: How is it that a massive arrangement of them can transcend mere syntax and ascend to semantics?

    Well, why didn’t you say so before! Trivial issue: what do syntax and semantics imply in this context? Interesting issue: how do brains work?

  21. Neil Rickert: “Intentionality” is useful for discussing matters that philosophers want to discuss. It may be that you have no interest in discussing them. It is not a scientist’s term.

    I get that, Neil. I have been reading your comments and they are intriguing. I am trying to broaden my outlook but I can’t tear myself away from evidence first.

  22. keiths: How does a pattern of neural firings, or the state of a computer, come to mean anything? A logic gate doesn’t know what its inputs or outputs mean, and neither does a neuron. They both work purely mechanically. How is it that a massive arrangement of them can transcend mere syntax and ascend to semantics?

    But we already know from evidence that there is not a problem. We observe human physiology and we observe human semantic output. Therefore neuron firing etc. can result in the ceiling of the Sistine Chapel. We just don’t understand the process in any detail. We can chip away at it productively, though.

  23. Alan,

    Trivial issue: what do syntax and semantics imply in this context? Interesting issue: how do brains work?

    Not quite. Here’s the real issue: the workings of a brain can be described in entirely syntactic terms. If so, then where does meaning come from? How can it have any causal power? How can an assemblage of purely syntactic neurons come to possess intentionality?

  24. keiths:
    Alan,

    Not quite. Here’s the real issue: the workings of a brain can be described in entirely syntactic terms. If so, then where does meaning come from? How can it have any causal power? How can an assemblage of purely syntactic neurons come to possess intentionality?

    Silly answer: because we observe it happening. Where is the barrier? I can visualize an evolutionary pathway for the brain and nervous system.

  25. Neil:

    I’m inclined to say that it is nonsense — or perhaps I should say that it is in Dennett speak rather than in English.

    I think the real problem is that it is in English rather than Neil-speak. 🙂

    There is no meaning in that flux of energy, so there is no meaning to extract.

    Only in Neil-speak. In English, the following is perfectly permissible:

    The swallows are coming back. That means that spring has arrived.

    Neil:

    The meaning is already there in having prospects. Meaning comes from within, not from without.

    No, because the appropriate action depends on the meaning of the “energy flux”. If the incoming photons tell me that a grizzly is approaching, my response will be quite different than if the energy flux indicates that someone is offering me a cronut.

    That’s a misuse of “syntactic”. The appropriate word there would have been “physical”, not “syntactic”. I guess that’s why you misunderstood my earlier comments about syntax.

    Again, your restriction applies only to Neil-speak, not to English.

    You’ll get more out of these discussions if you try to discern what people actually mean (heh) rather than what their statements would mean in Neil-speak.

  26. keiths: Only in Neil-speak. In English, the following is perfectly permissible:

    The swallows are coming back. That means that spring has arrived.

    Of course it is. But the meaning is not in the energy flux.

    Meaning is subjective, not objective. If the meaning were in the energy flow, it would necessarily be objective.

  27. Perhaps a more fruitful line of approach would be to recast the entire discussion in the form of a research proposal.

    That would highlight any paralysis about the meanings of words. If one can’t even begin to imagine how to test one’s ideas, that usually means it is time to rethink them.

  28. Neil,

    The sight of my cat, the feel of her fur against my leg, the sound of her polite meow, all convey the fact that she is here. They mean that she is here. Without the sensory input, I wouldn’t know of her proximity.

    The sensory inputs impart information. They are meaningful.

  29. Kantian Naturalist:
    I just don’t know what to say about insects. I feel more comfortable saying that frogs are very simple semantic engines.

    I’ve been looking for an excuse to bring up bee dancing!
    Now that we understand it, we can see that it is “about” the pollen the dancer found.

    Does it have aboutness for bees? It seems so, which implies bees have “original” intentionality, if there is such a thing.

  30. Alan Fox: How can it be useful to talk about “intentionality” as “about-ness” without attempting to clarify what you mean when you use the term?

    What do you think philosophers do?

  31. Mike Elzinga:

    That would highlight any paralysis about the meanings of words. If one can’t even begin to imagine how to test one’s ideas, that usually means it is time to rethink them.

    I’m assuming you mean testing in the sense of scientific experimentation.

    I can understand that as a criterion for scientific discussion; are you suggesting it should also be a criterion for philosophical discussion?

    If so, are the testing processes the same in both disciplines?

    I am not a philosopher, but it seems to me there is a role for philosophy in pre-scientific analysis of concepts, which would not involve scientific testing.

    Of course, if such ideas are part of the domain of science, then they would eventually have to be settled by science.

    One possible example would be the idea of functionalism in philosophy of mind which has become a paradigm for some of cognitive science. For example, work with artificial neural networks might be considered as in that paradigm.

  32. Mike Elzinga: If one can’t even begin to imagine how to test one’s ideas, that usually means it is time to rethink them.

    Hence philosophy.

    If anyone wants to start a “What has philosophy ever done for us?” thread, they should go ahead.

  33. Certainly the dance is about the distance and direction of the pollen, but it is not about it for the bees.

    I take this to mean that we will need to have a more nuanced account of degrees of intentionality. Muller defends this in his essay I referred to above; I’ll have more to say after I finish reading it.

  34. davehooke: If anyone wants to start a “What has philosophy ever done for us?” thread, they should go ahead.

    Indeed — apart from developing the concepts of nature, science, democracy, capitalism, justice, rights, knowledge, thought, art, and beauty, what has philosophy ever done for us?

  35. keiths: Here’s the real issue: the workings of a brain can be described in entirely syntactic terms. If so, then where does meaning come from? How can it have any causal power? How can an assemblage of purely syntactic neurons come to possess intentionality?

    That’s how Dennett frames it, yes. But, for one thing, the semantics/syntax language has caused some problems. Presumably Dennett had in mind that all that brains do is compute, and all computation is syntactical.

    But perhaps a better way of putting it would be to replace “syntactical” with “bio-physical” here:

    Here’s the real issue: the workings of a brain can be described in entirely bio-physical terms. If so, then where does meaning come from? How can it have any causal power? How can an assemblage of purely bio-physical neurons come to possess intentionality?

    My sense is that there really isn’t any way of understanding how intentional content is instantiated in neurocomputational processes. The trick, which McDowell seems to pull off, is how to accept this idea without offering aid and comfort to dualism. But McDowell’s view is also not entirely acceptable, because he balks at the hard project of trying to build a theory of how neurocomputational processes play a necessary but not sufficient role in a dynamic, organism/environment and organism/organism relation that does instantiate intentional content.

  36. Nice essay, K.N.

    I think Amie Thomasson does a very nice job handling the causal closure problem. See her paper “A Nonreductivist Solution to Mental Causation” and/or her book _Ordinary Objects_.

    W

  37. Kantian Naturalist: Indeed — apart from developing the concepts of nature, science, democracy, capitalism, justice, rights, knowledge, thought, art, and beauty, what has philosophy ever done for us?

    Well, I don’t think I can concede such a broad claim. I assert that knowledge, thought, beauty and art precede philosophy as a discipline. Maybe that’s a subject for another thread. I did toy with the subject of sexual selection and the evolution of art and language in Homo spp.

  38. walto,

    Hello! And thank you for those references! I’ve read a bit of Thomasson — she has a really nice essay on phenomenology as ordinary-language philosophy that I liked, and I’ve looked here and there at Phenomenology and Philosophy of Mind. It’s nice to see another philosopher here — I hope you stick around and contribute!

  39. Kantian Naturalist:

    I take this to mean that we will need to have a more nuanced account of degrees of intentionality. Muller defends this in his essay I referred to above; I’ll have more to say after I finish reading it.

    I’ll look forward to an explanation of what “degrees of intentionality” could mean.

    Perhaps this will also address the concern with measuring intentionality raised by another poster?

  40. Talking with really smart, interesting people who aren’t trained as professional philosophers makes me a better philosopher. It also keeps my prose style lively and accessible and makes me a better teacher in the classroom.

  41. Kantian Naturalist: That’s how Dennett frames it, yes. But, for one thing, the semantics/syntax language has caused some problems. Presumably Dennett had in mind that all that brains do is compute, and all computation is syntactical.

    My reading of that section of “Intuition Pumps” is that Dennett is arguing against original intentionality. He is taking a “semantic engine” to mean something that possesses original intentionality.

    Here is my understanding of his argument that there is no need for original intentionality.

    He starts with the two-bitser example. This slug detector is meant to illustrate how one needs to know the purpose of the designer of an artifact in order to understand the meaning of its states (or inferential results, which I think would be equivalent for these purposes). In particular, we can only know that a state represents a fake coin by understanding the purpose of the designer. That is the standard definition of derived intentionality — it is derived from the purposes of the designer.
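
    To make that concrete, here is a toy rendering of the two-bitser (my own sketch; the coin measurements are approximate and the acceptance test is invented for illustration):

        def accepts(diameter_mm: float, mass_g: float) -> bool:
            """The whole device: accept anything inside a fixed physical envelope."""
            return abs(diameter_mm - 24.26) < 0.1 and abs(mass_g - 5.67) < 0.1

        # A US quarter and a Panamanian quarter-balboa are physically near-identical,
        # so the device lands in the same accept state for both:
        print(accepts(24.26, 5.67), accepts(24.26, 5.67))  # True True

        # What that accept state represents ("quarter" or "quarter-balboa") is fixed
        # by the purpose of whoever deployed the device, not by anything inside it:
        deployed_in = "Panama"
        print("quarter-balboa" if deployed_in == "Panama" else "US quarter")

    Nothing inside the device fixes which coin its accept state is about; that is exactly what “derived” means here.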

    He moves on to consider a robot designed to act as a guardian while its designer is preserved cryogenically. He points out that the designer would have to equip the robot to preserve itself by seeking the resources it needs to run, by dealing with competition, and by flexibly adapting to a changing environment.

    He then invites you to conclude that the robot would possess intentionality, and that there is no principled difference between the “derived” intentionality of the robot and the original intentionality of an organism. He pushes the point home by referring to Dawkins’s selfish genes and asking whether it makes sense to consider them the designers from which our own intentionality is, consequently, derived.

    In other words, the designer in the case of the organism would be evolution, and there is no need for “original” intentionality or for a true semantic engine that would embody such intentionality; the approximation we have been given by evolution is good enough.

    He does not consider degrees of intentionality in the book; perhaps that concept would shed a different light on the original versus derived dichotomy.

  42. BruceS:

    I’ve been looking for an excuse to bring up bee dancing!

    I’ve been looking for an excuse to do some bee dancing. Anyone want to do the figure 8 with me? I can get some pollen from the health food store.

  43. BruceS,

    My reading of that section of “Intuition Pumps” is that Dennett is arguing against original intentionality. He is taking a “semantic engine” to mean something that possesses original intentionality.

    That’s my reading also. It becomes clear with an example he introduces later in the book (which is unfortunately too long and involved to reproduce here).

    I think of it as being similar to his ideas about free will. You can’t have libertarian free will, he says, but here is a compatibilist version of free will that is “a variety of free will worth wanting”. Likewise, you can’t have original intentionality, but the kind of intentionality that evolution has bestowed upon us does its job admirably.

  44. Mike,

    If one can’t even begin to imagine how to test one’s ideas, that usually means it is time to rethink them.

    You can test your ideas by 1) probing for logical inconsistencies, 2) running thought experiments, and 3) comparing your ideas against reality through observation and experimentation.

    I think that original intentionality doesn’t exist because a) I don’t see how it could be instantiated in a physical system, b) I don’t see how you could distinguish it from its lesser imitators, either internally or externally, and c) I see some logical problems with the idea of a physical system or arrangement truly being about something else.

    Perhaps a more fruitful line of approach would be to recast the entire discussion in the form of a research proposal.

    Research proposals aren’t necessary, but careful thinking is.

  45. keiths:

    I think of it as being similar to his [Dennett] ideas about free will. You can’t have libertarian free will, he says, but here is a compatibilist version of free will that is “a variety of free will worth wanting”. Likewise, you can’t have original intentionality, but the kind of intentionality that evolution has bestowed upon us does its job admirably.

    Yeah, well Dennett has been seen in pretty bad company!
