The ‘Hard Problem’ of Intentionality

I’m starting a new thread to discuss what I call “the hard problem of intentionality”: what is intentionality, and to what extent can intentionality be reconciled with “naturalism” (however narrowly or loosely construed)?

Here’s my most recent attempt to address these issues:

McDowell writes:

Consider this passage from Dennett, Consciousness Explained, p. 41: “Dualism, the idea that the brain cannot be a thinking thing so a thinking thing cannot be a brain, is tempting for a variety of reasons, but we must resist temptation . . . Somehow the brain must be the mind”. But a brain cannot be a thinking thing (it is, as Dennett himself remarks, just a syntactic engine). Dualism resides not in the perfectly correct thought that a brain is not a thinking thing, but in postulating some thing immaterial to be the thinking thing that the brain is not, instead of realizing that the thinking thing is the rational animal. Dennett can be comfortable with the thought that the brain must be the mind, in combination with his own awareness that the brain is just a syntactic engine, only because he thinks that in the sense in which the brain is not really a thinking thing, nothing is: the status of possessor of intentional states is conferred by adoption of the intentional stance towards it, and that is no more correct for animals than for brains, or indeed thermostats. But this is a gratuitous addition to the real insight embodied in the invocation of the intentional stance. Rational animals genuinely are “semantic engines”. (“Naturalism in Philosophy of Mind,” 2004)

Elsewhere McDowell has implied that non-rational animals are also semantic engines, and I think this is a view he ought to endorse more forthrightly and boldly than he has. But brains are, of course, syntactic engines.

So it seems quite clear to me that one of the following has to be the case:

(1) neurocomputational processes (‘syntax’) are necessary and sufficient for intentional content (‘semantics’) [Churchland];
(2) intentional content is a convenient fiction for re-describing what can also be described as neurocomputational processes [Dennett] (in which case there really aren’t minds at all; here one could easily push on Dennett’s views to motivate eliminativism);
(3) neurocomputational processes are necessary but not sufficient for intentional content; the brain is merely a syntactic engine, whereas the rational animal is a semantic engine; the rational animal, and not the brain, is the thinking thing; the brain of a rational animal is not the rational animal, since it is a part of the whole and not the whole [McDowell].

I find myself strongly attracted to all three views, actually, but I think that (3) is slightly preferable to (1) and (2). My worry with (1) is that I don’t find Churchland’s response to Searle entirely persuasive (even though I find Searle’s own views completely unhelpful). Is syntax necessary and sufficient for semantics? Searle takes it for granted that this is obviously and intuitively false. In response, Churchland says, “Maybe it’s true! We’ll have to see how the cognitive neuroscience turns out — maybe it’s our intuition that’s false!” Well, sure. But unless I’m missing something really important, we’re not yet at a point in our understanding of the brain where we can understand how semantics emerges from syntax.

My objection to (2) is quite different — I think that the concept of intentionality plays far too central a role in our ordinary self-understanding for us to throw it under the bus as a mere convenient fiction. Of course, our ordinary self-understanding is hardly sacrosanct; we will have to revise it in the future in light of new scientific discoveries, just as we have in the past. But there is a limit to how much revision is conceivable, because if we jettison the very concept of rational agency, we will lose our grip on our ability to understand what science itself is and why it is worth doing. Our ability to do science at all, and to make sense of what we are doing when we do science, presupposes the notion of rational agency, hence intentionality, and abandoning that concept due to modern science would effectively mean that science has shown that we do not know what science is. That would be a fascinating step in the evolution of consciousness, but I’m not sure it’s one I’m prepared to take.

So that leaves (3), or something like it, as the contender: we must, on the one hand, retain the mere sanity that we (and other animals) are semantic engines, bearers of intentional content; on the other hand, we accept that our brains are syntactic engines, running parallel neurocomputational processes. This entails that the mind is not the brain after all, but also that rejecting mind-brain identity offers no succor to dualism.

Neil Rickert’s response is here, followed by Petrushka’s here.

334 thoughts on “The ‘Hard Problem’ of Intentionality”

  1. Neil put one point this way: “Searle has no supporting argument. His CR argument at most shows that syntactic processing can be done without reference to semantics.” I’m not even that charitable to Searle. Searle’s “Chinese room” thought-experiment relies as a premise on the claim that syntax is insufficient for semantics. He doesn’t argue for it; he argues from it! This is why the Churchlands are perfectly right to call him out on this in their reply.

    However, Neil disagrees with my acceptance of Dennett’s claim that brains are just syntactic engines. I’ll look at Dennett’s Intentional Stance when I get home and post some materials that explicate what he means by that claim. In the meantime, I’d like to hear more about (a) whether neurophysiological processes are, or are best modeled as, computational processes and (b) why, if they are computational, that would not count as syntactical.

  2. Regarding option two and Daniel Dennett, I was amused by Putnam’s remark:

    Thus it is that in the closing decades of the twentieth century we have intelligent philosophers claiming that intentionality itself is something we project by taking a ‘stance’ to some parts of the world…as if ‘taking a stance’ were not itself an intentional notion!

    From Representation and Reality, pages 15-16.

  3. KN,

    So that leaves (3), or something like it, as the contender: we must, on the one hand, retain the mere sanity that we (and other animals) are semantic engines, bearers of intentional content; on the other hand, we accept that our brains are syntactic engines, running parallel neurocomputational processes.

    If a person’s brain is merely a “syntactic engine”, but the person as a whole is a “semantic engine”, then what critical ingredient(s), when added to a brain, bestow semantic enginehood on the aggregate?

    We can also go in the other direction. A whole person is a “semantic engine”. A person missing an arm or a leg is still a “semantic engine”. What part(s) of the person must be removed in order for the remainder to be merely a “syntactic engine”?

  4. Can anyone present a good reason why intentionality cannot be observed in lesser brains, and indeed even at the level of reflex and tropism? And even in evolution itself?

    It strikes me as a bit like the imaginary line between microevolution and macroevolution. At what magic point does a learning system qualify as intentional?

  5. petrushka: At what magic point does a learning system qualify as intentional?

    Is learning intentional? I view evolutionary processes as passive. Organisms tumble into the niches that happen to be available. (That could be projection of my passive lifestyle! Just going to enjoy a glass of merlot on “la terrasse” watching the sunset!) 😉

  6. Alan Fox: Is learning intentional? I view evolutionary processes as passive. Organisms tumble into the niches that happen to be available. (That could be projection of my passive lifestyle! Just going to enjoy a glass of merlot on “la terrasse” watching the sunset!)

    I’m asking where you draw the magic line. Evolution is not intentional, but implements intentionality. That’s the view from where I sit.

    What you observe is systems (populations) adapting to the environment. Where is the magic line where you require an homunculus to implement intentionality?

  7. Let me try to understand you:

    By syntactic or neurocomputational process you mean ordering data.

    By semantic you mean giving meaning to the data.

    If that is correct, I think that you still have not reached intentionality.

    Giving meaning to data is intelligence; intentionality means will, and to have will you need to have a goal.

    Another question: what makes intelligence or will possible in a whole animal but not possible for a brain?

  8. Thanks for starting this topic.

    Kantian Naturalist: I’m not even that charitable to Searle. Searle’s “Chinese room” thought-experiment relies as a premise on the claim that syntax is insufficient for semantics. He doesn’t argue for it; he argues from it!

    I don’t think that’s quite right. Searle’s formal argument does, indeed, start with that premise. But the thought experiment does not depend on a premise, as best I can tell.

    The thought experiment illustrates that you can do the syntactic processing (follow the rules of the computer program) with no knowledge of the semantics. To most computer scientists and mathematicians, this is a trivial, well-known point that hardly requires a thought experiment. However, people whose background is not in computer science seem to find the thought experiment very persuasive.
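
    To make that point concrete, here is a minimal sketch (the symbol strings are invented placeholders, not anything from Searle’s paper): the “operator” below matches input strings against a rule book and returns whatever the rules dictate, with no access anywhere in the program to what the symbols mean.

```python
# Rule book pairing uninterpreted input strings with output strings.
# The strings are made-up placeholders; their "meaning" plays no role.
RULE_BOOK = {
    "SYMBOL-17 SYMBOL-3": "SYMBOL-42",
    "SYMBOL-8 SYMBOL-1": "SYMBOL-5 SYMBOL-11",
}

def room_operator(input_symbols: str) -> str:
    """Follow the rules by shape-matching alone; no semantics involved."""
    return RULE_BOOK.get(input_symbols, "SYMBOL-0")  # default reply

print(room_operator("SYMBOL-8 SYMBOL-1"))  # prints "SYMBOL-5 SYMBOL-11"
```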

    In the meantime, I’d like to hear more about (a) whether neurophysiological processes are, or are best modeled as, computational processes and (b) why, if they are computational, that would not count as syntactical.

    Perhaps a little background on me.

    In elementary school, I was already questioning the English grammar that we were taught. I was not questioning it to the teachers. I was questioning it to myself. It seemed to me that it did not fit very well, and that language was really driven by semantics (or meanings) rather than by rules of grammar. To a first approximation, I had rejected Chomsky’s theory of language several years before he came up with it.

    My view of mathematics was probably underdeveloped at that time (i.e. in elementary school). By the time that I was in high school, it was clear that I saw geometry as the heart and soul of mathematics.

    To a first approximation, I see logic as the syntactic engine of mathematics, and geometry as the semantic engine of mathematics. And the geometry is, by far, the more important.

    Back to neurophysiological processes. No, I do not see them as doing computation. I see them as doing geometry (“geometry” as “measuring the world”). In particular, I see the core operations as related to categorization. And categorization (dividing up the world) is a geometric activity. So I see the brain as a semantic engine.

  9. Alan,

    Is learning intentional? I view evolutionary processes as passive.

    ‘Intentionality’, annoyingly, is not about ‘intention’ in the sense of ‘intending to do something’. It’s philosophical jargon for ‘aboutness’ and has nothing to do with the active/passive distinction.

    A book has intentionality because it is about its topic. Thoughts have intentionality because they are about their referents. A flugelhorn lacks intentionality because it isn’t about anything; it just is.

  10. I was questioning it to myself. It seemed to me that it did not fit very well, and that language was really driven by semantics (or meanings) rather than by rules of grammar. To a first approximation, I had rejected Chomsky’s theory of language several years before he came up with it.

    I can’t recall being that precocious, but I remember thinking Chomsky was bullshit the first time I heard about it, and for the same reason.

    Brains are about what to do. Meaning is about what action is needed. You can try to shove language into the grammar box, but most of the meaning leaks out.

  11. Are learning and evolution passive?

    I think it depends on the level of abstraction. Populations are modified as a result of differential reproductive success. Learning systems are modified as a result of differential feedback. At one level of analysis, the process is passive.

    But non-passive implies an homunculus, an inner demon doing the active intending.

    Where in the evolution of brains does the intentional demon get implanted?

  12. petrushka: Are learning and evolution passive?

    In my opinion, no. That’s partly why I say that I am not a Darwinist.

    I see both learning and evolution as opportunistic. They detect opportunities, and then actively exploit those opportunities.

    In the case of evolution, a randomly acquired mutation possibly enables a small population to thrive (that’s the opportunistic part). That is followed by explosive growth to fill the niche (that’s the active part). And this may well be seen in the fossil record, as a case of punk-eek.

  13. I’m not disagreeing with that. I just want to know — following the path of increasing behavioral complexity — where we draw the line between learning and opportunism and intentionality.

    I don’t see a line.

  14. Neil Rickert: It seemed to me that it did not fit very well, and that language was really driven by semantics (or meanings) rather than by rules of grammar.

    Indeed. Grammar was invented to help non-native speakers learn Latin. Language evolves and grammarians struggle to keep up.

  15. keiths: ‘Intentionality’, annoyingly, is not about ‘intention’ in the sense of ‘intending to do something’. It’s philosophical jargon for ‘aboutness’ and has nothing to do with the active/passive distinction.

    I’m not a fan of philosophical jargon, as you may already suspect. Thanks for clarifying. (Though I’m more confused now!)

  16. petrushka:
    I’m not disagreeing with that. I just want to know — following the path of increasing behavioral complexity — where we draw the line between learning and opportunism and intentionality.

    I don’t see a line.

    Neither do I. Does an individual E. coli bacillus intend to maintain itself in optimum nutrient concentration by “tumble and run”?

  17. How intentional are brains when cooled to about 15 Celsius (59 Fahrenheit) or heated to 42 Celsius (108 Fahrenheit)?

  18. Okay, I can see a possible line.

    Intention requires a system sophisticated enough to make predictions about consequences and engage in anticipatory behavior.

    Shapiro makes a claim about evolution that shades into intentionality. He claims that living things can produce smart mutations that are more likely than random to be adaptive.

    Regardless of whether this is correct, it suggests an operational definition of intention, one that does not require a ghost in the machine.

    Machines can have intention by this definition.
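
    A toy sketch of that operational sense (the goal, the candidate actions, and the one-step forward model below are all invented for illustration): the controller predicts the consequence of each action it could take and picks the one whose predicted outcome comes closest to its goal, i.e. it behaves anticipatorily.

```python
# Toy "anticipatory" controller: it chooses actions by predicting their
# consequences with a simple internal model. The goal, the actions, and
# the one-step model are invented numbers, not anything from the thread.
GOAL_TEMP = 20.0
ACTIONS = {"heat": +2.0, "cool": -2.0, "wait": 0.0}  # predicted change per step

def predict(temp: float, action: str) -> float:
    """Internal forward model: the temperature expected after the action."""
    return temp + ACTIONS[action]

def choose_action(temp: float) -> str:
    """Pick the action whose *predicted* outcome lies closest to the goal."""
    return min(ACTIONS, key=lambda a: abs(predict(temp, a) - GOAL_TEMP))

print(choose_action(17.0))  # -> "heat"
```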

    Perhaps the part that creates the first level of confusion is that brains experience emotions, and emotions are associated with intention.

  19. Mike Elzinga:
    How intentional are brains when cooled to about 15 Celsius (59 Fahrenheit) or heated to 42 Celsius (108 Fahrenheit)?

    Your rationale is: as there is no intentionality without a working brain, the brain explains intentionality. Isn’t that it?

  20. Non sequitur.

    Saying the brain is necessary for intentionality is not the same thing as an explanation.

  21. Neil Rickert: Back to neurophysiological processes. No, I do not see them as doing computation. I see them as doing geometry (“geometry” as “measuring the world”). In particular, I see the core operations as related to categorization. And categorization (dividing up the world) is a geometric activity. So I see the brain as a semantic engine.

    That’s really interesting! I’m at the absolute limit of my knowledge of philosophy of neuroscience, but I want to know more! And it seems fairly obvious, when you put it that way. So now I really want to know why Dennett thinks otherwise, if he does.

  22. Another set of questions:

    Does intentionality differ for a single ant as compared with an ant colony? The same question would apply to bees.

    And by analogy, does intentionality apply to a single cell or to an entire network of cells? Does the “topology” of a network have anything to do with intentionality?

    How do other phenomena – e.g., temperature, drugs, anoxia, nitrogen narcosis, etc. – affect the intentionality of a brain?

    It appears that neurophysiology and network topology – not to mention direct interaction with an external environment – are inescapable parts of the question. Maybe “intentionality” is not the word one should be trying to attach to such a system.

  23. keiths: ‘Intentionality’, annoyingly, is not about ‘intention’ in the sense of ‘intending to do something’. It’s philosophical jargon for ‘aboutness’ and has nothing to do with the active/passive distinction.

    A book has intentionality because it is about its topic. Thoughts have intentionality because they are about their referents. A flugelhorn lacks intentionality because it isn’t about anything; it just is.

    Right! One addition: some philosophers (Brentano, Chisholm, Searle) think there’s a distinction to make between “derived intentionality” and “original intentionality”. (Dennett’s whole schtick is to deny the usefulness of this distinction!) So, we’d say that the sentences in the book have derived intentionality because what makes them assertions, and not just marks on paper, is the thoughts that someone intended to convey with them. Whereas thoughts (whatever they are!) aren’t derived from anything else — our own minds have original intentionality.

    The account of intentionality I’m developing posits two different kinds of “original intentionality”: discursive intentionality, which originates in the linguistic community as a whole, and somatic intentionality, which originates in the basic structures of embodied behavior. We have to have both in the picture in order to get the ‘triangulation’ between a plurality of subjects and the objects we confront — here I’m basically agreeing with Davidson’s argument that subjectivity, intersubjectivity, and objectivity are interdependent concepts, and my major disagreement with him is on the technical issue of whether this process can be described in a wholly extensional language.

    But enough of my ramblings . . .

  24. petrushka:

    Intention requires a system sophisticated enough to make predictions about consequences and engage in anticipatory behavior.

    By contrast, the philosophical concept of (original) intentionality only requires that a system mentally interprets patterns semantically. Intentionality requires the ability to find meaning. “frog” has no intrinsic meaning. It is marks on paper or pixels on a screen. Intentionality enables us to understand that “frog” denotes a frog. If we do whatever we do when we think of a cat, it is intentionality that lets us understand we are thinking about a cat. A system that reacts to a cat but does not think about it as something (and I don’t think that has to be a cat; it could be as crude as ‘danger’) does not have intentionality.

    Now, whether this philosophical language restricted to the mental is ultimately scientifically useful or not I don’t know, but it does seem that at least for now, we require the concept of intentionality to capture something important.

    In Searle’s Chinese Room thought experiment, it is contended that the system doesn’t have intentionality. It doesn’t understand Chinese, it doesn’t know what the symbols are about.

    An idle thought has long amused me: If no progress is made over an extended period of time, will it become widely accepted that systems such as the Chinese Room, even though we may not ascribe consciousness to them, have an analogue of intentionality?

  25. davehooke:
    An idle thought has long amused me: If no progress is made over an extended period of time, will it become widely accepted that systems such as the Chinese Room, even though we may not ascribe consciousness to them, have an analogue of intentionality?

    Related to this, I find that in a 1980 paper McCarthy argues that thermostats have beliefs. Specifically, “the room is too hot”, “the room is too cold”, “the room is OK.”

    I think KN alluded to this in the ‘Knowledge over Faith’ thread.
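
    For what it is worth, here is a toy sketch of how little machinery the McCarthy-style ascription requires (the setpoint and margin are invented numbers): a device with three discrete states that, from the intentional stance, can be glossed as the three “beliefs” Dave lists. Whether the gloss is apt is, of course, exactly the question at issue.

```python
# Toy thermostat with three discrete states. The code only shows how
# little machinery the ascription requires; whether calling these
# states "beliefs" is apt is exactly what is at issue.
def thermostat_state(room_temp_c: float,
                     setpoint_c: float = 20.0,  # assumed setpoint
                     margin_c: float = 1.0) -> str:
    if room_temp_c > setpoint_c + margin_c:
        return "the room is too hot"
    if room_temp_c < setpoint_c - margin_c:
        return "the room is too cold"
    return "the room is OK"

print(thermostat_state(23.5))  # -> "the room is too hot"
```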

  26. Philosophical intentionality

    I know KN does not like to be pressed for definitions, but whatever it is that “intentionality” in a philosophical context is supposed to be about, I am curious why no one considered that choosing a word already in common usage could confuse a layman.

    Internet Encyclopedia of Philosophy on Husserl and Intentionality

    Hmm!

    For any intentional mental event it would make no sense to speak of it as involving an act without an intentional object any more than it would to say that the event involved an intentional object but no act or way of attending to that object (no intentional act).

    Whither philosophical intentionality? No such thing as a mental event in my book.

  27. Alan Fox:
    Philosophical intentionality

    I know KN does not like to be pressed for definitions, but whatever it is that “intentionality” in a philosophical context is supposed to be about, I am curious why no one considered that choosing a word already in common usage could confuse a layman.

    Pretty much every specialist field has terms that are used in everyday language differently e.g. “spin” in physics, even “cell” in biology.

  28. Alan Fox: I know KN does not like to be pressed for definitions, but whatever it is that “intentionality” in a philosophical context is supposed to be about, I am curious why no one considered that choosing a word already in common usage could confuse a layman.

    The sad fact is that professional philosophy relies upon a specialized jargon for both good and bad reasons. It does facilitate communication between specialists, but it also raises a barrier for non-specialists to figure out what’s going on. And that’s not good.

    One way in which we do this is by using words with Greek and Latin roots in lieu of Anglo-Saxon equivalents.

    (That this has the effect of constructing jargon is due to an interesting feature of English: because English is a hybrid of the Old French spoken by Norman elites and the Old English spoken by the Anglo-Saxons they conquered, Latinate and Greek English words carry a whiff of elitism and the upper class compared to their Germanic correlates — compare “feces” with “shit”, “illuminate” with “light up”, “economy” with “business”.)

    So, “intentionality” comes from the Latin intentio, which is just a perfectly good Latin word for what we call, in our uncouth Germanic tongue, “aboutness”. It sounds like jargon in part because Latinate words in general sound like jargon to English speakers. (You can thank/blame William the Conqueror for that.)

    For those who are interested in Greek philosophy, I highly recommend Joe Sachs’ translations of Aristotle — he translates from the ancient Greek directly to modern English, creating neologisms exactly when Aristotle does, and completely bypasses the entire Scholastic terminology of “essence,” “substance”, “potentiality,” and “actuality” that makes Aristotle so frustrating to read.

  29. The problem isn’t that it sounds like jargon. The problem is it sounds like the wrong word.

  30. KN,

    I’d still be interested in hearing your responses to the questions I raised earlier in the thread:

    If a person’s brain is merely a “syntactic engine”, but the person as a whole is a “semantic engine”, then what critical ingredient(s), when added to a brain, bestow semantic enginehood on the aggregate?

    We can also go in the other direction. A whole person is a “semantic engine”. A person missing an arm or a leg is still a “semantic engine”. What part(s) of the person must be removed in order for the remainder to be merely a “syntactic engine”?

  31. keiths:
    KN,

    If a person’s brain is merely a “syntactic engine”, but the person as a whole is a “semantic engine”, then what critical ingredient(s), when added to a brain, bestow semantic enginehood on the aggregate?

    We can also go in the other direction. A whole person is a “semantic engine”. A person missing an arm or a leg is still a “semantic engine”. What part(s) of the person must be removed in order for the remainder to be merely a “syntactic engine”?

    Could a brain in a vat plus a communication system communicate with us? Perhaps, if it has a body map, even without a body. However, there are proponents of ‘whole body consciousness’. I can only think of the linguistic term, deixis, but something like that, an awareness of the self in contextual relation to the environment (perhaps at minimum in space and time) might be required for a “semantic engine”.

  32. Neil:

    Back to neurophysiological processes. No, I do not see them as doing computation. I see them as doing geometry (“geometry” as “measuring the world”). In particular, I see the core operations as related to categorization. And categorization (dividing up the world) is a geometric activity.

    KN:

    And it seems fairly obvious, when you put it that way. So now I really want to know why Dennett thinks otherwise, if he does.

    I think categorization is a kind of computation, and I suspect Dennett would agree.
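
    A deliberately simple sketch of why the two descriptions need not conflict (the categories and prototype points below are invented): a nearest-prototype rule partitions a feature space into regions, which is the geometric picture, and assigning an item to a region is itself a computation.

```python
import math

# Invented prototype points in a two-dimensional feature space.
PROTOTYPES = {
    "edible":   (0.8, 0.2),
    "inedible": (0.1, 0.9),
}

def categorize(point):
    """Assign a point to the category whose prototype is nearest (Euclidean).
    The decision boundary this induces is a geometric partition of the space."""
    return min(PROTOTYPES, key=lambda name: math.dist(point, PROTOTYPES[name]))

print(categorize((0.7, 0.3)))  # -> "edible"
```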

  33. Far be it from me to know what I am talking about when it comes to philosophy, but…

    The people who have asked about where “intentionality” arises evolutionarily seem to be onto something.

    I remember a time when, as a child, I was swimming in a pond in the countryside, alone. A large horsefly came after me. I was being hunted, and it sure felt to me as if that horsefly intended to bite me.

    So, is that intentionality? Or was only my determination to stay underwater long enough for the horsefly to go away true intensionality?

  34. davehooke:

    I can only think of the linguistic term, deixis, but something like that, an awareness of the self in contextual relation to the environment (perhaps at minimum in space and time) might be required for a “semantic engine”.

    Suppose I am kidnapped today and my brain is surgically removed and envatted, with no link to the outside world. I will continue to have thoughts, at least until I go crazy, and my thoughts will continue to have meaning. My brain, by itself, will still be a “semantic engine”.

  35. Joe Felsenstein:

    The people who have asked about where “intentionality” arises evolutionarily seem to be onto something.

    I remember a time when, as a child, I was swimming in a pond in the countryside, alone. A large horsefly came after me. I was being hunted, and it sure felt to me as if that horsefly intended to bite me.

    So, is that intentionality? Or was only my determination to stay underwater long enough for the horsefly to go away true intensionality?

    To confuse you further, intenSionality is a different philosophical concept.

  36. keiths:
    davehooke:

    Suppose I am kidnapped today and my brain is surgically removed and envatted, with no link to the outside world. I will continue to have thoughts, at least until I go crazy, and my thoughts will continue to have meaning. My brain, by itself, will still be a “semantic engine”.

    It seems to me that this is an empirical matter. I don’t think you can say for sure you would have any or all of sentience, consciousness, and intentionality. As far as I know, no-one has enough of a handle on any of these things to make accurate predictions.

  37. Well, once again we seem to be having a discussion on terms that are so vague we cannot reason effectively about them. “Aboutness”? What does it mean? Suppose I discover a book in a language I don’t know. How can I know if it has “intentionality” or not? What units do we measure “intentionality” in? Is it a black-and-white property, or can some things have more “aboutness” than others? Until these kinds of questions get good and precise answers, how can we say *anything* meaningful about intentionality at all?

    Here’s a very specific example. Imagine there is some mechanism in nature that records some specific yearly information. Call this, I don’t know, let’s make up a word, say “varve”. Assume these varves somehow encode things like temperatures in the past, perhaps by how thick they are. Do the varves have intentionality or not?

  38. Dave,

    Another thought experiment. Suppose that my brain remains happily ensconced in my skull, but that there are electronic blocking devices on all of the nerves leading into my brain. Someone throws a switch, and my brain is cut off from all stimuli for five minutes. Then the blocking devices turn off and everything is normal again.

    Do you really doubt that I would continue to think during those five minutes, and that my thoughts would have meaning?

  39. Dave,

    Also, Keith, a formerly embodied brain might be able to think, for a while, but could a brain that never had a body?

    I think so, but I’m not sure it really matters to the question at hand. KN was arguing that a brain by itself could only be a syntactic engine, and the envatting of my brain provides a counterexample, unless you think that an envatted brain can no longer have meaningful thoughts.

  40. Five minutes is too brief. Sensory deprivation leads to hallucinations. A person born with all sensory input blocked would never think.

    I think.

  41. shallit,

    Those are good questions to think about, plus the philosophical literature on intentionality is vast and varied. Enjoy!

  42. petrushka,

    Five minutes is too brief.

    Why? For those five minutes, the brain is on its own, by itself. If it continues to think meaningful thoughts during that time then it is a “semantic engine”, contradicting KN’s assertion.

  43. keiths:
    Dave,

    I think so, but I’m not sure it really matters to the question at hand. KN was arguing that a brain by itself could only be a syntactic engine, and the envatting of my brain provides a counterexample, unless you think that an envatted brain can no longer have meaningful thoughts.

    I think it matters, because it is the difference between feedback from the body and no such feedback at all.

    I like your experiment though. Has anyone ever done anything like that (is what I want to know)? By “anyone”, I mean the Nazis, most likely.

  44. Dave,

    I think it matters, because it is the difference between feedback from the body and no such feedback at all.

    Keep in mind that the philosophical question applies to any syntactic engine, not just human (or human-like) brains. Even if you think that human brains somehow rely on continual somatic feedback, it doesn’t mean that all syntactic engines would, or that the somatic feedback would somehow be essential to semantic processing.

    I like your experiment though. Has anyone ever done anything like that (is what I want to know)? By “anyone”, I mean the Nazis, most likely.

    If the technology had been there, I’m sure they would have.

    I’ve always thought that the Wada test is creepy, though usually done for noble reasons.

  45. shallit:
    Suppose I discover a book in a language I don’t know. How can I know if it has “intentionality” or not? What units do we measure “intentionality” in? Is it a black-and-white property, or can some things have more “aboutness” than others? Until these kinds of questions get good and precise answers, how can we say *anything* meaningful about intentionality at all?

    Depends whether you think philosophy can say anything meaningful.

    On the book specifically, whether your knowledge is relevant depends on whether we should be realists about intentionality or not.

  46. What about the entire field of anesthesiology? When a patient is put under anesthetics, say, for a major heart operation, blood is circulated using a machine.

    The brain gets no signals from the rest of the body; nor does it process any signals. Nothing is remembered about the operation afterward, even though events before the operation are remembered.

    During surgery, one’s brain is essentially a brain in a vat.

  47. Mike,

    During surgery, one’s brain is essentially a brain in a vat.

    Except that the brain itself is anesthetized. The interesting philosophical cases arise when the brain is not anesthetized but is cut off from stimuli.
