The ‘Hard Problem’ of Intentionality

I’m starting a new thread to discuss what I call “the hard problem of intentionality”: what is intentionality, and to what extent can intentionality be reconciled with “naturalism” (however narrowly or loosely construed)?

Here’s my most recent attempt to address these issues:

McDowell writes:

Consider this passage from Dennett, Consciousness Explained, p. 41: “Dualism, the idea that the brain cannot be a thinking thing so a thinking thing cannot be a brain, is tempting for a variety of reasons, but we must resist temptation . . . Somehow the brain must be the mind”. But a brain cannot be a thinking thing (it is, as Dennett himself remarks, just a syntactic engine). Dualism resides not in the perfectly correct thought that a brain is not a thinking thing, but in postulating some thing immaterial to be the thinking thing that the brain is not, instead of realizing that the thinking thing is the rational animal. Dennett can be comfortable with the thought that the brain must be the mind, in combination with his own awareness that the brain is just a syntactic engine, only because he thinks that in the sense in which the brain is not really a thinking thing, nothing is: the status of possessor of intentional states is conferred by adoption of the intentional stance towards it, and that is no more correct for animals than for brains, or indeed thermostats. But this is a gratuitous addition to the real insight embodied in the invocation of the intentional stance. Rational animals genuinely are “semantic engines”. (“Naturalism in Philosophy of Mind,” 2004)

Elsewhere McDowell has implied that non-rational animals are also semantic engines, and I think this is a view he ought to endorse more forthrightly and boldly than he has. But brains are, of course, syntactic engines.

So it seems quite clear to me that one of the following has to be the case:

(1) neurocomputational processes (‘syntax’) are necessary and sufficient for intentional content (‘semantics’) [Churchland];
(2) intentional content is a convenient fiction for re-describing what can also be described as neurocomputational processes [Dennett] (in which case there really aren’t minds at all; here one could easily push on Dennett’s views to motivate eliminativism);
(3) neurocomputational processes are necessary but not sufficient for intentional content; the brain is merely a syntactic engine, whereas the rational animal is a semantic engine; the rational animal, and not the brain, is the thinking thing; the brain of a rational animal is not the rational animal, since it is a part of the whole and not the whole [McDowell].

I find myself strongly attracted to all three views, actually, but I think that (3) is slightly preferable to (1) and (2). My worry with (1) is that I don’t find Churchland’s response to Searle entirely persuasive (even though I find Searle’s own views completely unhelpful). Is syntax necessary and sufficient for semantics? Searle takes it for granted that this is obviously and intuitively false. In response, Churchland says, “maybe it’s true! we’ll have to see how the cognitive neuroscience turns out — maybe it’s our intuition that’s false!”. Well, sure. But unless I’m missing something really important, we’re not yet at a point in our understanding of the brain where we can understand how semantics emerges from syntax.

My objection to (2) is quite different — I think that the concept of intentionality plays far too central a role in our ordinary self-understanding for us to throw it under the bus as a mere convenient fiction. Of course, our ordinary self-understanding is hardly sacrosanct; we will have to revise it in the future in light of new scientific discoveries, just as we have in the past. But there is a limit to how much revision is conceivable, because if we jettison the very concept of rational agency, we will lose our grip on our ability to understand what science itself is and why it is worth doing. Our ability to do science at all, and to make sense of what we are doing when we do science, presupposes the notion of rational agency, hence intentionality, and abandoning that concept due to modern science would effectively mean that science has shown that we do not know what science is. That would be a fascinating step in the evolution of consciousness, but I’m not sure it’s one I’m prepared to take.

So that leaves (3), or something like it, as the contender: we must, on the one hand, retain the mere sanity of recognizing that we (and other animals) are semantic engines, bearers of intentional content; on the other hand, we accept that our brains are syntactic engines, running parallel neurocomputational processes. This entails that the mind is not the brain after all, but also that rejecting mind-brain identity offers no succor to dualism.

Neil Rickert’s response is here, followed by Petrushka’s here.

334 thoughts on “The ‘Hard Problem’ of Intentionality”

  1. llanitedave,

    I’ve never looked at the Google definition or the Wikipedia entry on intentionality, and maybe I should…

    In this case it’s a good idea, because the philosophical meaning of the term is so different from its everyday meaning.

  2. keiths:
    llanitedave,

    In this case it’s a good idea, because the philosophical meaning of the term is so different from its everyday meaning.

    It’s the utilitarian meaning that I’m interested in. It may be a “hard” problem if you only consider the philosophical meaning, but then, philosophical meanings are intended to be hard. That’s why I prefer a working definition.

  3. davehooke: Are you certain? I don’t think this has been established.

    I’m assuming for the sake of argument that the brain in the vat wasn’t born there, that it has at least the memory of a sensory and motor system.

  4. llanitedave: I’m assuming for the sake of argument that the brain in the vat wasn’t born there, that it has at least the memory of a sensory and motor system.

    Still, how do you know that a brain in a vat can function? This is an area of active research.

  5. llanitedave: It’s the utilitarian meaning that I’m interested in. It may be a “hard” problem if you only consider the philosophical meaning, but then, philosophical meanings are intended to be hard. That’s why I prefer a working definition.

    No, they are not intended to be hard. They are intended to be precise.

    If you are discussing the everyday meaning of “intentionality”, it doesn’t really have anything to do with the OP.

  6. On the one hand, I agree with Dave and disagree with Dave. On the other hand, I think Dave is right. 🙂

  7. Kantian Naturalist:

    As I explained, animal minds count as having concepts — even though they do not judge — because they classify and recognize features of their environment. It’s not just stimulus-response because there is mediation going on there — “all is not dark within”. (There is something it is like to be a cat — or a bat!)

    Classifying and recognizing are two activities that can’t be performed without judging. As you say, it is not only stimulus-response.
    By the way, is there anything in biology outside stimulus-response? What is the effector of that?

    Kantian Naturalist:

    Anyone who thinks that animals (or babies) are just fancy automata is going to have to explain why they don’t think of human beings the same way — and if they insist on some major ontological gulf between human beings (qua bearers of intentional content) and animals (as fancy machines), it must be pointed out how that gulf is inconsistent with the basic commitment to continuity (though of course not smooth continuity!) in Darwinism. One might, just perhaps, be a Kantian and a Darwinian — such is my ambition, obviously — but one cannot be a Cartesian and Darwinian.

    Well, I think that is the point of the post: does naturalism allow humans to be more than just a fancy automaton?
    I think not, and many Darwinists agree with me.

  8. keiths:
    My point is that thoughts needn’t correspond to external reality in order to be meaningful. Mathematical thoughts, for example, are meaningful whether or not one is a Platonist.
    […]
    If mathematical thoughts can continue during such an interruption, then the brain by itself can be a semantic engine and not merely a syntactic engine.

    If you assume a developed brain is envatted, then you are assuming all the environmental interactions that are needed to create a brain have occurred. I’m thinking of everything from the womb through learning in infancy and early childhood.

    I agree such a brain would still have thoughts if envatted and stimulated by a virtual world like our own. I don’t know how/if it would function if completely cut off from all inputs.

    Would those thoughts be meaningful? Yes, but the meaning would differ, at least for thoughts referring to the world external to the envatted brain.

    It seems the meaning of fictional objects and mathematical objects would not change from whatever it was before.

    Would the meaning of terms for unobservables in scientific theories change? If a physicist were envatted, does the meaning of ‘quark’ change for him or her in future experiments in the virtual world? Assuming he or she was a scientific realist, I’d say yes, the meaning has changed, since the term now refers to a simulated world.

  9. petrushka: I’m not sure what the issue is here, but in brains, signals are everything.

    In the brain, signals are everything, and signals are all you can get from a brain.

  10. Neil Rickert: Signals are not data.

    Neil, is your point that even though the electrochemical processes that happen during perception might be modeled by computations, it is not the case that perception is computation?

    The analogy would be that even though orbits of planets can be modeled by computation, planets are not doing computations.
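
    To make the analogy concrete, here is a minimal sketch (in Python, with toy values chosen for illustration, not real constants) of the kind of computation that models an orbit. Running it models the planet; the planet itself runs nothing:

        import math

        # Toy two-body integrator: the program models the orbit;
        # the planet follows no rules and executes no program.
        G, M = 1.0, 1.0          # made-up units, not physical constants
        x, y = 1.0, 0.0          # planet's position
        vx, vy = 0.0, 1.0        # speed chosen for a roughly circular orbit
        dt = 0.01
        for _ in range(1000):
            r = math.hypot(x, y)
            ax, ay = -G * M * x / r**3, -G * M * y / r**3   # Newtonian gravity
            vx += ax * dt                  # semi-implicit Euler: velocity first,
            vy += ay * dt
            x += vx * dt                   # ...then position
            y += vy * dt
        print(round(x, 3), round(y, 3))    # where the model says the planet is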

    Off topic: you seem to disprove the old saw that, for a man with a hammer, everything looks like a nail. That is, if one believed this proverb, then one would think that for a computer scientist, every brain would look like a Turing machine.

  11. I agree with davehooke that intentionality in the philosophical sense is not “intended to be hard”, contrary to llanitedave’s claim. The philosophical definition is no less a “working definition” than the everyday one, and both are useful, but it’s the former, not the latter, that is the topic of the OP.

    However, I think llanitedave is right to question davehooke’s skepticism regarding the ability of envatted brains to function meaningfully, at least in the short run. As I said in reply to KN:

    I see no reason why mathematical thoughts should suddenly become impossible merely because the brain’s sensory input is temporarily interrupted.

    If mathematical thoughts can continue during such an interruption, then the brain by itself can be a semantic engine and not merely a syntactic engine.

  12. But actual sensory deprivation experiments indicate that brains rapidly become non-functional when disconnected from sensory input.

    But I have another line of thought. My dreams are more or less coherent, at least in the sense that they have a story line and often involve problem solving.

    The problems, however, make little sense if told as stories, because the context floats. Almost, but not entirely unlike Hollywood dream sequences. Cut off from feedback, the brain rapidly loses association with reality.

  13. Blas:

    There is no possible naturalistic explanation for a starting point for knowledge or imagination.

    Thanks for taking the time to answer my questions and I think I understand your position a bit better.

    I think that a scientific program to try to find a naturalistic explanation for these things is a worthwhile enterprise.

    But my reasons are at least partly pragmatic and would not convince you, I suspect.

  14. BruceS: Neil, is your point that even though the electrochemical processes that happen during perception might be modeled by computations, it is not the case that perception is computation.

    Yes. I see it as coming far closer, if we say that perception is measurement — measuring the world around us in many parallel processes.

    And to get to the related issue: measurement uses signals and produces data.

    Off topic: you seem to disprove the old saw that, for a man with a hammer, everything looks like a nail. That is, if one believed this proverb, then one would think that for a computer scientist, every brain would look like a Turing machine.

    I’m a mathematician much more than I am a computer scientist. In mathematics, my area is functional analysis, which combines geometry and algebra.

  15. BruceS,

    If you assume a developed brain is envatted, then you are assuming all the environmental interactions that are needed to create a brain have occurred. I’m thinking of everything from the womb through learning in infancy and early childhood.

    Sure, but the question KN raises in the OP isn’t an empirical one about human brains. It’s a philosophical question regarding the ability of “syntactic engines” to act as “semantic engines”.

    An envatted David Icke would presumably continue to believe that George Bush is a lizard, and that belief, though false (at least in a non-figurative sense), would continue to have meaning. David Icke’s envatted brain would arguably still be a semantic engine (at least in the short run), even if cut off from all stimuli.

    Would the meaning of terms for unobservables in scientific theories change? If a physicist were envatted, does the meaning of ‘quark’ change for him or her in future experiments in the virtual world? Assuming he or she was a scientific realist, I’d say yes, the meaning has changed, since the term now refers to a simulated world.

    It would depend on whether the physicist was aware of his or her envattedness.

    If unaware, then the meaning of ‘quark’ would change, unbeknownst to the hapless scientist.

    If aware of the envattation, then the physicist could continue to reason about “real” quarks, though they would no longer be experimentally accessible. He or she would distinguish real quarks from simulated quarks.

  16. keiths:
    Sure, but the question KN raises in the OP isn’t an empirical one about human brains. It’s a philosophical question regarding the ability of “syntactic engines” to act as “semantic engines”.

    I am not sure what the term “semantic engine” is trying to convey.

    If one googles it, the results cover semantic search on the web, that is using computer programs to do searches which “understand the meaning” of the search terms. I suspect that is not what KN has in mind.

    Further, I think it is a given that a syntactic engine preserves meaning (its outputs preserve the truth of its inputs).

    What more could a semantic engine do?

    Maybe it has to “create” meaning in some sense. But what could “create” meaning signify? I’m not sure, but it seems it should involve some kind of interaction with a world, whether real or virtual.

  17. Here’s the relevant quote from Dennett that got the whole ball rolling:

    (From The Intentional Stance, p. 61)

    The task of sub-personal cognitive psychology is to explain something that at first glance seems utterly mysterious and inexplicable. The brain, as intentional system theory and evolutionary biology show us, is a semantic engine; its task is to discover what its multifarious inputs mean, to discriminate them by their significance, and ‘act accordingly’. That’s what brains are for. But the brain, as physiology or plain common sense shows us, is just a syntactic engine; all it can do is discriminate its inputs by their structural, temporal, and physical features and let its entirely mechanical activities be governed by these ‘syntactical’ features of its inputs. That’s all brains can do. Now how does the brain manage to get semantics from syntax? How could any entity (how could a genius or an angel, or God) get the semantics of a system from nothing but its syntax? It couldn’t. The syntax of a system doesn’t determine its semantics. By what alchemy, then, does the brain extract semantically reliable results from syntactically driven operations? It cannot be designed to do an impossible task, but it could be designed to approximate the impossible task, to mimic the behavior of the impossible object (the semantic engine) by capitalizing on close (close enough) fortuitous correspondences between structural regularities — of the environment and of its own internal states and operations — and semantic types.

    However, one could object to Dennett on the following lines (and perhaps this is one way of seeing Churchland’s objection to Dennett): insofar as the brain is encoding information about objects and properties in its environment through the distribution of differentially weighted connections across the relations between neurons, that is semantics. Just because it’s non-propositional doesn’t mean it’s non-semantic, and so there’s no need for the “from syntax . . . semantics!” conjuring trick.
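
    As a toy sketch of what “encoding information through differentially weighted connections” might look like (a single perceptron in Python; the data, learning rate, and target feature are all made up for illustration):

        import random
        random.seed(0)

        # Two weighted "connections" plus a bias come to encode a feature
        # of the unit's environment (here: whether x + y exceeds 1).
        w, b = [0.0, 0.0], 0.0
        def fires(p):
            return 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else 0

        data = [(random.random(), random.random()) for _ in range(200)]
        target = lambda p: 1 if p[0] + p[1] > 1 else 0
        for _ in range(50):                      # perceptron learning rule
            for p in data:
                err = target(p) - fires(p)
                w[0] += 0.1 * err * p[0]
                w[1] += 0.1 * err * p[1]
                b += 0.1 * err
        print(fires((0.9, 0.8)), fires((0.1, 0.2)))   # should print: 1 0

    Whether that counts as (proto-)semantics or as mere syntax is, of course, exactly what is at issue.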

  18. BruceS:

    it should involve some kind of interaction with a world, whether real or virtual.

    What is a virtual world for a naturalist?

  19. Here’s how I understand the contrast between “semantic engine” and “syntactic engine,” as Dennett uses the terms: a semantic engine is able to make sense of things; it can tell what something means. So you and I and all of us are semantic engines, just because we’re arguing about the meanings of terms, about what inferentially follows from what, and so on.

    A syntactic engine is indifferent to meaning — all it does (and can do) is process information that’s given to it according to rules, and the instantiation of the rules in the system’s activity has nothing to do with what the information means to the system. So a computer would be, presumably, a syntactic engine in this sense.
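
    A minimal sketch of a syntactic engine in this sense (a toy string-rewriting system in Python, loosely modeled on Hofstadter’s MIU puzzle; the rules mention only the shapes of strings, never their meanings):

        # Each rule inspects and transforms strings purely by shape.
        def successors(s):
            out = set()
            if s.endswith("I"):
                out.add(s + "U")              # rule 1: xI -> xIU
            if s.startswith("M"):
                out.add("M" + s[1:] * 2)      # rule 2: Mx -> Mxx
            return out

        frontier = {"MI"}
        for _ in range(3):                    # three blind rewriting steps
            frontier = set().union(*(successors(s) for s in frontier))
        print(sorted(frontier))               # strings derived with no regard to meaning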

    So, here’s the problem: it’s incontestable (I submit) that you and I, qua rational animals, are semantic engines. (This is McDowell’s central point.) It seems incontestable (but maybe it isn’t) that computers are just syntactic engines. So what are we to say about brains?

    Dennett’s point is that whether we treat the brain as a syntactic engine or a semantic engine depends on our “stance” towards it. Churchland wants to say that the brain really is a semantic engine, and we just need to figure out how to relate the domain-portrayal semantics at the neurophysiological level to the propositional semantics at the personal (individual-cum-communal) level. McDowell urges that the brain really is a syntactic engine, and that the rational animal really is a semantic engine, so the hard problem of cognitive science is how to understand the contribution of the brain’s syntactical processing to the realization of genuine significance.

    No doubt there are other important views out there I need to take seriously, but those are the three I’m best acquainted with.

  20. Kantian Naturalist:
    Here’s the relevant quote from Dennett that got the whole ball rolling:

    […]

    However, one could object to Dennett on the following lines (and perhaps this is one way of seeing Churchland’s objection to Dennett): insofar as the brain is encoding information about objects and properties in its environment through the distribution of differentially weighted connections across the relations between neurons, that is semantics. Just because it’s non-propositional doesn’t mean it’s non-semantic, and so there’s no need for the “from syntax . . . semantics!” conjuring trick.

    No, Kantian, it is not semantic; the differentially distributed connections are only signals. No more than signals. And signals can only produce other signals. Also, for what Dennett calls syntax, like “discriminate its inputs by their structural, temporal, and physical features”, you need semantics. You need to judge, to make statements about the signals against concepts like form, size, and time.

  21. Blas: No, Kantian, it is not semantic; the differentially distributed connections are only signals. No more than signals. And signals can only produce other signals.

    This simply begs the question by asserting as true something that has not at all been argued for, let alone demonstrated: that there’s no way to get from syntax to semantics. Searle assumes that this is the case; so does Dennett, in his own way. All we can say with assurance is that, with regard to natural languages, the syntax/semantics distinction is drawn such that we cannot get semantics from syntax. (This is one of the big results from Carnap’s work in the 1930s onwards, if my knowledge of the history of philosophy is correct.) But that result holds only for the analysis of language; it doesn’t tell us how to use the concepts “syntax” and “semantics” with regard to the brain.

    Also, for what Dennett calls syntax, like “discriminate its inputs by their structural, temporal, and physical features”, you need semantics. You need to judge, to make statements about the signals against concepts like form, size, and time.

    This looks like the “homunculus fallacy” to me — the idea that the brain is able to discriminate its inputs because there’s a tiny little person inside the brain that’s doing the judging and comparing. The problem with the homunculus picture — and the reason why it is a fallacy — is that it doesn’t explain anything. It just says: we are able to judge and compare because there’s a tiny little version of us inside the brain that judges and compares. It doesn’t actually explain what we do, because the explanandum and the explanans are identical. (Not to mention the threat of infinite regress — is there an even tinier person inside that one to explain how he is able to judge and compare?)

  22. If thought were semantics, AI would be done by now.

    Skinner asserted that brains weigh probabilities.

  23. Blas:

    No more than signals.

    I’ve seen various descriptions of your error. One is mistaking the map for the territory.

    I would say you are mistaking an abstraction for its object.

    You have a term — matter — which you define. Then you try to derive the capabilities of matter from your abstraction. I don’t know what to say about your method other than that it gets incorrect results and is just silly.

    Matter ignores your categories and does what it will.

  24. Kantian Naturalist:

    “But that result holds only for the analysis of language; it doesn’t tell us how to use the concepts “syntax” and “semantics” with regard to the brain.”

    Then we still have signals in the brain that we cannot connect with concepts and statements in our mind.

    Kantian Naturalist:
    This looks like the “homunculus fallacy” to me — the idea that the brain is able to discriminate its inputs because there’s a tiny little person inside the brain that’s doing the judging and comparing. The problem with the homunculus picture — and the reason why it is a fallacy — is that it doesn’t explain anything. It just says: we are able to judge and compare because there’s a tiny little version of us inside the brain that judges and compares. It doesn’t actually explain what we do, because the explanandum and the explanans are identical. (Not to mention the threat of infinite regress — is there an even tinier person inside that one to explain how he is able to judge and compare?)

    OK, if for the naturalistic view the homunculus is a fallacy, how do you then explain, naturalistically, how we arrive at concepts and the ability to judge, starting from signals in the brain?

  25. petrushka:
    […]

    What else other than signals does our brain receive and produce?

  26. Thinks, feels, dreams, imagines, etc.

    Where is the manual that says matter can’t do these things?

  27. petrushka:
    Thinks, feels, dreams, imagines, etc.

    Where is the manual that says matter can’t do these things?

    A thought is made of what? A feeling is made of what?

  28. Blas: A thought is made of what?

    Tin cans.

    Blas: A feeling is made of what?

    A change in the relative arrangement of tin cans.

  29. keiths:
    I agree with davehooke that intentionality in the philosophical sense is not “intended to be hard”, contrary to llanitedave’s claim. The philosophical definition is no less a “working definition” than the everyday one, and both are useful, but it’s the former, not the latter, that is the topic of the OP.

    However, I think llanitedave is right to question davehooke’s skepticism regarding the ability of envatted brains to function meaningfully, at least in the short run.

    I am going to do more reading on this. I am thinking about getting Shaun Gallagher’s book How The Body Shapes The Mind.

  30. Kantian Naturalist:

    Here’s how I understand the contrast between “semantic engine” and “syntactic engine,” as Dennett uses the terms: a semantic engine is able to make sense of things; it can tell what something means. So you and I and all of us are semantic engines, just because we’re arguing about the meanings of terms, about what inferentially follows from what, and so on.

    I am not sure how you are using the term “inferentially”.

    The term can be used to denote what is solely a syntactic process about argument validity (and there are computer algorithms for doing the corresponding formal manipulations).
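
    For instance, checking the validity of a propositional argument can be done by brute mechanical enumeration, with no grasp of what the sentences are about. A toy Python sketch, assuming nothing beyond a truth table:

        from itertools import product

        # Is the argument "p, p -> q, therefore q" valid? Search every truth
        # assignment for a counterexample; no meanings are consulted.
        def valid():
            for p, q in product([True, False], repeat=2):
                premises = p and ((not p) or q)    # premise 1: p; premise 2: p -> q
                if premises and not q:             # premises true, conclusion false?
                    return False
            return True
        print(valid())   # True: valid by form alone, whatever p and q mean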

    But if it is meant to convey assessing the soundness of arguments, then that would involve understanding the meaning of the premises. But using “meaning” this way to define “semantic” seems circular to me (and you do that in other places above).

    I think any explanation of “semantic” has to avoid relying on an existing conception of “meaning”.

    In your previous quote of Dennett’s definition, he uses the words “act accordingly”, which I take to be a key phrase. I think creating meaning involves the ability to form representations (i.e., intentional, mental contents [redundant?]) that allow organisms to act in a way that makes them successful (in an evolutionary sense).

    That is why I find Millikan’s ideas on teleological semantics appealing (as does Dennett, I believe).

    I also have a vague understanding of some of your other ideas, relating different types of intentionality to acting in a pre-language environment versus acting (i.e., conversing) in a language community. But I am not familiar with the philosophical influences you have mentioned, so that could be way off base.

  31. Blas: What is a virtual world for a naturalist?

    If I am allowed to assume that a physical reality exists (even if it is not all of reality — i.e., you do not need to be a physicalist), then I think the answer is obvious from any dictionary definition.

    If I cannot assume that as a starting point, then there is probably no basis for discussion.

    But nice try.

  32. Kantian Naturalist: A syntactic engine is indifferent to meaning — all it does (and can do) is process information that’s given to it according to rules, and the instantiation of the rules in the system’s activity has nothing to do with what the information means to the system. So a computer would be, presumably, a syntactic engine in this sense.

    As I think about it, this syntax vs. semantics is just nonsense. “Syntax” is itself a semantic term. Syntax is not a part of the natural world. A completely naturalistic account of a computer should say only that it is an electrical appliance.

    When we describe the computer as a syntactic engine, we describe what it does to binary digits. But binary digits are inexistent intentional objects (or platonic entities if you are a platonist).

    The Turing machine is usually defined as an abstract machine acting on abstract symbols. It is not a physical machine acting on natural signals.

    You don’t start with syntax, then get to semantics by adding meaning. Rather, you start with semantics, and get to syntax by severely constraining the range of meanings that you will allow.

  33. BruceS: If I am allowed to assume that a physical reality exists (even if it is not all of reality — ie you do not need to be a physicalist) then I think the answer is obvious from any dictionary definition.

    If I cannot assume that as a starting point, then there is probably no basis for discussion.

    But nice try.

    1. having the essence or effect but not the appearance or form of: a virtual revolution
    2. physics being, relating to, or involving a virtual image: a virtual focus
    3. computing of or relating to virtual storage: virtual memory
    4. of or relating to a computer technique by which a person, wearing a headset or mask, has the experience of being in an environment created by the computer, and of interacting with and causing changes in it
    5. rare capable of producing an effect through inherent power or virtue
    6. physics (see also exchange force) designating or relating to a particle exchanged between other particles that are interacting by a field of force: a virtual photon

    Which one is yours?

  34. Neil,

    As I think about it, this syntax vs. semantics is just nonsense. “Syntax” is itself a semantic term.

    Every term is semantic, unless you’re making up nonsense words.

    Your argument is equivalent to this:

    As I think about it, this verbal vs. nonverbal is just nonsense. “Nonverbal” is itself a word.

  35. BruceS,

    It’s not (viciously) circular, because I’m not using inference to define semantics — rather, I’m appealing to inference as a theory of what meaning is. And central of course is “material inference”, as distinct from “formal inference”.

    Here are some examples: “if Pittsburgh is east of Chicago, then Chicago is west of Pittsburgh”; “if something is red, then something is colored”; “if X is taller than Y, then Y is shorter than X”. These are material inferences because one cannot evaluate the propriety of the inference without knowing how to use the concept.

    But what is a concept? In a representationalist semantics, a concept represents some feature of the world — the concept cat allows us to sort the world into those things that are cats and those things that are non-cats. In an inferentialist semantics, a concept is a node in an inferential nexus — the concept cat allows us to sort the inferences in which the term “cat” occurs into good ones and bad ones. So rather than beginning with the notion of representation and explaining inference in terms of representation, an inferentialist semantics begins with the notion of inference and explains representation in terms of inference. (Thus understood, inferentialist semantics came into its own with Brandom’s Making It Explicit (1994), but the deeper history of the tradition goes back through Sellars, C. I. Lewis, Hegel, and even Kant and Leibniz.)

    Where I disagree with Brandom — and this is central to my book — is that I don’t think that inferential semantics is sufficient as a theory of empirical content, because we need to have a theory of animal sentience as contributing non-conceptual content to empirical judgments. (See how Kantian I am? Concepts are not intuitions!)

    I still haven’t gotten around to reading Millikan, and I don’t intend to discuss her work in my book. Several folks have asked me to discuss how my appeal to Merleau-Ponty (for the stuff on animal sentience) makes my view different from hers. Basically, what I’m doing here is explicating the epistemological or transcendental point of view, which is (necessarily, I would say) done from the first-person and second-person points of view. So I’m arguing that a theory of animal sentience needs to be incorporated into that project — it is our (and one’s own) animal sentience that needs to be incorporated into the account of empirical content. The other question, about implementation, is an “engineering question” — and that is done from the third-person perspective of empirical inquiry, and if that’s what I wanted to do, then Millikan would be absolutely relevant. And I’m not dismissing the importance of the engineering question — I’m just saying that is a different question than the transcendental or epistemological question that I am asking.

  36. keiths,

    Granted, that’s not the best argument Neil could have made, but I think he’s on to something genuinely important when he says,

    You don’t start with syntax, then get to semantics by adding meaning. Rather, you start with semantics, and get to syntax by severely constraining the range of meanings that you will allow.

    (This puts me in mind of Deacon’s criticism of Chomsky and Pinker in The Symbolic Species — still one of the best books on the evolution of cognition I’ve read!)

    One might also point out that our concept of “syntax” is formed by abstracting away from good inferences all the semantic features, so we just have “logical form”. That’s a further reason to be nervous about applying the concept of “syntax” to the brain, even if neurocomputationalism is a good theory of how brains function. (And I’m not even so sure about that — the dynamical systems theory looks really compelling to me, based on what little I know of it.)

  37. Blas: A thought is made of what? A feeling is made of what?

    Thoughts and feelings are some of the things matter does.

    What is your authority for saying what matter can and cannot do?

  38. KN,

    It’s true that our concept of syntax was formed by abstracting semantics away from language and looking at what was left, but that’s different from saying that syntax itself is dependent on semantics.

    Any process that follows fixed, non-semantic rules is syntactic. Nature is shot through with syntax, and this was true long before semantics came on the scene. It’s just that we don’t normally use the word ‘syntactic’ when describing the rule-based character of nature unless we are emphasizing the absence of semantics.
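
    A toy illustration of fixed, non-semantic rules at work (an elementary cellular automaton in Python; Rule 110 and the grid size are arbitrary choices for the sketch):

        # Each cell updates from the shape of its neighborhood alone;
        # nothing in the update rule involves meaning.
        RULE = 110
        cells = [0] * 20 + [1] + [0] * 20
        for _ in range(10):
            cells = [(RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % len(cells)])) & 1
                     for i in range(len(cells))]
            print("".join("#" if c else "." for c in cells))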

  39. keiths: Every term is semantic, unless you’re making up nonsense words.

    Yes. But that’s a rather trivial point and not at all what I was saying.

    I suggest you reread what I wrote.

  40. petrushka: Thoughts and feelings are some of the things matter does.

    What is your authority for saying what matter can and cannot do?

    That is the naturalistic problem. Matter doesn’t do anything. Matter, just following physical laws, tends toward the lowest-energy, highest-entropy state.
    Everything that appears in that process wasn’t done, wasn’t intended; it just happened to be. Where can intentionality come from?

  41. Blas: That is the naturalistic problem. Matter doesn’t do anything. Matter, just following physical laws, tends toward the lowest-energy, highest-entropy state.
    Everything that appears in that process wasn’t done, wasn’t intended; it just happened to be. Where can intentionality come from?

    What is your authority for saying what matter can and cannot do?

  42. Neil,

    I suggest you reread what I wrote.

    Could you highlight a point you would like me to address?

  43. keiths: Any process that follows fixed, non-semantic rules is syntactic.

    I can only repeat the point. A computer doesn’t actually do syntax. It does electrical operations. We interpret it as doing syntax. And this is possible because we designed computers so that they could be interpreted that way.

    keiths: Nature is shot through with syntax, and this was true long before semantics came on the scene.

    At present, I cannot think of a single case of syntax in nature. For that matter, I cannot think of a single case where nature can properly be said to follow rules.

    Sure, planets move in orbits. But the planets do not move in orbits by following rules. Rather, we describe the planets by following our rules of description.

  44. Blas: That is the naturalistic problem. Matter doesn’t do anything. Matter, just following physical laws, tends toward the lowest-energy, highest-entropy state.
    Everything that appears in that process wasn’t done, wasn’t intended; it just happened to be. Where can intentionality come from?

    You are trying to resurrect vitalism, which has been dead for over a century.

    It didn’t work a century ago and doesn’t work now.

  45. petrushka: You are trying to resurrect vitalism, which has been dead for over a century.

    It didn’t work a century ago and doesn’t work now.

    No, I’m not resurrecting anything. I’m just stating that from a naturalistic view there is no possibility of intentionality, as matter is passive and just follows the physical laws. I cannot demonstrate something impossible; show me that matter can “do” something.

  46. Blas: I’m just stating that from a naturalistic view there is no possibility of intentionality, as matter is passive and just follows the physical laws.

    Hurricanes, tornados, supernovae — they don’t look all that passive.

  47. Blas: No, I’m not resurrecting anything. I’m just stating that from a naturalistic view there is no possibility of intentionality, as matter is passive and just follows the physical laws. I cannot demonstrate something impossible; show me that matter can “do” something.

    You are making assumptions about matter, just as vitalists did years ago, regarding whether matter was sufficient to enable life. They were wrong then and you are wrong now.

    And your position is exactly equivalent to vitalism.

    Who elected you God, able to pronounce what matter is capable of?

    Edit:

    That sounds like a personal attack on Blas, but it isn’t intended that way. It’s a generic question to anyone who declares a priori what matter can and cannot do.

    It’s a silly claim, one that is continually being discredited. It appears to be the rationale for dualism. So much the worse for dualism.

  48. Following physical laws is a stupid concept.

    Physical laws are just descriptions of regularities we observe. There is no list of physical laws and no list of what matter can and cannot do.

    People are physical objects, and people have consciousness, think, dream, imagine, feel, perceive. It’s really up to dualists to demonstrate that there is something non-physical that can do these things.
