The ‘Hard Problem’ of Intentionality

I’m starting a new thread to discuss what I call “the hard problem of intentionality”: what is intentionality, and to what extent can intentionality be reconciled with “naturalism” (however narrowly or loosely construed)?

Here’s my most recent attempt to address these issues:

McDowell writes:

Consider this passage from Dennett, Consciousness Explained, p. 41: “Dualism, the idea that the brain cannot be a thinking thing so a thinking thing cannot be a brain, is tempting for a variety of reasons, but we must resist temptation . . . Somehow the brain must be the mind”. But a brain cannot be a thinking thing (it is, as Dennett himself remarks, just a syntactic engine). Dualism resides not in the perfectly correct thought that a brain is not a thinking thing, but in postulating some thing immaterial to be the thinking thing that the brain is not, instead of realizing that the thinking thing is the rational animal. Dennett can be comfortable with the thought that the brain must be the mind, in combination with his own awareness that the brain is just a syntactic engine, only because he thinks that in the sense in which the brain is not really a thinking thing, nothing is: the status of possessor of intentional states is conferred by adoption of the intentional stance towards it, and that is no more correct for animals than for brains, or indeed thermostats. But this is a gratuitous addition to the real insight embodied in the invocation of the intentional stance. Rational animals genuinely are “semantic engines”. (“Naturalism in Philosophy of Mind,” 2004)

Elsewhere McDowell has implied that non-rational animals are also semantic engines, and I think this is a view he ought to endorse more forthrightly and boldly than he has. But brains are, of course, syntactic engines.

So it seems quite clear to me that one of the following has to be the case:

(1) neurocomputational processes (‘syntax’) are necessary and sufficient for intentional content (‘semantics’) [Churchland];
(2) intentional content is a convenient fiction for re-describing what can also be described as neurocomputational processes [Dennett] (in which case there really aren’t minds at all; here one could easily push on Dennett’s views to motivate eliminativism);
(3) neurocomputational processes are necessary but not sufficient for intentional content; the brain is merely a syntactic engine, whereas the rational animal is a semantic engine; the rational animal, and not the brain, is the thinking thing; the brain of a rational animal is not the rational animal, since it is a part of the whole and not the whole [McDowell].

I find myself strongly attracted to all three views, actually, but I think that (3) is slightly preferable to (1) and (2). My worry with (1) is that I don’t find Churchland’s response to Searle entirely persuasive (even though I find Searle’s own views completely unhelpful). Is syntax necessary and sufficient for semantics? Searle takes it for granted that this is obviously and intuitively false. In response, Churchland says, “maybe it’s true! we’ll have to see how the cognitive neuroscience turns out — maybe it’s our intuition that’s false!”. Well, sure. But unless I’m missing something really important, we’re not yet at a point in our understanding of the brain where we can understand how semantics emerges from syntax.

My objection to (2) is quite different — I think that the concept of intentionality plays far too central a role in our ordinary self-understanding for us to throw it under the bus as a mere convenient fiction. Of course, our ordinary self-understanding is hardly sacrosanct; we will have to revise it in the future in light of new scientific discoveries, just as we have in the past. But there is a limit to how much revision is conceivable, because if we jettison the very concept of rational agency, we will lose our grip on our ability to understand what science itself is and why it is worth doing. Our ability to do science at all, and to make sense of what we are doing when we do science, presupposes the notion of rational agency, hence intentionality, and abandoning that concept in deference to modern science would effectively mean that science has shown that we do not know what science is. That would be a fascinating step in the evolution of consciousness, but I’m not sure it’s one I’m prepared to take.

So that leaves (3), or something like it, as the contender: we must, on the one hand, retain the mere sanity that we (and other animals) are semantic engines, bearers of intentional content; on the other hand, we accept that our brains are syntactic engines, running parallel neurocomputational processes. This entails that the mind is not the brain after all, but also that rejecting mind-brain identity offers no succor to dualism.

Neil Rickert’s response is here, followed by Petrushka’s here.

334 thoughts on “The ‘Hard Problem’ of Intentionality”

  1. Blas: And I have to say, there are people whose bodies are disconnected from their brains due to lesions in the spinal cord, who live attached to a machine with only their heads working.

    Citation please.

  2. Blas:

    davehooke: I don’t believe in ghosts but I didn’t ask the question.

    Yes you did here.

    davehooke:

    I presume you mean intentionality not intention. Why are you so sure that the brain alone is involved in intentionality?

    Any ghosts you see here are your responsibility not mine.

    How about, instead of a ghost, the body being involved in intentionality?

  3. davehooke: This does not answer my question. What evidence do you have that this premise:

    A process with no intention like evolution cannot produce intentionality

    is at the very least probably true?

    Intentionality requires imagining a new desirable state where I want to be. How can a natural process produce imagination?

  4. davehooke: Any ghosts you see here are your responsibility not mine.

    Not mine; OMagain asked for ghosts because of your question. Why not discuss the point with him?

    davehooke:
    How about, instead of a ghost, the body being involved in intentionality?

    And what does adding the body change about the source of intentionality? Does it solve the naturalistic problem? Is a brain plus a body more than a sophisticated network of feedback loops?

  5. Blas: Well, which is your answer?

    I might use intentional language for the E. coli example. I don’t think it’s a good fit for the others.

    We also use a lot of intentional language in talk about computers and computation.

  6. Neil Rickert: I might use intentional language for the E. coli example. I don’t think it’s a good fit for the others.

    We also use a lot of intentional language in talk about computers and computation.

    So you agree with me. In the naturalistic world, intentionality is a man-made fiction. A linguistic metaphor.

  7. Blas: Intentionality requires imagining a new desirable state where I want to be. How can a natural process produce imagination?

    This is incredulity, not evidence that your premise is at least probably true. What evidence do you have that your premise is at least probably true?

  8. Blas: So you agree with me. In the naturalistic world, intentionality is a man-made fiction. A linguistic metaphor.

    A metaphor is not a fiction. It is a tool for communication.

    Or miscommunication, as the case may be.

    A person (or a cat) can intend. Saying this does not demonstrate the existence of a non-physical little person inside.

  9. davehooke: Who states that evolution is merely chance? It is not.

    Not me. I didn’t use the adjective “merely”.
    Do you deny that chance plays a big role in Darwinian evolution? Do you affirm that the appearance of humans was unavoidable?

  10. davehooke: This is incredulity, not evidence that your premise is at least probably true. What evidence do you have that your premise is at least probably true?

    Will the lower probability of the alternative satisfy you as evidence?
    Will the fact that the only agents in the universe are us humans satisfy you as evidence? Will the fact that most of the commenters here think that intentionality is a metaphor satisfy you as evidence?

  11. Blas: Not mine; OMagain asked for ghosts because of your question. Why not discuss the point with him?

    I don’t see any reason to talk about ghosts here.

    And what does adding the body change about the source of intentionality?

    It changes the source of intentionality if the brain+body system is what we are looking at rather than just the brain. The source would then be the brain+body system.

    Is a brain plus a body more than a sophisticated network of feedback loops?

    Unless you have evidence of something else involved, it would appear that it is no more than a sophisticated network of feedback loops.

  12. Blas: Will the lower probability of the alternative satisfy you as evidence?

    What evidence do you have for the lower probability of the alternative?

    Will the fact that the only agents in the universe are us humans satisfy you as evidence?

    That is not a fact, but I don’t see how it is relevant anyway.

    Will the fact that most of the commenters here think that intentionality is a metaphor satisfy you as evidence?

    Definitely not.

    Where is your evidence for your premise?

  13. What is implied by the phrase “system of feedback loops”?

    Are you asserting that you can know from first principles what can be accomplished by a physical system?

    From what do you derive your list of what can and cannot be derived from physical elements?

  14. Blas: Not me. I didn’t use the adjective “merely”.
    Do you deny that chance plays a big role in Darwinian evolution? Do you affirm that the appearance of humans was unavoidable?

    Chance plays a role, but so what? How is this evidence for your premise?

  15. Blas: So you agree with me. In the naturalistic world, intentionality is a man-made fiction. A linguistic metaphor.

    It’s a form of language expression that we find useful. That does not necessarily make it metaphor.

  16. I’ll explain later where I got this example from, but for now I’ll just throw out the name so you all can have an example of a real-life envatted brain (or at least as close as one has ever come and been able to talk about it afterward).

    Helen Keller

    Just how much thinking do you suppose she did before she was awakened by Anne Sullivan?

    I’ll also add my voice to the chorus asking for a definition of “aboutness”.

  17. I find intentionality a very helpful concept for talking about the content of thoughts, beliefs, and desires — that when we think, we are (almost always) thinking about something. It poses interesting questions and problems.

    For example, intentionality is usually thought of as a relation between one’s thinking and what one is thinking about. But we can think about things that don’t (even can’t) exist — a problem that Brentano called “intentional inexistence”. But normally, relations only obtain when their relata also exist. So how can there be a relation when one of the relata doesn’t exist?

    So, I don’t think that all uses of the concept “intentionality” are metaphorical — though some of them are. Though it might be better to call those uses analogical rather than metaphorical, the key difference being that in analogies, we make explicit all the ways that the analogy does not hold.

    petrushka:
    Philosophers and theologians seem to consider thinking to be comprised of words and sentences.

    Is this the case?

    In the philosophical tradition that I work in, thinking itself is not comprised of sentences, but our concept of thinking is constructed by analogy with our concept of language (thought as “inner speech”). But as we find out more about how brains actually process information and allow successful navigation of their environments, we can revise our concept of what “thinking” is. Or we can construct a new concept, “cognition,” and then figure out how to relate the sub-personal processes of cognition to the personal-level process of thinking.

    Of course there are philosophers (and theologians) who do hold that thought really is a linguistic affair, but I regard that as deeply mistaken. Probably the best-known contemporary exponent of this position is Jerry Fodor, who invented what he called “the language of thought” hypothesis, aka “Mentalese”, which he thinks of as a scientifically legitimate posit we need to invoke in order to explain what mental states are. I think that’s crazy, but hey, he’s a full professor and I’m a lowly adjunct, so what do I know?

  18. Alan,

    Shallit is doing what I think of as “definition trolling,” in which the perpetrator enters a discussion, demands an overly precise definition of a term, and then dismisses the entire conversation as “meaningless” without it.

    For example, this question is pure troll:

    What units do we measure “intentionality” in?

    Definition trolling is also a favorite tactic of Mung’s, which should tell you something.

    The question we’re discussing is whether a brain on its own can be not just a “syntactic engine” but a “semantic engine” as well. To answer that question, you don’t need to decide whether varves possess intentionality, but shallit can’t resist the urge to do some definition-trolling and some gratuitous philosophy-bashing as well.

  19. Petrushka,

    Philosophers and theologians seem to consider thinking to be comprised of words and sentences.

    Some do, others don’t.

    Is this the case?

    Not exclusively. As an engineer, I can tell you that much of my thinking is visual/spatial, not verbal. I suspect that almost everyone engages in nonverbal thinking, as in this exercise.

  20. petrushka: A metaphor is not a fiction. It is a tool for communication.

    Or miscommunication, as the case may be.

    A person (or a cat) can intend. Saying this does not demonstrate the existence of a non-physical little person inside.

    Just in order to understand the true meaning of the metaphor:

    does a cat intend in the same way as an individual E. coli maintaining itself in an optimum nutrient concentration by “tumble and run”?
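    The “tumble and run” case is worth making concrete, because it shows why it tempts us into intentional language: a purely mechanical feedback rule, storing nothing but the last sensor reading, still produces behavior that looks goal-directed. Here is a minimal one-dimensional sketch — entirely my own toy model, not real chemotaxis; the gradient, function names, and parameters are all assumptions for illustration:

    ```python
    import random

    def concentration(x):
        # Toy nutrient field: highest at x = 0, falling off linearly (an assumption).
        return -abs(x)

    def run_and_tumble(start, steps, seed=0):
        # Mechanical rule: keep the current heading while the reading improves ("run"),
        # pick a random new heading when it gets worse ("tumble").
        # No goal, plan, or map is stored anywhere — only the previous reading.
        rng = random.Random(seed)
        x = start
        heading = rng.choice((-1, 1))
        last = concentration(x)
        for _ in range(steps):
            x += heading
            now = concentration(x)
            if now < last:
                heading = rng.choice((-1, 1))  # things got worse: tumble
            last = now
        return x

    final = run_and_tumble(start=40, steps=500)
    print(final)  # deterministic given the seed; ends closer to the peak (x = 0) than it started
    ```

    An observer watching the trajectory would naturally say the agent “wants” to reach the nutrient peak, even though no step in the loop refers to the peak at all — which is exactly the gap between a description in intentional terms and the underlying mechanism that this thread is arguing over.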

  21. I think that cats (and dogs, and cows, and most birds and mammals) have “intentional content” — they have thoughts about their environments. They just can’t have the same kinds of thoughts that we can have. That’s what I’m getting at when I say that they are sentient but not sapient.

    I just don’t know what to say about bacteria.

  22. petrushka:
    What is implied by the phrase “system of feedback loops”?

    Are you asserting that you can know from first principles what can be accomplished by a physical system?

    No, not from first principles, but from knowledge of what the parts of the physical system can accomplish within the limits of the physical laws.

    petrushka:
    From what do you derive your list of what can and cannot be derived from physical elements?

    Naturalism implies that physical elements follow physical laws.

  23. keiths:

    I think categorization is a kind of computation, and I suspect Dennett would agree.

    Neil:

    The difference between us here, is that you see categorization as something that is done with data, while I see categorization as interaction with the world that is prior to having data.

    Cortical neurons don’t wrap their slimy dendrites around an object to decide whether it is round or angular. They operate on sensory data carried into the brain by nerves.

  24. Blas: Naturalism implies that physical elements follow physical laws.

    More precisely, it says that physical stuff is constrained by physical laws. Brains can’t violate the laws of physics and chemistry. That’s quite different from saying that biological and psychological phenomena can be deduced from the laws of physics and chemistry.

  25. Kantian Naturalist:
    I think that cats (and dogs, and cows, and most birds and mammals) have “intentional content” — they have thoughts about their environments. They just can’t have the same kinds of thoughts that we can have. That’s what I’m getting at when I say that they are sentient but not sapient.

    I just don’t know what to say about bacteria.

    Having a thought is a very different thing from feeling a sensation. In order to have a thought you need to correlate two concepts in a statement. Do you think dogs can do that? Couldn’t the behavior of dogs be a chain of conditioned reflexes?

  26. Kantian Naturalist: More precisely, it says that physical stuff is constrained by physical laws. Brains can’t violate the laws of physics and chemistry. That’s quite different from saying that biological and psychological phenomena can be deduced from the laws of physics and chemistry.

    But biological and psychological phenomena are constrained by the physics and chemistry of their effectors.

  27. Blas: Having a thought is a very different thing from feeling a sensation. In order to have a thought you need to correlate two concepts in a statement. Do you think dogs can do that? Couldn’t the behavior of dogs be a chain of conditioned reflexes?

    I have a fairly odd view about this topic — I think that it’s a knee-jerk anti-anthropomorphism that says that attributing content to human minds is acceptable, but attributing content to animal minds is off-limits. That strikes me as a combination of excessive anxiety about anthropomorphism in cognitive ethology and rationalization about our treatment of animals in medical research and food production.

    The really interesting question is not “do they think?” but “what kinds of thoughts can they have?”

    Now, you’re perfectly right that we commonly think about thoughts in terms of combinations of concepts. But let’s slow down a moment here: the combination of two (or more) concepts is a good way of thinking about judgment or assertion. In saying that animals think, I am saying that they use concepts, but I’m not saying that they judge.

    Instead, they deploy concepts by virtue of classifying similar objects in their environment as being of the same kind, and can recognize similarities and differences amongst perceptual features. They have “simple concepts” that they use in having “simple thoughts”. And those aren’t judgments, assertions, or (to use a closely related term) statements. But it is still a kind of semantic or intentional content nevertheless!

  28. KN:

    For similar reasons, I think that a suitably “envatted” brain — a brain that had some categorical structure encoded in its synaptic patterns, and was wired up to the right inputs and outputs, receiving information from an environment (presumably a computer) — would be a component in a semantic engine — only the semantic engine would be the brain+vat+computer, rather than, as in our case, the brain+body+environment. However, if the computer inputs were such as to produce the appearance of a body+environment system, then the brain’s thoughts, although genuine thoughts, would not be about its actual, causal situation. It would be as systematically deceived as Descartes’ res cogitans would be if there were an ‘evil genius’ afoot.

    You’re overlooking the fact that thoughts can be meaningful without referring to specific objects in the external world. We can think meaningful thoughts about angels or pink unicorns despite their nonexistence. Also, Descartes’ cogito is meaningful even if he is being deceived by an evil demon.

  29. Kantian Naturalist:

    Instead, they deploy concepts by virtue of classifying similar objects in their environment as being of the same kind, and can recognize similarities and differences amongst perceptual features. They have “simple concepts” that they use in having “simple thoughts”. And those aren’t judgments, assertions, or (to use a closely related term) statements. But it is still a kind of semantic or intentional content nevertheless!

    How can they reach concepts without judging? Aren’t those simple thoughts conditioned behavior?

  30. keiths: You’re overlooking the fact that thoughts can be meaningful without referring to specific objects in the external world. We can think meaningful thoughts about angels or pink unicorns despite their nonexistence. Also, Descartes’ cogito is meaningful even if he is being deceived by an evil demon.

    I can see why you think I overlooked the fact of ‘intentional inexistence,’ as Brentano called it. It all depends on how we’re describing the envatted brain:

    (1) the envatted brain is kept alive in a merely biological sense — oxygen and nutrients are fed into it, and waste is taken out, but there are no sensory inputs or motor outputs;

    (2) the envatted brain has electronic sensory inputs and outputs, connected to a computer, that is constantly sending information to the brain and receiving information from it.

    I had (2) in mind when I said that the envatted brain would have genuine thoughts. I’d have to think a bit about (1), but my prejudice is to say that it wouldn’t have any thoughts at all. It might have consciousness, in the sense of mere awareness, but I’m not confident that it would even have intentional inexistence, like what we have when we think about unicorns or angels. And I’m not even terribly confident about that at all — in part because I don’t think that the Cartesian cogito is a correct conception of what mindedness is. I don’t buy Cartesian skepticism for a minute (and I don’t think Descartes did, either, but that’s a different topic).

    Blas: How can they reach concepts without judging? Aren’t those simple thoughts conditioned behavior?

    As I explained, animal minds count as having concepts — even though they do not judge — because they classify and recognize features of their environment. It’s not just stimulus-response because there is mediation going on there — “all is not dark within”. (There is something it is like to be a cat — or a bat!)

    Anyone who thinks that animals (or babies) are just fancy automata is going to have to explain why they don’t think of human beings the same way — and if they insist on some major ontological gulf between human beings (qua bearers of intentional content) and animals (as fancy machines), it must be pointed out how that gulf is inconsistent with the basic commitment to continuity (though of course not smooth continuity!) in Darwinism. One might, just perhaps, be a Kantian and a Darwinian — such is my ambition, obviously — but one cannot be a Cartesian and Darwinian.

  31. “Shallit is doing what I think of as “definition trolling,” in which the perpetrator enters a discussion, demands an overly precise definition of a term, and then dismisses the entire conversation as “meaningless” without it.”

    I’ll remember that when next it happens to me here. Low level definition trolling abounds at TSZ. Shallit was being responsible and professional imo.

  32. Gregory,

    Low level definition trolling abounds at TSZ. Shallit was being responsible and professional imo.

    Seeking clarification is fine when it’s relevant to the discussion, but shallit’s questions about varves and the “units of intentionality” were irrelevant and trollish.

  33. keiths: Cortical neurons don’t wrap their slimy dendrites around an object to decide whether it is round or angular. They operate on sensory data carried into the brain by nerves.

    Signals are not data.

  34. keiths: Seeking clarification is fine when it’s relevant to the discussion, but shallit’s questions about varves and the “units of intentionality” were irrelevant and trollish.

    I disagree; they weren’t deep or insightful, but if someone is new to the vocabulary, they’re not bad questions. Certainly not trollish!

  35. KN,

    My point is that thoughts needn’t correspond to external reality in order to be meaningful. Mathematical thoughts, for example, are meaningful whether or not one is a Platonist.

    I see no reason why mathematical thoughts should suddenly become impossible merely because the brain’s sensory input is temporarily interrupted.

    If mathematical thoughts can continue during such an interruption, then the brain by itself can be a semantic engine and not merely a syntactic engine.

  36. KN,

    Those questions aren’t inherently trollish, but they were certainly trollish in context. Shallit claimed that the discussion was meaningless because intentionality hadn’t been explicitly defined, within the thread, to his arbitrary standards.

    It’s as if I were to butt in on a mathematical discussion between shallit and one of his colleagues, saying the discussion was meaningless because they hadn’t defined recursion to my satisfaction, and such a vague concept “could not be reasoned about effectively“.

  37. I suppose I could be a troll, but I still don’t know what the topic is about. Most philosophical discussions seem to be about people talking past each other.

  38. davehooke: KN, any more thoughts on the EAAN and my reply?

    I agree with you that Plantinga thinks that only theological metaphysics can provide an adequate account of the reliability of cognition, but I’m not entirely clear on why you think (as I understand you) that his metaphysics is already at work in setting up the EAAN itself.

    It’s pretty clear that Plantinga intends the EAAN to proceed as follows: begin with a metaphysically-neutral conception of our epistemic situation — one that would be accepted by all, naturalists and theists alike — and then show that naturalism cannot adequately account for our epistemic situation, whereas theism can.

    My objection isn’t that Plantinga’s conception of our epistemic situation has covert or implicit metaphysical biases — which seems to be your objection? — but rather that Plantinga’s conception of our epistemic situation has its own specifically epistemological and semantic biases, which are not justified by what we know about neuroscience and cognitive ethology, and so on. But any good naturalist is going to appeal to neuroscience and ethology in arriving at a theory of semantic content and cognitive function, so she wouldn’t endorse Plantinga’s conception of our epistemic situation in the first place.

    In other words, it’s not that Plantinga smuggles theism into the description of the problem, but that he describes the problem in such a way that theism is the most reasonable (only?) solution.

    (OK, now that I’ve said that, I’m not so sure how much of a difference that makes!)

  39. petrushka,

    Saying you don’t understand the discussion isn’t trollish. If you were to claim without evidence that the discussion was meaningless, as shallit did, then that would be trollish.

  40. It seems to me that comments about intentionality being ill defined are hasty, when the first two results on Google give a perfectly good definition:

    “the power of minds to be about, to represent, or to stand for, things, properties and states of affairs.”

    Questions I have are: Can we talk about intentionality as the ability to assign meaning to representations? Symbols? If yes, are the assumptions involved in reducing the definition to “the ability to do semantics” unhelpful, or just plain wrong? Does derived intentionality boil down to the symbols or representations to which we assign meaning?

  41. I may be question-begging or engaging in circular reasoning, but I still think the most productive approach is to ask how symbols (language) function. What evolutionary path did they take?

    I’m thinking that sign language may shed some light on the topic. Signs are actions. We can see them. They involve moving body parts.

    Spoken language and thinking are assumed to be mostly invisible, and so it is easy not to think of them as actions. One can entertain the possibility that thoughts are disembodied.

    I would argue that language is social behavior, and that thinking is a kind of rehearsal of social behavior.

    Language in the academic world is not usually thought of as an elaboration of social gestures, but I suspect it is. Human language is distinctive because it has syntax and because it can transcend time and place, but it is still largely social gesture.

    A disinterested observer, not knowing any human language, and observing internet debates, might come away thinking them equivalent to the screeching match in the movie 2001.

  42. petrushka:
    I suppose I could be a troll, but I still don’t know what the topic is about. Most philosophical discussions seem to be about people talking past each other.

    I’m in the same boat, petrushka. I’ve never looked at the Google definition or the Wikipedia entry on intentionality, and maybe I should, but I was using a working definition of the concept long before there was an internet to present these things to me as authoritative.

    In my working definition, intentionality was always a goal-directed motivation, a trigger for specific forms of behavior. The intention of a lion stalking its prey is to capture that prey and bring it down. The intention of a man plying a woman with drinks and compliments is to convince her to have sex with him.

    Intentionality requires some kind of conscious modeling of the environment, and some kind of potential for physical behavior — even a brain in a vat can formulate plans about what to do if it ever gets a body returned to it.

    The reason I think concepts like this are important is that they tie in to so much about what makes humans the way we are. Our brains seem to have evolved to seek out intentionality in others, and are so eager to do it that they will assign it even where there is none. This might even be related to the roots of religion. So trying to dismiss the concept as so much semantic hand-waving really doesn’t add to any understanding of anything, I’m afraid.

    My take on this might be totally wrong, but in this case I’d rather be wrong and straightforward than overthinking it out of existence.

  43. Kantian Naturalist

    If it is possible that empirical enquiry is the only way there can be justified true belief about contingent, physical states of affairs in our universe, then it is fruitless to frame the argument so that capital-T Truth must be the arbiter of the question of whether humans can be reliable knowers. This is epistemology, yes, but the justification of it is not and cannot be rationalism without metaphysics. For the argument is not that naturalism is problematical, rather it is that it is probably untrue. Plantinga’s alternative to naturalism, theism, is the justification for his epistemology. It does seem that by the end of your comment you are agreeing with me.

    I agree with you that Plantinga thinks that only theological metaphysics can provide an adequate account of the reliability of cognition, but I’m not entirely clear on why you think (as I understand you) that his metaphysics is already at work in setting up the EAAN itself.

    It’s pretty clear that Plantinga intends the EAAN to proceed as follows: begin with a metaphysically-neutral conception of our epistemic situation — one that would be accepted by all, naturalists and theists alike — and then show that naturalism cannot adequately account for our epistemic situation, whereas theism can.

    My objection isn’t that Plantinga’s conception of our epistemic situation has covert or implicit metaphysical biases — which seems to be your objection? — but rather that Plantinga’s conception of our epistemic situation has its own specifically epistemological and semantic biases, which are not justified by what we know about neuroscience and cognitive ethology, and so on. But any good naturalist is going to appeal to neuroscience and ethology in arriving at a theory of semantic content and cognitive function, so she wouldn’t endorse Plantinga’s conception of our epistemic situation in the first place.

    In other words, it’s not that Plantinga smuggles theism into the description of the problem, but that he describes the problem in such a way that theism is the most reasonable (only?) solution.

    (OK, now that I’ve said that, I’m not so sure how much of a difference that makes!)

  44. petrushka:

    First year of undergraduate linguistics:

    Speech Acts. JL Austin’s How To Do Things With Words is a clear, interesting, and relatively easy (for philosophy) read.

    See also Sociolinguistics.

    I may be question-begging or engaging in circular reasoning, but I still think the most productive approach is to ask how symbols (language) function. What evolutionary path did they take?

    I’m thinking that sign language may shed some light on the topic. Signs are actions. We can see them. They involve moving body parts.

    Spoken language and thinking are assumed to be mostly invisible, and so it is easy not to think of them as actions. One can entertain the possibility that thoughts are disembodied.

    I would argue that language is social behavior, and that thinking is a kind of rehearsal of social behavior.

    Language in the academic world is not usually thought of as an elaboration of social gestures, but I suspect it is. Human language is distinctive because it has syntax and because it can transcend time and place, but it is still largely social gesture.

    A disinterested observer, not knowing any human language, and observing internet debates, might come away thinking them equivalent to the screeching match in the movie 2001.
