Naturalizing Teleology and Intentionality? (Must Nature Be “Disenchanted”?)

Over the past year or so, two very interesting books in the philosophy of nature have attracted attention outside of the ultra-rarefied world of academic discourse: Alex Rosenberg’s The Atheist’s Guide to Reality: Enjoying Life without Illusions and Thomas Nagel’s Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.  Both of these works have been extensively discussed in popular magazines, radio shows, blogs, and especially at Uncommon Descent.  Here, I want to briefly describe what I see going on and open up the topic for critical discussion.

Rosenberg’s central claim is that “the physical facts fix all the facts” — all the sociological, psychological, biological, and chemical facts that there are, are determined by the facts about fermions and bosons.  Anything that cannot be explained in terms of bosons and fermions just is not the case.  This means that there is no such thing as intentionality, teleology (purposiveness), or meaning — not really.  These notions turn out to be nothing more than cognitive short-cuts that our distant ancestors evolved in order to survive on the African savannah of the lower-to-mid Pleistocene.  Intentionality is no obstacle for Rosenberg’s naturalism, because there’s just no such thing.  (Interestingly, he does concede that mathematics is a problem for naturalism — why he makes the concession for mathematics, but not for intentionality, puzzles me greatly.)

Nagel, by contrast, takes the opposite view: that the unquestionable reality of intentionality and consciousness strongly suggests that disenchanted naturalism — Rosenberg-style naturalism — simply cannot be the whole story.   What he proposes instead is what he wants to call “natural teleology”: that there is a basic tendency at work in the cosmos towards living things with intentionality and consciousness.

Rosenberg insists that intentionality and teleology cannot be naturalized; Nagel insists that they must be.  But it is not really clear what “natural teleology” might mean, or whether intentionality could be naturalized.   So that’s what I want to explore a bit further.

On one common interpretation, intentionality and teleology require that there be “final causes”.   And final causes seem to be the sort of thing that the scientific revolution dispensed with — that’s what it means to say that the modern conception of nature is “disenchanted”.   But while it’s certainly true that the scientific revolution showed the uselessness of final causation for doing physics (and maybe chemistry), I submit that it did not show the uselessness of final causes for biology (let alone psychology, etc.).   So there’s one way of understanding the irreducibility of biology to physics: biology requires final causes, and physics does not.

However, that seems inadequate, because we still want to know where final causation, or purposiveness, comes from.  (This is the ‘hard problem’ that makes the problem of the origin of life seem so intractable.)   How can we naturalize final causes without reducing final causes to efficient causes?   (I’m open to different ways of framing the problem than this one, of course.)

A provisional solution: final causes can be explained in terms of efficient causes.  “Wait a minute!”, one might say, “doesn’t that just reduce them to efficient causes, in that case?”   And to that my answer is “no.”   We would be reducing final causes to efficient causes if we showed that we could replace all talk of final causes with talk of efficient causes without any loss of predictive success or explanatory power.  But that is a kind of conceptual analysis.  What I am proposing instead is to explain final causes in terms of efficient causes: not to reject final causes or eliminate them from our scientific world-view, but to show how final causes, which are, I submit, indispensable for biological and psychological explanations, can be made sense of within the modern scientific world-view.

Nature can be, perhaps, partially re-enchanted after all.

73 thoughts on “Naturalizing Teleology and Intentionality? (Must Nature Be “Disenchanted”?)”

  1. keiths,

    …it’s not intentionality because one important feature of intentionality is linguistic.

    I should think that Laura Bridgman and Helen Keller offer strong evidence of intentionality existing prior to language.

  2. I’m a “take the world as it comes”-ist, if there is such a thing.

    Sounds like you’re a closet pragmatist.

  3. Certainly a pragmatist, though not necessarily what philosophers define as pragmatist. But also a behaviorist of sorts (but not the Skinner sort).

  4. rhampton,

    That’s KN’s claim, not mine.

    I think perception is generally non-linguistic.

  5. keiths: That’s KN’s claim, not mine.

    I think perception is generally non-linguistic.

    No, rhampton quoted me out of context — what I’d said was

    One kind of intentionality, the intentionality of propositional attitudes, is what I call discursive intentionality, and it is “at home” in the linguistic community as a whole. The other kind of intentionality, the intentionality of perception and action, is what I call somatic intentionality, and it is “at home” in the lived body.

    My view is that perception is intentional and non-linguistic, because it is bodily.

    So, I’ve been reading recent scholarship on Merleau-Ponty and on Sellars this evening, and trying to figure out just why it is that I don’t want to say that the robot’s modeling of its environment — what Sellars would call “picturing” — counts as perceptual-practical intentionality — what Merleau-Ponty would call “motor intentionality”. And here’s what I’ve come up with.

    I can insist on a distinction here only by parting ways (quite drastically, maybe?) with Dennett, and insisting that intentionality is only a first-person and second-person concept — part of what I do in describing my own perceptual and conceptual experience, and also what we do in our perceptual and conceptual experience. It is not a concept that figures in a third-person, perspective-neutral description of the world — whereas ‘picturing’ (= Churchland’s ‘mapping’, = Richard Wein’s “modeling”) is such a concept — as his example of the robotic vacuum-cleaner makes reasonably clear.

    (Corollary: when it comes to animals and babies, we do not ascribe somatic intentionality to them, but rather the whole of their being-in-the-world-with-us manifests their own modes of somatic intentionality.)

    I don’t think this is all that satisfying, but it’s a start (?).

  6. Kantian Naturalist,
    Is discursive intentionality learned only after acquiring language? And if so, is it necessary to first have intentionality of perception and action to learn language?

  7. rhampton: Is discursive intentionality learned only after acquiring language? And if so, is it necessary to first have intentionality of perception and action to learn language?

    I would say, “probably” and “definitely”, respectively.

  8. KN,

    You did say that Richard’s robotic vacuum example was not an example of intentionality because it was not linguistic:

    I agree that this kind of modelling of an environment is part of what is usually meant by ‘intentionality’, but I’m far more inclined to say that while this is a useful analogy for how animals represent their environments, it’s not intentionality because one important feature of intentionality is linguistic.

  9. Newborns imitate tongue protrusion, mouth opening, lip pursing, sequential finger movements, blinking, and vocalization of vowel sounds. They also imitate happy, sad, and surprised facial expressions, widening their lips in response to a happy face, protruding lower lips in response to a sad face, and opening eyes and mouth wide in response to an expression of surprise. Imitation of facial gestures has been demonstrated in infants as young as 42 minutes. At birth they not only perceive facial expressions in others, but also map those expressions onto their own faces – which they have never seen!

  10. keiths:
    KN,

    You did say that Richard’s robotic vacuum example was not an example of intentionality because it was not linguistic:

    Fair enough — I did say that — but I shouldn’t have, and it’s not what I really think. (Man, you guys are tough! I like that!)

    I think that Richard’s robotic vacuum-cleaner doesn’t display somatic intentionality, of the perceptual-practical kind, because — well, now I wonder how to say this in a way that isn’t just begging the question! — but what I would like to say is that the robot is “just a machine” — it’s not a living organism — and so the objects it encounters cannot count as meaningful or significant for it, as they do for living things that have somatic intentionality.

    As the late, great Gilbert Harman once said, “I’d like to make a distinction here, but I can’t think of one.”

  11. KN,

    I think that Richard’s robotic vacuum-cleaner doesn’t display somatic intentionality, of the perceptual-practical kind, because — well, now I wonder how to say this in a way that isn’t just begging the question! — but what I would like to say is that the robot is “just a machine” — it’s not a living organism — and so the objects it encounters cannot count as meaningful or significant for it, as they do for living things that have somatic intentionality.

    How about robots that look for electrical outlets and plug themselves in when they need recharging?

  12. keiths: How about robots that look for electrical outlets and plug themselves in when they need recharging?

    That would be getting warmer!

    Look, I don’t think there’s any in principle reason why we couldn’t design and build robots that would display somatic intentionality and/or discursive intentionality. In the first case, they would be synthetic or artificial animals; in the second case, synthetic or artificial persons.

    We’ll almost certainly see synthetic animals within the next ten years — well, I’m just guessing blindly, not knowing anything about robotics — and once that threshold is passed, synthetic persons won’t be far behind.

    If you like, I could say that “original intentionality” can result from either evolution or technology.

  13. I’m just trying to understand exactly what would distinguish a “synthetic animal” from a robot (besides the fact that you would ascribe original intentionality to the former but not the latter! :)).

  14. Neil Rickert:
    I’m not persuaded that’s an account of intentionality (aboutness). It is subject to the criticism that it is only using derived intentionality, and not original intentionality. In this case, we would look to the robot’s designers for the original intentionality.

    I only claimed to be giving an account of aboutness, not intentionality, as I find the latter term too ill-defined. I see no reason to think that there is any such thing as “original intentionality” in need of an explanation.

    I gave an example of a simple “about” fact: the robot acquires information about its environment. Since this is a case of something being about something, I think it’s perfectly reasonable to call it a case of aboutness.
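
    Here is a minimal, purely illustrative sketch (in Python, with invented names; it is not any actual robot’s code or API) of the kind of situation described above: the stored map serves as the robot’s model of the room, and its next move is computed from that model rather than from the room itself.

```python
# Hypothetical sketch: a toy robot that records sensor readings as a map of
# obstacle locations (its "model" of the room) and chooses moves from that map.
# Class and method names are invented for illustration only.

class ToyRobot:
    def __init__(self):
        # First-order model: grid cells the robot currently takes to be blocked.
        self.obstacle_map = set()

    def sense(self, readings):
        """Store sensor readings, given as (x, y) obstacle coordinates."""
        for x, y in readings:
            self.obstacle_map.add((x, y))

    def plan_step(self, position, goal):
        """Pick an adjacent cell the map says is free and that is nearest the goal."""
        x, y = position
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        free = [c for c in candidates if c not in self.obstacle_map]
        if not free:
            return position  # every neighbour is mapped as blocked; stay put
        return min(free, key=lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]))


robot = ToyRobot()
robot.sense([(1, 0), (0, 1)])           # data "about" the environment
print(robot.plan_step((0, 0), (3, 3)))  # a move computed from the model, not the room
```

    Whether such a stored map counts as “aboutness” in the philosophically loaded sense is, of course, exactly what is at issue in this thread.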

    I try to stick to ordinary language terms (such as “about”) as much as possible, as we can understand their meaning by reference to our ordinary experience of language. I find that philosophical terms of art often (not always) create more confusion than they are worth, because their meaning is too unclear. I think “intentionality” is one of those. If you don’t mean “aboutness” in the obvious sense, but are merely using it as another ill-defined philosophical term of art equivalent to “intentionality”, then I think it just adds to the confusion instead of reducing it.

    P.S. Perhaps it would help if you gave an example of a specific fact that you think is in need of explanation. “We have original intentionality” is not specific enough for us to agree that it’s a fact.

  15. Kantian Naturalist: Thus, facts have an intentional structure — they are assertions that refer to or are about states of affairs. But, ex hypothesi, intentionality has no place within a physicalist ontology.

    I think that, once again, the concept of “intentionality” is being allowed to obscure the issue unnecessarily. Why not leave “intentionality” out of it, and simply talk about facts, assertions, states of affairs, etc? These are familiar ordinary-language concepts that we can use with relatively little problem. Then we can simply ask whether the fact that we make assertions about states of affairs is a problem for physicalism. And I will argue that it isn’t. “Intentionality” is an unnecessary and confusing middle-man.

    Cut out the middle-man; pour your beer straight down the toilet!

  16. Kantian Naturalist:

    I think that Richard’s robotic vacuum-cleaner doesn’t display somatic intentionality, of the perceptual-practical kind, because — well, now I wonder how to say this in a way that isn’t just begging the question! — but what I would like to say is that the robot is “just a machine” — it’s not a living organism — and so the objects it encounters cannot count as meaningful or significant for it, as they do for living things that have somatic intentionality.

    Well, I would agree that my simple robot doesn’t have the same sort of meaningful inner life that a human does. But as far as I can tell that isn’t what philosophers are usually referring to when they say “intentionality”. I can’t help feeling that in this discussion the word “intentionality” has become a catch-all for referring to quite a variety of different things!

    P.S. On second thoughts, maybe this is not so different from what philosophers usually mean. But an awful lot depends on what you mean by “meaningful or significant”. In a very basic sense, the vacuum cleaner’s data do mean something to it. It’s able to interpret those data as the locations of objects, and act accordingly. On the other hand, it doesn’t have such thoughts as, “Phew. I narrowly escaped bumping into that chair!”.

  17. To us, it is about the environment. To the robot itself, it isn’t about anything. That’s roughly the distinction between derived intentionality and original intentionality.

  18. Why not leave “intentionality” out of it, and simply talk about facts, assertions, states of affairs, etc?

    I’m all for that, if you can manage it.

    There is a lot of literature which seems to consist of completely circular definitions of “fact”, “truth”, “state of affairs” etc. The definitions fail to break out of the circle.

    I did spend time trying to define fact (i.e. break out of that circle). And that’s part of why I said “there are no physical facts” in an earlier comment.

  19. Kantian Naturalist,

    Kantian Naturalist: Look, I don’t think there’s any in principle reason why we couldn’t design and build robots that would display somatic intentionality and/or discursive intentionality. In the first case, they would be synthetic or artificial animals; in the second case, synthetic or artificial persons.

    Would synthetic persons also be aware/conscious of the “meaning” of their thoughts in Block’s sense of “access consciousness”? Or is that more than intentionality?

  20. Neil Rickert:
    To us, it is about the environment. To the robot itself, it isn’t about anything. That’s roughly the distinction between derived intentionality and original intentionality.

    The collected data are about the environment “to the robot” because it’s the robot which is modelling its environment and acting on that model. The model would be a model of the environment (about the environment) even if we did not exist.

    P.S. I put “to the robot” in scare quotes because “it’s about X to Y” isn’t ordinary language, and its meaning is unclear. I would rather just say that the robot has data about its environment by virtue of the fact that it is modelling its environment. The robot is the only system that’s relevant here. We outside observers are irrelevant to the fact that the data are about the environment. We are just the ones describing the situation. But that’s no more relevant than that we are the ones who say the Earth is round. I don’t think you would claim that the Earth is only round “to us”.

    Of course I don’t deny that we humans have far more sophisticated models than the robot. For a start, we have language and are able to say, “the data are about X”. This is a kind of second-order model: it’s about aboutness.

  21. Kantian Naturalist/Emergentist,

    Thanks for your reply. I’m in another country now on a research visit, so a bit slow to respond.

    You are welcome for the Weber link. I find it quite important. As well, the contemporary discussions of ‘secularisation’ and ‘disenchantment’ by folks such as David Martin and Charles Taylor, among a significant number of others, show a new view of Weber’s ‘disenchantment/re-enchantment’ thesis. I wrote my undergrad thesis on this topic as it relates to higher education reforms.

    Thanks also for a bit of your background wrt studies in German HPS and philosophy of social sciences. It is a topic often under-estimated and forgotten in most American discussions as far as I’ve noticed. Taylor’s philosophy of social sciences is of course an exception, though the Eastern European (usually more ‘east’ than Germany) contributions imo far exceed his.

    “I don’t see how explaining final causes in terms of efficient causes undermines the methodological autonomy of the social sciences or the reality of the phenomena disclosed by those methods.”

    It’s a ‘nature/nurture’ conversation usually in N. America. Final causes have been exiled in natural-physical sciences, but are unavoidable in human-social sciences (though IDists are literally ‘dumb’ to this reality). It is interesting that you speak of “the methodological autonomy of the social sciences” because quite a number of people here argued with me several months ago against that very possibility. It does seem to me that you are more sensitive to the ‘multiple methods’ understanding of ‘science’ than many trained as natural scientists in N. America who remain stuck in a single ‘THE scientific method’ myopia. Yet I wonder how you navigate the so-called MN vs. MN English-language debate, i.e. between ‘metaphysical naturalism’ and ‘methodological naturalism,’ which, e.g. Lizzie says she still doesn’t understand.

    “Why doesn’t emergentism cut it?”

    There are several questions you didn’t answer in your earlier thread on ’emergentism,’ including from me, which take precedence over your asking me now to do the work for you on ’emergentism.’ So, I’m not biting here.

    Yes, I agree – Adorno is a curious figure. I’ll have to address him directly again in a debate with colleagues next year. For now, however, I note the distinction between ‘nature’ and ‘culture’ that many have made (and still do make). The ‘disenchantment of nature’ may be tied to a “rehabilitation of teleology,” as you call it. But I’d take Berdyaev or Solovyev over Goethe almost any day of the week, and would certainly take Hayek on ‘proper and improper teleological fields’ over Adorno. Do advise me if you think otherwise.

    And I really do think you should have a look at Bhaskar, that is, if you’re curious how critical realism, scientific realism and transcendentalism (Kant et al.) may have an impact on the emergentism that you desire to promote (naturalistically or otherwise).

    Btw, I notice that no one in the comments took up the verb ‘naturalise/naturalizing’ as you used it in the title of the thread. Why do you think that is?
