A Critique of Naturalism

The ‘traditional’ objections to a wholly naturalistic metaphysics, within the modern Western philosophical tradition, involve the vexed notions of freedom and consciousness.  But there is, I think, a much deeper and more interesting line of criticism of naturalism, one that involves the notion of intentionality and the closely related notion of normativity.

What is involved in my belief that I’m drinking a beer as I type this?  Well, my belief is about something — namely, the beer that I’m drinking.  But what does this “aboutness” consist of?  It requires, among other things, a commitment that I have undertaken — that I am prepared to respond to the appropriate sorts of challenges and criticisms of my belief.  I’m willing to play the game of giving and asking for reasons, and my willingness to be held accountable in this way is central to how others regard me as their epistemic peer.  But there doesn’t seem to be any way that the reason-giving game can be explained entirely in terms of the neurophysiological story of what’s going on inside my cranium.  That neurophysiological story is a story of what is the case, and the reason-giving story is essentially a normative story — of what ought to be the case.

And if Hume is right — as he certainly seems to be! — in saying that one cannot derive an ought-statement from an is-statement, and if naturalism is an entirely descriptive/explanatory story that has no room for norms, then in light of the central role that norms play in human life (including their role in belief, desire, perception, and action), it is reasonable to conclude that naturalism cannot be right.

(Of course, it does not follow from this that any version of theism or ‘supernaturalism’ must be right, either.)


A Matter of Faith

For those interested in creationism and the culture wars, I bring to your attention the forthcoming A Matter of Faith (trailer here).  [And could someone explain to me how to post the trailer directly in my post?]

Several things fascinate me about this development, among which are:

(a) comments on the YouTube video are disabled, so there can be no debate about a movie which centers on a debate;

(b) this movie is produced and endorsed by Answers In Genesis, which explicitly refused to endorse “God’s Not Dead” in their review of it (which comes through, in their terms, in the conflict between “evidential” and “presuppositional” apologetics);

(c) the same aesthetics as “God’s Not Dead” (on which, see here for a brilliant and nuanced assessment of how these kinds of films work);

(d) the culmination of the “teach the controversy” strategy.  I found out earlier today that the pedagogy of “teach the controversy” was developed for dealing with conflicting interpretations of literary texts (source here).  It fell to Phillip Johnson to import a pedagogical strategy perfectly suited to the humanities — “teach the controversy” — into the sciences. This leads to what strikes me as the right-wing version of Rorty’s collapse of the humanities/sciences distinction. This is the epistemic apocalypse — there is no knowledge, it’s all just “faith”. Which is kind of a bad thing for a culture with a knowledge-driven economy . . .


The ‘Hard Problem’ of Intentionality

I’m starting a new thread to discuss what I call “the hard problem of intentionality”: what is intentionality, and to what extent can intentionality be reconciled with “naturalism” (however narrowly or loosely construed)?

Here’s my most recent attempt to address these issues:

McDowell writes:

Consider this passage from Dennett, Consciousness Explained, p. 41: “Dualism, the idea that the brain cannot be a thinking thing so a thinking thing cannot be a brain, is tempting for a variety of reasons, but we must resist temptation . . . Somehow the brain must be the mind”. But a brain cannot be a thinking thing (it is, as Dennett himself remarks, just a syntactic engine). Dualism resides not in the perfectly correct thought that a brain is not a thinking thing, but in postulating some thing immaterial to be the thinking thing that the brain is not, instead of realizing that the thinking thing is the rational animal. Dennett can be comfortable with the thought that the brain must be the mind, in combination with his own awareness that the brain is just a syntactic engine, only because he thinks that in the sense in which the brain is not really a thinking thing, nothing is: the status of possessor of intentional states is conferred by adoption of the intentional stance towards it, and that is no more correct for animals than for brains, or indeed thermostats. But this is a gratuitous addition to the real insight embodied in the invocation of the intentional stance. Rational animals genuinely are “semantic engines”. (“Naturalism in Philosophy of Mind,” 2004)

Elsewhere McDowell has implied that non-rational animals are also semantic engines, and I think this is a view he ought to endorse more forthrightly and boldly than he has. But brains are, of course, syntactic engines.

Continue reading

Does Atheism Entail Nihilism?

I take it that most (though not all) non-theists assume that atheism does not entail nihilism.  More specifically, most non-theists don’t believe that denying the existence of God or the immortality of the soul entails that truth, love, beauty, goodness, and justice are empty words.

But as we’ve seen in numerous discussions, the anti-materialist holds that this commitment is not one to which we are rationally entitled.  Rather, the anti-materialist seems to contend, someone who denies that there is any transcendent reality beyond this life cannot be committed to anything other than affirmation of power (or maximizing individual reproductive success) for its own sake.

The question is, why is the anti-materialist mistaken about what non-theists are rationally entitled to?   (Anti-materialists are also welcome to clarify their position if I’ve mischaracterized it.)

What Are Concepts?

There’s a nice little discussion going on at Uncommon Descent (see here) about whether concepts are consistent with naturalism (broadly conceived).  Here I want to say a bit about which theories of concepts seem to me most promising, and to what extent (if any) they are compatible with naturalism.

The dominant position in philosophy of language treats concepts as representations: I have a concept of *dog* insofar as I am able to correctly represent all dogs as dogs.  It is crucial that concepts have the right kind of generality — that I am able to classify all particular dogs as exemplifying the same general property — in order for me to be properly credited with having the concept.  (If I only applied the term “dog” to my dog, it would be right to say that I don’t really have the concept *dog*.)

On the representationalist paradigm, rational thought has a bottom-up structure: terms are applied to particulars, terms are combined to form judgments about particulars, and judgments are combined to form arguments, explanations, and other forms of reasoning.
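
To make the bottom-up picture concrete, here is a purely illustrative toy sketch in Python. Nothing in it comes from the post or from the representationalist literature; the names (Particular, Concept, judge) and the feature strings are my own inventions. The point is only the structure: a concept is a single rule applied uniformly to particulars, and judgments are applications of that rule.

```python
# Toy sketch of the representationalist, bottom-up picture described above.
# Purely illustrative: Particular, Concept, judge, and the feature strings
# are invented for the example, not drawn from any actual theory.

from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class Particular:
    name: str
    features: FrozenSet[str]                # e.g. {"barks", "four-legged"}

@dataclass(frozen=True)
class Concept:
    label: str
    applies: Callable[[Particular], bool]   # one rule for *all* particulars:
                                            # the generality requirement

def judge(concept: Concept, particular: Particular) -> str:
    """Form the judgment 'x is (not) an F' by applying a concept to a particular."""
    if concept.applies(particular):
        return f"{particular.name} is a {concept.label}"
    return f"{particular.name} is not a {concept.label}"

# A crude 'dog' concept: the same rule is applied to every particular,
# not just to my own dog -- which is what earns credit for having the concept.
dog = Concept("dog", lambda p: {"barks", "four-legged"} <= p.features)

fido = Particular("Fido", frozenset({"barks", "four-legged"}))
felix = Particular("Felix", frozenset({"meows", "four-legged"}))

print(judge(dog, fido))    # Fido is a dog
print(judge(dog, felix))   # Felix is not a dog
```

Judgments built this way can then be combined into arguments and explanations, which is all the “bottom-up” claim amounts to in this toy.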

Continue reading

The Quest for Certainty

According to Arrington:

“We cannot know completely. Kurt Gödel demonstrated that even the basic principles of a mathematical system while true cannot be proved to be true. This is his incompleteness theorem. Gödel exploded the myth of the possibility of perfect knowledge about anything. If even the axioms of a mathematical system must be taken on faith, is there anything we can know completely? No there is not. Faith is inevitable. Deny that fact and live a life of blinkered illusion, or embrace it and live in the light of truth, however incompletely we can apprehend it.”

Unfortunately, Arrington is not even wrong.
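
For comparison — my own gloss, not anything from Arrington or from the rest of the post — here is a standard statement of Gödel’s first incompleteness theorem: if T is a consistent, effectively axiomatized formal theory that includes basic arithmetic, then there is an arithmetical sentence G_T such that

\[
T \nvdash G_T ,
\]

even though G_T is true in the standard model of arithmetic. The theorem concerns particular sentences that such a theory cannot decide; it says nothing about the axioms themselves being unprovable or held “on faith” (an axiom is trivially provable within its own system), let alone about the impossibility of “perfect knowledge about anything.”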

Continue reading

Science and Metaphysics

A perennial theme of my philosophical peregrinations is the difference between (and relation between) science and metaphysics.   This bears directly on the arguments made by creationists and design proponents.

Design proponents often try to distinguish themselves from both creationists and Darwinists by arguing that they alone are faithful to empiricism — “following the evidence wherever it leads” — whereas both creationists and Darwinists interpret the evidence through the lens of some a priori conceptual framework, a metaphysics.   (I take it to be false, and importantly false, that one can only hold metaphysics in a dogmatic fashion, and that empiricism is the enemy of metaphysics — though of course empiricism is the enemy of dogmatism, if one’s empiricism does not itself become dogmatic.)

Continue reading

Naturalism and Materialism

According to the dim vagaries of recollection, my furtive efforts to be taken seriously over at Uncommon Descent were frustrated due to the perception that I am an atheist.  (Curiously, when I explicitly said that I’d stopped referring to myself as an atheist, this was met with utter silence.)   I had read Nagel’s Mind and Cosmos, and despite my criticisms of the book, I thought it was promising in certain respects, and said as much.  (I also pointed out that some reviews were much more favorable than others, but they didn’t want to notice the favorable reviews, because that would disrupt their martyr-narrative.)  And more generally, I emphatically distanced myself from what I call the “Epicurean” interpretations of Darwinism, e.g. Monod and Dawkins.  But for the occasional exchange with a visitor to UD, this was met with silence or scorn from the UD regulars.

Imagine my surprise, then, when I saw today “Making common cause with non-materialist atheists”.  Dembski is now seeking to make common cause with Nagel by distinguishing between naturalism and materialism in terms of two different distinctions: naturalism/theism and materialism/teleology (“teleologism”?).  Interestingly, that’s pretty much the very same set of distinctions that got a distinctly chilly reception from the UD regulars, because I’m not a theist, let alone a Christian, and because I’m a pragmatist and not a rationalist.

It amuses me that Dembski is willing to countenance an intellectual alliance that the rank-and-file UD participants rejected.


Speculative Naturalism

The standard design-theorist argument hinges on the assumption that there are three logically distinct kinds of explanation: chance, necessity, and design.  (I say “explanation” rather than “cause” in order to avoid certain kinds of ambiguities we’ve seen worked out here in the past two weeks).

This basic idea — that there are these three logically distinct kinds of explanation — was first worked out by Plato, and from Plato it was transmitted to the Stoics (one can see the Stoics use this argument in their criticism of the Epicureans) and then re-activated in the eighteenth century and after, for example in the Christian Stoicism of the Scottish and English Enlightenment, of which William Paley is a late representative.  Henceforth I’ll call this threefold distinction “the Platonic Trichotomy.”

There are at least two different ways of criticizing the Platonic Trichotomy.  One approach, much favored by ultra-Darwinists, is to argue that unplanned heritable variation (“chance”) and natural selection (“necessity,” if natural selection is a “law” in the first place) together can produce the appearance of design.  (Jacques Monod was a proponent of this view, and perhaps Dawkins is today.)  The other approach, which I prefer, is to reject the entire Trichotomy.

To reject the Trichotomy is not to reject the idea that speciation is largely explained in terms of the feedback between variation and selection, but rather to reject the idea that this process is best conceptualized in terms of “chance” and “necessity.”

So what’s the alternative?  What we would need here is a new concept of nature, one not beholden to any of the positions made possible by the conceptual straitjacket that the Trichotomy imposes.

Why Metaphysics is (Almost) Bullshit

I have finally finished reading Robert Brandom’s massive tome (650 pp.) Making It Explicit, and it’s given me a lot of new tools with which to think about the nature of concepts and the relation between language, perception, action, and the world.  This is my first attempt to do something with what I’ve learned from Brandom.

It is crucial to Brandom’s account that conceptual content — what our thoughts are about — is constrained in two different ways: normatively and causally.  Normative constraint is, for Brandom, essentially and fundamentally social and linguistic.  For a community of speakers, each speaker holds herself and the others accountable for what they say by keeping track of the compatibility and incompatibility of their commitments and entitlements. (If I assert p, and p implies q, then I am committed to q.  If I assert p, and p implies q, but I am already committed to ~q, then I am not entitled to assert p.  And so on.)  Keeping track of our own commitments and entitlements, and those of others, is a process that Brandom calls “deontic scorekeeping”: deontic from *deonta* (Greek, “duty”), what we ought to be committed to.  We keep score of what we ought to say.  Deontic scorekeeping is the only normative constraint on discursive statuses — what it is that we believe or desire.  The statuses — the beliefs and desires — are instituted by the attitudes of commitment, entitlement, acknowledgement, avowal, disavowal, and so on — and are only fully intelligible in those terms.
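
Since the two rules in the parenthesis above have a clean inferential shape, here is a minimal illustrative sketch in Python of what “keeping score” might look like as bookkeeping. It is my own toy, not Brandom’s formalism: the Scorekeeper class, its method names, and the atoms "p", "q", "~q" are all invented for the example.

```python
# Toy "deontic scorekeeping": track which claims a speaker is committed to,
# and check entitlement against the two rules quoted in the text --
# asserting p commits you to what p implies, and you are not entitled to
# assert p if it clashes with a prior commitment.

class Scorekeeper:
    def __init__(self):
        self.commitments = set()      # claims the speaker is committed to
        self.implications = {}        # claim -> set of claims it implies

    def add_implication(self, p, q):
        self.implications.setdefault(p, set()).add(q)

    def _consequences(self, p):
        """p together with everything it (transitively) implies."""
        seen, stack = set(), [p]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(self.implications.get(c, ()))
        return seen

    def entitled_to_assert(self, p):
        """Entitled unless some consequence of p contradicts a prior commitment."""
        negate = lambda c: c[1:] if c.startswith("~") else "~" + c
        return all(negate(c) not in self.commitments for c in self._consequences(p))

    def assert_claim(self, p):
        if not self.entitled_to_assert(p):
            print(f"not entitled to assert {p}: it clashes with prior commitments")
            return
        self.commitments |= self._consequences(p)   # committed to p and its consequences
        print(f"asserted {p}; commitments are now {sorted(self.commitments)}")

# Toy run: p implies q, but the speaker is already committed to ~q.
s = Scorekeeper()
s.add_implication("p", "q")
s.assert_claim("~q")   # asserted ~q; commitments are now ['~q']
s.assert_claim("p")    # not entitled to assert p: it clashes with prior commitments
```

The toy leaves out everything substantive in Brandom; it is only meant to make vivid how scorekeeping can be understood as bookkeeping over commitments and entitlements.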

Continue reading

The Possibility of Error

Since the possibility of error has been much discussed at Uncommon Descent, I thought it might be interesting to see how Josiah Royce develops his argument concerning “the possibility of error” in his The Religious Aspect of Philosophy (1885).  (I’m using The Philosophy of Josiah Royce, which I found recently in a used-book store. I assume that no one here is too concerned about quotations or citations, but those are available on request.)

Royce’s question here is, “how is error possible?” — and by ‘how’ he means, “what are the logical conditions for the possibility of error?”  An error, he points out, is the failure of a judgment to agree with its object.  How is it possible for us to recognize that our judgments have failed to agree with their purported objects?  The puzzle goes as follows: on the one hand, if the object were entirely within our cognitive grasp, our assertion about it would fully correspond to the object — in which case, there would be no error.  On the other hand, if the object were entirely beyond our cognitive grasp, we would be unable to recognize the lack of correspondence between the judgment and the object — in which case the error would be unrecognizable.  So our ability to recognize errors as errors requires that we have “partial knowledge” of the object.  But what is partial knowledge, and how is it possible?

[It will not surprise anyone here who knows how I think to learn that, from my point of view, the above is more-or-less sound, whereas the next bit utterly goes off the rails.]

What is required, Royce thinks, is that both the judgment and the object are contained within some larger, more inclusive thought that can compare them against one another and notice the correspondence (or lack thereof) between them.  And since there are infinitely many errors, the inclusive thought must be all-inclusive — it must contain all possible judgments and their objects.  And that in turn must be the Absolute Knowledge and Absolute Mind of God.  (Didn’t see that one coming, eh?)

TL;DR version: there are errors, therefore God.

On “Self-Evident Truths”

When one talks about a “self-evident truth,” what exactly is one talking about?

In one sense, it is “self-evidently true” that when I look at an object — say, this pint glass next to me as I type — I see that it is a pint glass.  It is “self-evidently true” that I am looking at a pint glass (putting aside worries about Cartesian demonic deception), because I do not perform an inference.  My perception of the pint glass is not the conclusion of an argument based on premises.  It is a paradigm case of non-inferential knowledge.

But in another sense, this perceptual knowledge is not “self-evident,” if by that we mean knowledge that does not depend on any further presuppositions.  For the contrary is the case: a great deal of background knowledge must be presupposed in order for me to see the pint-glass — for example, I must have the concept of “pint-glass” and know how to apply that concept.  Even the transparent cases of analytic propositions (“a vixen is a fox”, “the sum of the interior angles of a Euclidean triangle is 180 degrees”, “every effect has a cause”) presuppose as their respective background an adequate grasp of the concepts involved.

It is sometimes said that if a proposition is self-evidently true, then nothing can be done which would show it to be true to someone who denied it.  But this is not quite right.  What is right is that if a proposition is self-evidently true, then it cannot be demonstrated from other premises nor arrived at through generalization — it is not grounded in either deduction or induction, one might say.

But that does not mean that one cannot resort to all sorts of other arguments or thought-experiments that disclose that the proposition is self-evidently true.  A classic example of this is Descartes’s famous “I think, therefore I am.”  This is not the result of inference or observation, yet Descartes spends a great deal of time setting the stage to prepare the reader for this truth and to see it as self-evident.  For this reason, “I can’t convince you of this, because it’s self-evidently true” should not be accepted without criticism.

Given this distinction between non-inferential knowledge and presuppositionless knowledge, accepting the importance of the former does nothing to settle whether or not we ought to be committed to the latter.  The failure to see this is what Sellars called “the Myth of the Given,” which is the original sin of rationalism and empiricism alike.

The Semantic Apocalypse

The other night I came across this fascinating set of lectures about “the semantic apocalypse” — the thought being that the more we come to know about how the brain really works, the more it will seem as though meaning and intent are a sort of illusion — something that the brain generates in order to organize information — corresponding in no way to what’s really going on.  Since the brain is adapted to modeling what is going on in the external environment (including the social environment), it doesn’t need to be good at modeling itself.  So the categories we use to describe “mental phenomena”, such as “intentionality”, are just cognitive shortcuts we rely on to compensate for the brain’s lack of transparency to itself.

I found all three lectures quite fascinating.  I should warn you that the second lecture leans heavily on the work of Ray Brassier and Quentin Meillassoux, so it may seem somewhat off-putting at first.  I’ve only discovered their work recently myself, but I shall endeavor to respond to the best of my abilities to any questions that arise.

The Roles of Philosophy in An Age of Science

Lately, the conversations I’ve been having here and with friends on other sites have focused my attention on the question, “what is the role of philosophy in an age of science?”   (I have a long-standing interest in this question, as someone who pursued an undergraduate degree in biology and switched to philosophy for graduate study.)

Here are a few options that I think deserve to be taken seriously — though there are reasons for thinking that some of them are preferable to others.  (In coming up with this list I was inspired by Ian Barbour’s models of the relationship between science and religion.)

(1) total separation: science inquires into a posteriori truths, and philosophy inquires into a priori truths, so nothing that science has to say can affect philosophy, or the other way around.  (Another version of total separation puts the emphasis on the distinction between the descriptive project of science and the normative project of philosophy — “how ought we to live?” is not, at first blush, a scientific question.)

(2) conflict — philosophy makes claims about the human condition, experience, value, meaning (etc.) that are undermined by the causal explanations provided by science.   Under the conflict model, science takes priority over philosophy, or philosophy takes priority over science. For example, phenomenology took the position that a distinctive kind of philosophical inquiry was the foundation of the sciences and made the sciences possible.   (Though phenomenology might be better classified under separation than under conflict — it depends on the particular phenomenologist, perhaps.)

(3) dialogue — the sciences benefit from the reflective analysis practiced in philosophy for refining their basic concepts and assumptions, and philosophy benefits from the new empirical discoveries that science discloses.  So philosophers can contribute the metaphysics of physics or the epistemology of scientific inquiry, for example.

(4) integration — a fully philosophical science and a scientific philosophy.

I would position myself somewhere between (3) and (4) — I think that philosophy is most successful when it creates new conceptions that give voice to the problems and opportunities disclosed by new scientific discoveries*, e.g. re-conceiving the concepts of selfhood and autonomy in light of neuroscience, or re-conceiving the concepts of matter and causation in light of quantum physics.

* though not just new scientific discoveries — new kinds of artistic creations and political relations can and should also prompt the philosopher to create new concepts.

Philosophy: Call For Topics

I’ve been trying to think of some new posts on philosophical issues here, and I have a few too many ideas — some (if not most) of which would be of little interest, I conjecture, to most participants here.   So I turn it over to you: what topics, if any, would you like to see raised?

Here’s what I have in mind: people here make suggestions, I look them over and see which ones fall within my limited expertise, and then write up a post on that issue for framing discussion.

If that sounds good to you, then have at it!

Plantinga’s EAAN: Criticism and Discussion

Alvin Plantinga’s Evolutionary Argument Against Naturalism has attracted a great deal of serious critical discussion (e.g. Naturalism Defeated?) and has had a substantial impact on ‘popular’ appraisals of naturalism.  (For example, William Lane Craig frequently uses it, and it also appears in the dismissal of naturalism in The Experience of God.)  Many philosophers have pointed out various problems with the EAAN, and in my judgment the EAAN is not only flawed but fatally flawed.  Nevertheless, it’s a really interesting argument and it could be worth exploring a bit.  I’ll present the argument here and then we can get into it in comments if you’d like — though I won’t be offended if you’d rather spend your time doing other things!

The EAAN has gone through various iterations, but here’s the latest version, from Where the Conflict Really Lies: Science, Religion, and Naturalism (2011).  Intuitively, we regard our cognitive capacities — sense-perception, introspection, memory, reasoning — as reliable, where “reliable” means “capable of giving us true beliefs most of the time” (subject to the usual caveats).  Call this claim R (for ‘reliable’).   But how probable is R?

Suppose that one accepts evolution (E) but also affirms naturalism, defined here as the belief that there is no God or anything like God (N).  What is the probability of R, given N&E?  One might think it’s quite high.  But Plantinga argues that, however high we might intuitively take the probability of R to be, the probability of R given N&E is low or inscrutable.  Why’s that?

Now, here’s the key move (and in my estimation, the fatal flaw): beliefs are invisible to selection.  Why?  Because selection only works on behavior.  If an unreliable cognitive capacity is causally linked to adaptive behavior, then the unreliable capacity will be selected for (i.e. not selected against).  Even a radically unreliable capacity — one that never or almost never yields true beliefs — can be selected for.  Selection only “cares” about adaptive behaviors, not about true beliefs.  (More precisely, we have no reason to believe that the semantic content is not epiphenomenal.)

So, Plantinga thinks, given N&E, the probability of R is very low. But, if the probability of R is low, given N&E, then that should ‘infect’ the likelihood of all of the beliefs produced by those capacities — including N&E themselves.  So, given N&E, we should think it extremely unlikely that N&E is true.  And so the initial assumption of N&E defeats itself.  (Here I’m being much too quick with the argument, but we can get into the details in the comments if you’d like.)
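
To make the “beliefs are invisible to selection” move concrete, here is a small simulation sketch. It is my own illustration, not Plantinga’s model: the predator/food setup, the agents, and their belief labels are all invented. Fitness is scored only on behavior, so an agent with systematically false beliefs whose belief-to-behavior mapping happens to yield adaptive behavior scores exactly as well as a true believer.

```python
# Toy illustration of the premise that selection "sees" only behavior.
# Everything here (situations, agents, belief labels) is invented for the example.
import random

SITUATIONS = ["predator", "food"]
ADAPTIVE = {"predator": "flee", "food": "approach"}    # what survival requires

def true_believer(situation):
    belief = situation                                 # represents the world correctly
    return "flee" if belief == "predator" else "approach"

def false_believer(situation):
    # Systematically misrepresents the world, but the behavior produced by
    # the false beliefs is exactly the adaptive one.
    belief = {"predator": "cuddly-but-best-avoided", "food": "prize"}[situation]
    return "flee" if belief == "cuddly-but-best-avoided" else "approach"

def fitness(agent, trials=10_000):
    """Score the agent only on whether its behavior was adaptive."""
    score = 0
    for _ in range(trials):
        s = random.choice(SITUATIONS)
        score += (agent(s) == ADAPTIVE[s])
    return score / trials

print("true believer: ", fitness(true_believer))       # 1.0
print("false believer:", fitness(false_believer))      # 1.0 as well
```

This only illustrates the premise Plantinga needs — that truth and fitness can come apart — not whether that premise actually supports his probability claim, which is exactly what’s at issue.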

Anyway, it’s a really cool little argument, and it’s not immediately clear what’s wrong with it — and I thought it might be worth discussing, given how influential it is.

The Idea of “Pseudo-Science”

When I was poking my nose around philosophy of science in the 1990s, I was told that Larry Laudan’s critique of “the demarcation criterion” had pretty much scuppered the very idea of “pseudo-science.”    Since I don’t work in philosophy of science, but take a keen (and amateurish) interest in the debates about creationism and intelligent design, I found this unfortunate.

Imagine my surprise, then, when I found that some philosophers of science still take the idea of “pseudo-science” seriously and are intent on rescuing it from Laudan’s criticism.  First, I bring to your attention a recent NY Times article, “The Dangers of Pseudo-Science” (part of the usually excellent NY Times series The Stone, which brings philosophy out of the rarefied atmosphere of academia into the very slightly less rarefied atmosphere of the NY Times readership).  The authors, Massimo Pigliucci and Maarten Boudry, are also the editors of Philosophy of Pseudoscience: Reconsidering the Demarcation Problem — which, judging from the table of contents and reviews, will be an excellent collection.

Lewontin and “the A Priori”

At Thoughts in a Haystack, Pieret notes that Citizens For Objective Public Education, Inc. (COPE) has brought a lawsuit in Kansas to block the implementation of the Next Generation Science Standards. (The whole complaint is here (PDF).)  The complaint alleges that teaching evolutionary theory amounts to state endorsement of atheism, and hence is unconstitutional.

In making their case, COPE quotes this well-known passage from Lewontin’s review of Sagan’s The Demon-Haunted World:

“Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.”


Firstly, this passage is taken out of context; read in context, it is fairly clear that Lewontin is attributing this dogmatism to Sagan, and not endorsing it himself.

Continue reading

On the Idea of “Scientism”

Defenders of evolutionary theory are sometimes accused of “scientism”, and this much-used (and much-abused) term has also arisen in the republic of letters due to Steven Pinker’s recent “Science is Not the Enemy of the Humanities” in The New Republic, which drew interesting responses from Leon Wieseltier, Ross Douthat, and Dan Dennett.  Here I want to examine a bit more carefully the idea of “scientism” by way of a criticism of Wieseltier’s “Perhaps Culture is Now the Counterculture: A Defense of the Humanities”.  There he complains that

Our glittering age of technologism is also a glittering age of scientism. Scientism is not the same thing as science. Science is a blessing, but scientism is a curse. Science, I mean what practicing scientists actually do, is acutely and admirably aware of its limits, and humbly admits to the provisional character of its conclusions; but scientism is dogmatic, and peddles  certainties. It is always at the ready with the solution to every problem, because it believes that the solution to every problem is a scientific one, and so it gives scientific answers to non-scientific questions. But even the question of the place of science in human existence is not a scientific question. It is a philosophical, which is to say, a humanistic question.

Wieseltier isn’t a philosopher but a professional pundit who sprinkles his prose with philosophemes to appeal to the class-prejudices of his intended audience. So it would take some work just to locate his rant on a more well-traveled map.

Continue reading

Challenge to Theists: Morality

I challenge theists to present their moral framework in this thread: what principles their moral system is based on (if any); how they come to understand or decide what they “ought” to do; whether or not they are “obligated” to act morally, and if so, to whom or to what that obligation is owed; and why anyone should care about or act according to their moral system. Or, if their moral system doesn’t follow any of these conventions, they should explain their moral system and views.