Koons, Aquinas, and Intelligent Design

Robert Koons and Logan Paul Gage have uploaded a defense of ID, titled St. Thomas Aquinas on Intelligent Design. The article is intended specifically to address theist criticism of ID and to show that ID is perfectly compatible with Thomistic metaphysics.

Up front, on the first page, the critics are identified by name. On the second page, the critical theses are laid out. This is a very promising, straightforward start. Unfortunately, the rest is downhill. While much of the criticism is accurately represented, some of it is not, and the defense, with its misguided appeal to science, misses crucial points of the criticism. Way too much of the defense simply reiterates ID slogans without actually defending them. And, to top it all off, Aquinas is falsely interpreted to mean what he could not have meant. The last point is not too concerning, though. Aquinas inevitably plays that role in the circles that self-identify as Thomistic, which happen to include both ID critics and advocates.

Thomistic defense of ID by Koons and Gage

My interest was in finding an actual Thomistic or Scholastic defense of ID. Therefore I will ignore most of the mere reiteration of ID talking points in the article, such as appeals to science that cannot convince anyone but those already converted to the ID cause and that don’t really address what theist critics criticize about ID. Spoiler: That’s the bulk of the article.

However, I found what I was looking for: Aquinas’ doctrine of exemplar causation. It’s represented in the article as follows.

An exemplar cause is a type of formal cause—a sort of blueprint; the idea according to which something is organized. For Thomas, these ideas exist separately from the things they cause. For instance, if a boy is going to build a soap-box derby car, the idea in his mind is separate from the form of the car; yet the car’s form expresses the idea, or exemplar cause, in the boy’s mind. Herein lies the important point: for Thomas, a creature’s form comes from a similar form in the divine intellect. In other words, the cause of each species’ form is extrinsic. In fact, writes Thomas, “God is the first exemplar cause of all things” (p. 84-85 = p. 6-7 of the pdf)

The article says that the critics fail to mention this doctrine. That’s true. As far as I have followed the debate, exemplar causation has been mentioned only once by an ID critic, namely by Edward Feser, when he says that ID apologist Vincent Torley’s understanding of it is “worse than tenuous”. The rest of the mentions of exemplar causes I have seen in the debate come from the defenders of ID.

However, the problem with this defense is that it remains metaphysical and never touches on the physics and biology of design that ID is supposed to be about. The appeal to exemplar causes, while relevant to the Thomistic understanding of design, has no direct relevance to ID as an empirical theory whose mission is to measurably detect stuff. At least no advocate has ever managed to clarify the connection to me, and this article is no exception.

The real thrust of criticism

The thrust of theist criticism against ID is this: Teleology is beyond the empirical world. It cannot be measured or detected as the cause of this or that. Formal causes do not create or generate things and events, but rather “inform” things with purpose, i.e. function, whether intrinsic, special, or contextual; a formal cause is not a separately examinable part or appendage, the way souls are often imagined to be separable ghosts. For example, a formal cause does not cause a dog to be, but rather determines what a dog is, what qualifies as a (natural or normal) dog and what doesn’t. The thrust of theist criticism against ID is meant to point out this category error between empirical and unempirical causation. The latter (unempirical causation in Aristotelian metaphysics) would more likely correspond to a “category” or “taxonomy” in scientific terminology. As long as ID fails to comply with scientific terminology, it is doomed to remain a pseudoscience. And as long as ID trivializes Scholastic metaphysics, assuming empiricism where there is none, it is rightly criticized by Thomists and Scholastics.

This crucial criticism is sadly misrepresented in the article, sometimes subtly, sometimes grossly. For example, the article complains about the critics’ supposed obsession with secondary causation (as distinguished from direct causation by God; as the article rightly points out, Thomas has no problem with direct causation) and their aversion to God’s intervention and miracles. In reality, critics have no such obsessions and aversions. Instead, the criticism is that God’s direct intervention and miracles remain empirically undetectable after the fact. God’s intervention is indistinguishable from natural causes, because God is the author of natural causation. Intervention or miracles would be no different from natural causes, because God’s action is a single timeless act (a.k.a. pure actuality): when God acts, the outcome is most natural, nature itself.

Take a particular miracle, such as the raising of Lazarus from the dead. After the raising, would modern physicians be able to determine upon examination, “Yup, God did it” or “This is caused by design, not by natural causes”? No. There would be no empirical signs of miraculous intervention after the fact. And, incidentally, this is not how the Catholic Church goes about determining miracles. Yet this is how ID apparently proposes to proceed.

After all this, the article turns around and says, “ID is a very minimal claim which does not require intervention.” (p. 85 = p. 7 in the pdf) Then why all those accusations that critics harbor obsessions and aversions concerning the matter?

Where did ID go this wrong?

There are other fundamental problems with ID theory that become evident in the article, mainly conceptual. For example, it’s never clear what is meant by “design”. Is it a cause or an effect? At one point, Behe is quoted definitionally, “Design is simply the purposeful arrangement of parts” and Dembski is claimed to have pointed out that Paley “made no appeal to miracles in the production of design.” (p. 85-86) So, if design is a production and an arrangement, it seems to be more like an effect. Yet there’s the rampant “caused by design” assertion in the ID community as we know it (from UD, originally Dembski’s forum). The article does not mention it. Dembski uses (at UD: Resources/ID defined) the term “intelligent cause” which is supposed to “best explain” “certain features of the universe and of living things” (the same as “design”?) while the relation between design and intelligence is never explained. That’s a problem created by, or at least amplified by, Dembski, I’d say.

Another problem is the term “irreducible complexity”. The article defends the term by citing Aquinas.

Contrary to the claims of Feser (2010, 154–155), the presence of complexity is relevant to Aquinas’s argument for design:… It is impossible for things contrary and discordant to fall into one harmonious order always or for the most part, except under some one guidance… (p. 86, underline in the original)

Now, does everybody agree with the implication that “one harmonious order” means something even remotely akin to “complexity”? Didn’t think so. The article is full of such misapplied quotes from Aquinas; they can be hunted for fun while reading. “Complexity” is a square peg in a round hole when it comes to Scholastic metaphysics with its doctrine of divine simplicity. This is a problem invented by Behe.

Conclusion

The conclusion of the article says that “The Thomistic critics of ID understand neither ID nor the heart of Darwinian evolution… ID is not a competing metaphysical system for the simple reason that it is not a metaphysical system.” (p. 91-92 = p. 13-14 in the pdf) I’d say that if ID can be defended by means of Thomist metaphysics, then it must be a metaphysical system, except that it demonstrably cannot be defended by means of Thomist metaphysics, so it’s evidently something else. My conclusion is that ID is indefensible due to conceptual inconsistencies stemming from the fact that its advocates and apologists never figured out whether it’s a metaphysics or a science. Unfortunately, pace KN, metaphysics and science are two distinct worlds and need to be sorted out before engaging in either one.

271 thoughts on “Koons, Aquinas, and Intelligent Design”

  1. colewd,

    Can you create an argument that supports this statement from Richard Dawkins?

    No. I don’t support that statement from Richard Dawkins. Organisms don’t look designed to me. Other people think they do – that’s an assertion, the obvious and simple answer to which is counter-assertion. One would hope to be able to deal in more than simple assertion.

  2. I agree with Allan. Organisms don’t look designed.

    colewd: Why don’t they look designed?

    If I’m in an automobile show room, the autos look designed. If a mouse runs across the floor — well, that does not look at all like those designed things.

    A hand knit sweater looks very different from the sweater that you buy at the department store. The hand knit sweater looks crafted, but not designed.

    Biological organisms look more like crafted things than designed things. And, moreover, they look as if self-crafted.

    The difference: With design, you first come up with the design, and then you build objects to fit the design. With crafting, there never is a design. Plans change at every step along the way, fitting the crafted object to the intended purpose. With biological organisms, that “fitting for the purpose” largely occurs during development, which is why they look self-crafted.

  3. Neil Rickert,

    A hand knit sweater looks very different from the sweater that you buy at the department store. The hand knit sweater looks crafted, but not designed.

    Would you consider a model car built by 3D printing crafted or designed?

  4. colewd:
    Neil Rickert,

    Would you consider a model car built by 3D printing crafted or designed?

    If it is built by 3D printing, then it is probably designed.

    If it is hand-carved out of a block of wood, that’s more like crafting.

  5. Neil Rickert,

    If it is built by 3D printing, then it is probably designed.

    If it is hand-carved out of a block of wood, that’s more like crafting.

    With these ideas as a reference, could you come up with a definition of design and crafting?

    I am not sure that a purposeful arrangement of parts that perform a function works.

  6. colewd: I am not sure that a purposeful arrangement of parts that perform a function works.

    Whose problem is that? I mean, have you considered the possibility that “purposeful arrangement of parts” might be a useless criterion for identifying design?

  7. Neil Rickert,

    No. I don’t see that these are easily definable.

    I agree this is tough. Do you agree that a commonality between these two verbs is that they both require intelligence?

  8. colewd:
    Neil Rickert,

    I agree this is tough. Do you agree that a commonality between these two verbs is that they both require intelligence?

    Hard to say. It depends on what you mean by “intelligent”. And there’s no agreement on that, as far as I can tell.

  9. Neil Rickert: The difference: With design, you first come up with the design. And then you build objects to fit the design. With crafting, there never is a design. Plans change at every step along the way, fitting the crafted object to the intended purpose.

    I think crafting is part of the spectrum of design. Its emphasis is on the individual rather than the group.

    With biological organisms, that “fitting for the purpose” largely occurs during development, which is why they look self-crafted.

    I agree, individuality is an asset.

  10. colewd,

    Why don’t they look designed?

    Just don’t. When I look around a crowded room I see the things people made, and I see people. I don’t see ‘design’ in the people. I can’t always tell for sure when an object is designed or is not. But I don’t see any reason to lump the entirety of biology into the ‘designed’ set.

  11. colewd:
    Allan Miller,

    Why don’t they look designed?

    Things that look designed are usually made of plastic, metal, and/or polished and machined wood. They have tool marks, holes for screws or nails, attachment points or surfaces intended to be glued together, mold lines, manufacturing logos, etc.
    All of those are things that make me think something is designed. If it has none of those, then it doesn’t look designed. Biological organisms aren’t made of plastic, or metal, or polished and machined wood. They don’t have tool marks, holes for screws and nails, attachment points with glue, mold lines, manufacturing logos (Nike, nVIDIA, Intel, Hugo Boss, etc.), or anything like it.

    They also seem to make themselves slowly through gradual growth, rather than de novo assembly from large macroscopic parts individually fabricated elsewhere and then brought together. Simply put, biological organisms are wholly unlike anything designed.

    The only commonality I see is between very, very complex things like microchips and organisms: they have many, many tiny and intricate parts. But so does a beach, and beaches aren’t designed, so having many intricate parts can’t be a tell-tale sign of design.

  12. colewd:
    Allan Miller,

    Why don’t they look designed?

    Well, you know, they evolved.

    OK, that wasn’t the desired answer, in more than one way, but it is correct. That said, let’s note that they don’t look designed because they reproduce rather than being manufactured (no machine marks, say), they’re far more limited in types of materials (no metals for hot operation, for instance), biologic types (species) vary far more than do the manufactured types, and they have bizarre (in design terms) throwbacks to ancestral forms (coccyx in humans, but more generally humans have a quadrupedal skeleton modified into bipedal form).

    They (including plants, prokaryotes) also have a kind of autonomous activity quite unlike machine activity. That gets to the fact that machines are made for a purpose, while organisms are not.

    Glen Davidson

  13. Rumraket,

    The only commonality I see is between very, very complex things like microchips and organisms: they have many, many tiny and intricate parts. But so does a beach, and beaches aren’t designed, so having many intricate parts can’t be a tell-tale sign of design.

    Microchips (organized inside a computer) and a living organism are very complex and can turn energy into repeatable processes.

  14. Allan Miller,

    The guy on the left can perform more functions but takes a lot more components than the guy on the right. Plus, the guy on the right is perfectly house trained 🙂

  15. Neil Rickert,

    Yes, I agree intelligence is a complex subject. Per the Wikipedia article on artificial intelligence:

    Goals
    The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[7]

    Erik Sandwell emphasizes planning and learning that is relevant and applicable to the given situation.[40]

    Reasoning, problem solving
    Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions (reason).[41] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[42]

    For difficult problems, algorithms can require enormous computational resources—most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical for problems of a certain size. The search for more efficient problem-solving algorithms is a high priority.[43]

    Human beings ordinarily use fast, intuitive judgments rather than step-by-step deduction that early AI research was able to model.[44] AI has progressed using “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the human ability.

    Knowledge representation

    An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
    Main articles: Knowledge representation and Commonsense knowledge
    Knowledge representation[45] and knowledge engineering[46] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[47] situations, events, states and time;[48] causes and effects;[49] knowledge about knowledge (what we know about what other people know);[50] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts and so on that the machine knows about. The most general are called upper ontologies, which attempt to provide a foundation for all other knowledge.[51]

    Among the most difficult problems in knowledge representation are:

    Default reasoning and the qualification problem
    Many of the things people know take the form of “working assumptions”. For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[52] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[53]
    The breadth of commonsense knowledge
    The number of atomic facts that the average person knows is very large. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering—they must be built, by hand, one complicated concept at a time.[54] A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the Internet, and thus be able to add to its own ontology.
    The subsymbolic form of some commonsense knowledge
    Much of what people know is not represented as “facts” or “statements” that they could express verbally. For example, a chess master will avoid a particular chess position because it “feels too exposed”[55] or an art critic can take one look at a statue and realize that it is a fake.[56] These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically.[57] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.[57]
    Planning

    A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.
    Main article: Automated planning and scheduling
    Intelligent agents must be able to set goals and achieve them.[58] They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[59]

    In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be.[60] However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.[61]

    Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[62]

    Learning
    Main article: Machine learning
    Machine learning is the study of computer algorithms that improve automatically through experience[63][64] and has been central to AI research since the field’s inception.[65]

    Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[66] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[67]

    Within developmental robotics, developmental learning approaches were elaborated for lifelong cumulative acquisition of repertoires of novel skills by a robot, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.[68][69][70][71]

    Natural language processing

    A parse tree represents the syntactic structure of a sentence according to some formal grammar.
    Main article: Natural language processing
    Natural language processing[72] gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[73] and machine translation.[74]

    A common method of processing and extracting meaning from natural language is through semantic indexing. Increases in processing speeds and the drop in the cost of data storage makes indexing large volumes of abstractions of the user’s input much more efficient.

    Perception
    Main articles: Machine perception, Computer vision, and Speech recognition
    Machine perception[75] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others more exotic) to deduce aspects of the world. Computer vision[76] is the ability to analyze visual input. A few selected subproblems are speech recognition,[77] facial recognition and object recognition.[78]

    Motion and manipulation
    Main article: Robotics
    The field of robotics[79] is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation[80] and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion – where the robot moves while maintaining physical contact with an object).[81][82]

    Social intelligence
    Main article: Affective computing

    Kismet, a robot with rudimentary social skills[83]
    Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects.[84][85] It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science.[86] While the origins of the field may be traced as far back as to early philosophical inquiries into emotion,[87] the more modern branch of computer science originated with Rosalind Picard’s 1995 paper[88] on affective computing.[89][90] A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions.

    Emotion and social skills[91] play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions—even if it does not actually experience them itself—in order to appear sensitive to the emotional dynamics of human interaction.

    Creativity
    Main article: Computational creativity
    A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are Artificial intuition and Artificial thinking.
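    The three learning types named in the excerpt (classification, regression, reinforcement learning) can be illustrated with a short, self-contained sketch. Everything below — the function names, the data, and the parameters — is invented purely for illustration; it is a toy model under simplifying assumptions, not the API of any real library.

```python
import random

# Toy illustrations of the three learning types quoted above.
# All names, data, and numbers here are invented for the sketch.

# Supervised classification: 1-nearest-neighbour on labelled points.
def classify(labelled, x):
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

train = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

# Supervised regression: least-squares slope of a line through the origin.
def fit_slope(pairs):
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# Reinforcement learning: a two-armed bandit that learns from rewards.
def bandit(pulls=2000, seed=0):
    rng = random.Random(seed)
    value, count = [0.0, 0.0], [0, 0]   # estimated reward and pull count per arm
    for _ in range(pulls):
        # epsilon-greedy: mostly exploit the best-looking arm, sometimes explore
        arm = rng.randrange(2) if rng.random() < 0.1 else value.index(max(value))
        reward = 1.0 if rng.random() < (0.2, 0.8)[arm] else 0.0  # arm 1 pays more often
        count[arm] += 1
        value[arm] += (reward - value[arm]) / count[arm]  # running average
    return value

print(classify(train, 8.5))                           # "large"
print(fit_slope([(1, 2), (2, 4), (3, 6)]))            # 2.0
v = bandit()
print(v[1] > v[0])   # the agent has learned that arm 1 is better
```

    The point of the sketch is only the contrast the excerpt draws: classification and regression learn from labelled examples, while the bandit learns from a stream of rewards and punishments.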

  16. colewd: Yes, I agree intelligence is a complex subject. Per the Wikipedia article on artificial intelligence:

    I won’t requote the wiki excerpt. Actually, you could have just provided a link, instead of quoting.

    There has long been a tendency to equate intelligence with logic. And I just don’t see that. It has never seemed right.

    There are plenty of examples of logic being used for intelligent things. But, to me, that’s the intelligent use of logic. The intelligence is not in the logic, but in the way that it is used.

    And the same goes with reasoning. There’s a tendency to equate reasoning with the use of logic. But, on my own analysis, reasoning is more clearly connected to perception. We use logic to try out our ideas (with thought). But then we use our perception of those ideas to evaluate them. And the evaluation is more important than the logic.

    From my perspective, evolution is intelligent. It does have an evaluation system, and natural selection is part of that evaluation. So I see ID as going after the wrong ideas of intelligence.

    And if you want to give credit to God, then give Him credit for coming up with evolution as an intelligent way to build and maintain the biosphere.

  17. Neil Rickert,

    From my perspective, evolution is intelligent. It does have an evaluation system, and natural selection is part of that evaluation. So I see ID as going after the wrong ideas of intelligence.

    And if you want to give credit to God, then give Him credit for coming up with evolution as an intelligent way to build and maintain the biosphere.

    Can you expand more on your thoughts why evolution is intelligent?

    Where I am currently struggling is that the cell appears to be designed to remain what it is, because variation is deadly, i.e. cancer, etc.

    Unless mutations are carefully directed through the minefield of deleterious mutations, the organism will cease to exist. DNA repair is part of this solution, but as it is designed, it will keep the genome in a very tight window. So how then does evolution occur? What is the mechanism?

    I agree that ID does not address this either.

  18. colewd: Can you expand more on your thoughts why evolution is intelligent?

    It would take more than a simple blog comment to explain that. So I’ll just give a brief outline.

    Consider an evolving population. It is generating variants. These allow it to explore the eco-sphere (or nearby niches) to see whether it can find ways of expanding into those niches.

    To me, this looks very much like perception. On J.J. Gibson’s ecological account of perception, what we perceive are affordances — things in the environment that afford us opportunities to achieve our goals. If we take the goal of a population to be to survive and spread, then this seems to fit. So the population isn’t perceiving what we think of as objects, but it is perceiving opportunities for that population to expand its range.

    And then it is acting on those perceived opportunities by expanding into those niches that it has learned how to exploit.
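    The outline above (a population generating variants and spreading into the niches it learns to exploit) can be caricatured as a tiny variation-plus-selection loop. The model, names, and numbers below are all invented for illustration; this is a sketch of the general idea, not anyone’s published algorithm.

```python
import random

# Toy sketch: a population "finds" a distant niche by generating variants
# and keeping the ones that sit closest to it.  All numbers and names are
# invented for this illustration.
def evolve(niche=10.0, pop_size=20, generations=80, seed=1):
    rng = random.Random(seed)
    population = [0.0] * pop_size          # everyone starts far from the niche
    for _ in range(generations):
        # variation: each offspring is a slightly mutated copy of a parent
        offspring = [x + rng.gauss(0, 0.5) for x in population]
        # selection: only the individuals closest to the niche survive
        population = sorted(population + offspring,
                            key=lambda x: abs(x - niche))[:pop_size]
    return population

final = evolve()
print(max(abs(x - 10.0) for x in final))   # small: the population has reached the niche
```

    Whether one wants to call this loop “perception” is the philosophical question at issue; the code only shows that blind variation plus selection behaves like a search that settles into the available niche.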

  19. colewd: the cell appears to be designed to remain what it is, because variation is deadly, i.e. cancer, etc.

    How do you know it wasn’t designed to produce cancer?

    colewd: Unless mutations are carefully directed through the minefield of deleterious mutations, the organism will cease to exist

    Unsupported assertion. Let me try a counter-assertion: “Unless some incompetent deity messes with the genome, the organism will evolve just fine”

    colewd: DNA repair is part of this solution but as it is designed it will keep the genome in a very tight window

    Unsupported assertion. Let me try a counter-assertion: “DNA repair evolved”

    That was easy
