On the Circularity of the Argument from Intelligent Design

There is a lot of debate in the comments to recent posts about whether the argument from ID is circular.  I thought it would be worth calling this out as a separate item.

I ask that participants in this discussion (whether they comment here or on UD):

  • make a real effort to stick to Lizzie’s principles (and her personal example) of respect for opposing viewpoints and politeness
  • confine the discussion to this specific point (there is plenty of opportunity to discuss other points elsewhere and there is the sandbox)

What follows has been covered a thousand times. I simply repeat it in as rigorous a manner as I can to provide a basis for the ensuing discussion (if any!).

First, a couple of definitions.

A) For the purposes of this discussion I will use “natural” to mean “has no element of design”. I do not mean to imply anything about materialism versus the supernatural or the like. It is just an abbreviation for “not-designed”.

B) X is a “good explanation” for Y if and only if:

i) We have good reason to suppose X exists

ii) The probability of Y given X is reasonably high (say 0.1 or higher). There may of course be better explanations for Y where the probability is even higher.

Note that X may include design or be natural.

As I understand it, a common form of the ID argument is:

1) Identify some characteristic of outcomes such as CSI, FSCI or dFSCI. I will use dFSCI as an example in what follows but the point applies equally to the others.

2) Note that in all cases where an outcome has dFSCI, and a good explanation of the outcome is known, then the good explanation includes design and there is no good natural explanation.

3) Conclude there is a strong empirical relationship between dFSCI and design.

4) Note that living things include many examples of dFSCI.

5) Infer that there is a very strong case that living things are also designed.

This argument can be attacked from many angles but I want to concentrate on the circularity issue. The key point is that it is part of the definition of dFSCI (and the other measures) that there is no good natural explanation.

It follows that if a good natural explanation is identified then that outcome no longer has dFSCI.  So it is true by definition that all outcomes with dFSCI fall into two categories:

  • A good explanation has been identified and it is design
  • No good explanation has yet been identified

Note that it was not necessary to do any empirical observation to prove this. It must always be the case from the definition of dFSCI that whenever a good explanation is identified it includes design.

I appreciate that, as it stands, this argument does not do justice to the ID position. If dFSCI were simply a synonym for “no good natural explanation” then the case for circularity would be obviously true. But it incorporates other features (as do its cousins CSI and FSCI). For example, dFSCI incorporates attributes such as digital, functional and not compressible, while CSI (in its most recent definition) includes the attribute compressible. So if we describe any of the measures as a set of features {F} plus the condition that, if a good natural explanation is discovered, the measure no longer applies, then it is possible to recast the ID argument this way:

“For all outcomes where {F} is observed, when a good explanation is identified it turns out to be design and there is no good natural explanation. Many aspects of life have {F}. Therefore, there is good reason to suppose that design will be a good explanation and there will be no good natural explanation.”

The problem here is that while CSI, FSCI and dFSCI all agree on the “no good natural explanation” clause, they differ widely on {F}. For Dembski’s CSI, {F} is essentially equivalent to compressible (he refers to it as “simple” but defines “simple” mathematically in terms of being easily compressible). For FSCI, {F} includes “has a function” and, in some descriptions, “not compressible”. dFSCI adds to FSCI the further property of being digital.

By themselves, both compressible and non-compressible phenomena clearly can have both natural and designed explanations. The structure of a crystal is highly compressible. CSI has no other relevant property, and the case for circularity seems to be made at this point. But FSCI and dFSCI add the condition of being functional, which perhaps makes all the difference. However, the word “functional” also introduces a risk of circularity. “Functional” usually means “has a purpose”, and a purpose implies a mind. In archaeology an artefact is functional if it can be seen to fulfil some past person’s purpose, even if that purpose is artistic. So if something has the attribute of being functional it follows by definition that a mind was involved. This means that by definition it is extremely likely, if not certain, that it was designed (of course, it is possible that it may have a good natural explanation and by coincidence also happen to fulfil someone’s purpose). To declare something to be functional is to declare that a purpose, and therefore a mind, is engaged with it; no empirical research is required to establish that a mind is involved with a functional thing in this sense.

But there remains a way of trying to steer FSCI and dFSCI away from circularity. When the term FSCI is applied to living things, it appears a rather different meaning of “functional” is being used. There is no mind whose purpose is being fulfilled. It simply means the object (protein, gene or whatever) has a role in keeping the organism alive, much as greenhouse gases have a role in keeping the earth’s surface some 30 degrees warmer than it would otherwise be. In this case “functional” does not, of course, imply the involvement of a mind. But then there are plenty of examples of functional phenomena in this sense which have good natural explanations.

The argument to circularity is more complicated than it may appear and deserves careful analysis rather than vitriol – but if studied in detail it is compelling.

Conflicting Definitions of “Specified” in ID

I see that in the unending TSZ and Jerad Thread, Joe has written in response to R0bb:

Try to compress the works of Shakespear- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amendable to compression.

A protein sequence is not compressable- CSI.

So please reference Dembski and I will find Meyer’s quote

To save R0bb the effort: using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings, (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”. He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.
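
As a rough sketch of what “low Kolmogorov complexity” means here (the code is mine, not Dembski’s, and the length is an arbitrary choice): (ψR) can be reproduced by a program far shorter than the sequence itself, whereas for a sequence like (R) the shortest known “program” is essentially “print the sequence verbatim”, which is no shorter than the data.

    import inspect

    def psi_r(n_bits):
        """Generate the start of the 'integers in binary' sequence: 1 10 11 100 101 110 ..."""
        out, i = [], 1
        while sum(len(s) for s in out) < n_bits:
            out.append(format(i, "b"))
            i += 1
        return "".join(out)[:n_bits]

    sequence = psi_r(10_000)
    program = inspect.getsource(psi_r)
    print(len(sequence))  # 10000 bits of sequence...
    print(len(program))   # ...generated by a program only a few hundred characters long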

This conflict between Dembski’s definition of “specified”, which he quite explicitly links to low Kolmogorov complexity (see pp. 9-12), and other definitions, which take the reverse view, appears to be a problem which most of the ID community don’t know about and the rest choose to ignore. I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn’t much care what Dembski’s view is – which at least is honest.

The LCI and Bernoulli’s Principle of Insufficient Reason

(Just found I can post here – I hope it is not a mistake. This is a slightly shortened version of a piece which I have published on my blog. I am sorry it is so long but I struggle to make it any shorter. I am grateful for any comments. I will look at UD for comments as well – but I am not sure where they would appear.)

I have been rereading Bernoulli’s Principle of Insufficient Reason and Conservation of Information in Computer Search by William Dembski and Robert Marks. It is an important paper for the Intelligent Design movement, as Dembski and Marks make liberal use of Bernoulli’s Principle of Insufficient Reason (BPoIR) in their papers on the Law of Conservation of Information (LCI). For Dembski and Marks, BPoIR provides a way of determining the probability of an outcome given no prior knowledge. This is vital to the case for the LCI.

The point of Dembski and Marks’ paper is to address some fundamental criticisms of BPoIR. For example, J. M. Keynes (along with many others) pointed out that BPoIR does not give a unique result. A well-known example is applying BPoIR to the specific volume of a given substance. If we know nothing about the specific volume then someone could argue using BPoIR that all specific volumes are equally likely. But equally someone could argue using BPoIR that all specific densities are equally likely. However, as one is the reciprocal of the other, these two assumptions are incompatible. This is an example based on continuous measurements, and Dembski and Marks refer to it in the paper. However, having referred to it, they do not address it. Instead they concentrate on examples of discrete measurements, where they offer a sort of response to Keynes’ objections. What they attempt to prove is a rather limited point about discrete cases such as a pack of cards or a protein of a given length. It is hard to state their claim concisely – but I will give it a try.

Imagine you have a search space such as a normal pack of cards and a target such as finding a card which is a spade. Then it is possible to argue by BPoIR that, because all cards are equal, the probability of finding the target with one draw is 0.25. Dembski and Marks attempt to prove that in cases like this, if you decide to do a “some to many” mapping from this search space into another space, then you have at best a 50% chance of creating a new search space where BPoIR gives a higher probability of finding a spade. A “some to many” mapping means some different way of viewing the pack of cards, so that it is not necessary that all of them are considered and some of them may be considered more often than others. For example, you might take a handful out of the pack at random and then duplicate some of that handful a few times – and then select from what you have created.

There are two problems with this.

1) It does not address Keynes’ objection to BPoIR

2) The proof itself depends on an unjustified use of BPoIR.

But before that, a comment on the concept of no prior knowledge.

The Concept of No Prior Knowledge

Dembski and Marks’ case is that BPoIR gives the probability of an outcome when we have no prior knowledge. They stress that this means no prior knowledge of any kind and that it is “easy to take for granted things we have no right to take for granted”. However, there are deep problems associated with this concept. The act of defining a search space and a target implies prior knowledge. Consider finding a spade in a pack of cards. To apply BPoIR you need, at minimum, to know that a card can be one of four suits, that 25% of the cards are spades, and that the suit does not affect the chances of a card being selected. The last point is particularly important. BPoIR provides a rationale for claiming that the probabilities of two or more events are the same. But the events must differ in some respects (even if it is only a difference in when or where they happen) or they would be the same event. To apply BPoIR we have to know (or assume) that these differences are not relevant to the probability of the events happening. We must somehow judge that the suit of the card, the heads or tails symbol on the coin, or the choice of DNA base pair is irrelevant to the chances of that card, coin toss or base pair being selected. This is prior knowledge.

In addition, the more we try to dispense with assumptions and knowledge about an event, the more difficult it becomes to decide how to apply BPoIR. Another of Keynes’ examples is a bag of 100 black and white balls in an unknown ratio of black to white. Do we assume that all ratios of black to white are equally likely, or do we assume that each individual ball is equally likely to be black or white? Either assumption is equally justified by BPoIR, but they are incompatible. One results in a uniform probability distribution for the number of white balls from zero to 100; the other results in a binomial distribution which greatly favours roughly equal numbers of black and white balls.
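
To see how far apart the two applications of BPoIR are, here is a small sketch (mine; the 45-55 band is just an arbitrary illustration) using Keynes’ 100-ball example:

    from math import comb

    N = 100  # balls in the bag, each either black or white

    # BPoIR applied to ratios: every count of white balls from 0 to 100 is equally likely.
    uniform_over_counts = [1 / (N + 1)] * (N + 1)

    # BPoIR applied to individual balls: each ball is independently 50/50 black or white,
    # so the number of white balls follows a binomial(100, 0.5) distribution.
    binomial_over_counts = [comb(N, k) * 0.5 ** N for k in range(N + 1)]

    # The two "no prior knowledge" assumptions give very different answers to, say,
    # the probability that the bag holds between 45 and 55 white balls.
    print(sum(uniform_over_counts[45:56]))   # about 0.11
    print(sum(binomial_over_counts[45:56]))  # about 0.73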

Now let us look at the problems with the proof in Dembski and Marks’ paper.

The Proof does not Address Keynes’ objection to BPoIR

Even if the proof were valid, it would do nothing to show that the assumption of BPoIR is correct. All it would show (if correct) is that if you do not use BPoIR then you have a 50% or less chance of improving your chances of finding the target. The fact remains that there are many other assumptions you could make, and some of them greatly increase your chances of finding the target. There is nothing in the proof that in any way justifies assuming BPoIR or giving it any kind of privileged position.

But the problem is even deeper. Keynes’ point was not that there are alternatives to using BPoIR – that’s obvious. His point was that there are different incompatible ways of applying BPoIR. For example, just as with the example of black and white balls above, we might use BPoIR to deduce that all ratios of base pairs in a string of DNA are equally likely. Dembski and Marks do not address this at all. They point out the trap of taking things for granted but fall foul of it themselves.

The Proof Relies on an Unjustified Use of BPoIR

The proof is found in appendix A of the paper and this is the vital line:

[equation from Appendix A of the paper – image not reproduced here]

This is the probability that a new search space created from an old one will include k members which were part of the target in the original search space. The equation holds true if the new search space is created by selecting elements from the old search space at random; for example, by picking a random number of cards at random from a pack. It uses BPoIR to justify the assumption that each unique way of picking cards is equally likely. This can be made clearer with an example.

Suppose the original search space comprises just the four DNA bases, one of which is the target. Call them x, y, z and t. Using BPoIR, Dembski and Marks would argue that all of them are equally likely and therefore the probability of finding t with a single search is 0.25. They then consider all the possible ways you might take a subset of that search space. This comprises:

Subsets with:

  • no items: the empty set
  • one item: x, y, z and t
  • two items: xy, xz, yz, tx, ty, tz
  • three items: xyz, xyt, xzt, yzt
  • four items: xyzt

A total of 16 subsets.

Their point is that if you assume each of these subsets is equally likely (so the probability of any one of them being selected is 1/16), then 50% of them have a probability of finding t which is greater than or equal to the probability in the original search space (i.e. 0.25). To be specific, the new search spaces where the probability of finding t is greater than or equal to 0.25 are t, tx, ty, tz, xyt, xzt, yzt and xyzt. That is 8 out of 16, which is 50%.
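
For anyone who wants to check the arithmetic, here is a short sketch (mine) that simply enumerates the 16 subsets and counts the ones that do at least as well as the original search:

    from itertools import combinations

    members = ["x", "y", "z", "t"]
    subsets = [c for r in range(5) for c in combinations(members, r)]  # all 16 subsets

    def p_t(subset):
        """Probability of drawing t with one uniform pick from the subset (0 if t is absent)."""
        return 1 / len(subset) if "t" in subset else 0.0

    good = [s for s in subsets if p_t(s) >= 0.25]
    print(len(subsets), len(good))   # 16 8
    print(len(good) / len(subsets))  # 0.5 -- the 50% figure above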

But what is the justification for assuming each of these subsets is equally likely? Well, it requires using BPoIR – which is exactly what the proof is meant to defend. And even if you grant the use of BPoIR, Keynes’ concerns apply. There is more than one way to apply BPoIR, and not all of them support Dembski and Marks’ proof. Suppose, for example, the subset was created by the following procedure:

    • Start with one member selected at random as the subset
    • Toss a die:
      • If it is two or less, stop and use the current set as the subset
      • If it is higher than two, add another member selected at random to the subset
    • Continue tossing until the die shows two or less or all four members are in the subset

This gives a completely different probability distribution.

The probability of each:

  • single-item subset (x, y, z, or t) = 0.33/4 = 0.083
  • double-item subset (xy, xz, yz, tx, ty, or tz) = 0.66*0.33/6 = 0.037
  • triple-item subset (xyz, xyt, xzt, or yzt) = 0.66*0.66*0.33/4 = 0.037
  • four-item subset (xyzt) = 0.296

So the combined probability of the subsets where the probability of selecting t is ≥ 0.25 (t, tx, ty, tz, xyt, xzt, yzt, xyzt) = 0.083 + 3*(0.037) + 3*(0.037) + 0.296 = 0.60 (to 2 decimal places), which is bigger than the 0.5 calculated using Dembski and Marks’ assumptions. In fact, using this method, the probability of getting a subset where the probability of selecting t is ≥ 0.25 can be made as close to 1 as desired by increasing the probability of adding a member. All of these methods treat all four members of the set equally and are as justified under BPoIR as Dembski and Marks’ assumption.
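
The alternative procedure is easy to simulate. A sketch like the following (mine, with an arbitrary seed and trial count) reproduces the 0.60 figure rather than 0.50:

    import random

    members = ["x", "y", "z", "t"]

    def build_subset():
        # Start with one member at random; while the die shows more than two and the
        # subset is not yet complete, add another randomly chosen missing member.
        subset = {random.choice(members)}
        while len(subset) < 4 and random.randint(1, 6) > 2:
            subset.add(random.choice([m for m in members if m not in subset]))
        return subset

    random.seed(1)
    trials = 200_000
    hits = 0
    for _ in range(trials):
        s = build_subset()
        if "t" in s and 1 / len(s) >= 0.25:  # P(drawing t from s) is at least 0.25
            hits += 1
    print(hits / trials)  # roughly 0.60, not 0.50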

Conclusion

Dembski and Marks’ paper places great stress on BPoIR being the way to calculate probabilities when there is no prior knowledge. But their proof itself includes prior knowledge. It is doubtful whether it makes sense to eliminate all prior knowledge, but if you attempt to eliminate as much prior knowledge as possible, as Keynes does, then BPoIR proves to be an illusion. It does not give a unique result, and some of the results are incompatible with their proof.

The Law(?) of Conservation of Information

(Preamble: I apologize in advance for cluttering TSZ with these three posts. There are very few people on either side of the debate that actually care about the details of this “conservation of information” stuff, but these posts make good on some claims I made at UD.)

For the past three years Dembski has been promoting his Law of Conservation of Information (LCI), most recently here. The paper he most often promotes is this one, which begins as follows:

Laws of nature are universal in scope, hold with unfailing regularity, and receive support from a wide array of facts and observations. The Law of Conservation of Information (LCI) is such a law.

Dembski hasn’t proven that the LCI is universal, and in fact he claims that it can’t be proven, but he also claims that to date it has always been confirmed. He doesn’t say whether he has actually tried to find counterexamples, but the reality is that they are trivial to come up with. This post demonstrates one very simple counterexample.

Definitions

First we need to clarify Dembski’s terminology. In his LCI math, a search is described by a probability distribution over a sample space Ω. In other words, a search is nothing more than an Ω-valued random variable. Execution of the search consists of a single query, which is simply a realization of the random variable. The search is deemed successful if the realized outcome resides in target T ⊆ Ω. (We must be careful to not read teleology into the terms search, query, and target, despite the terms’ connotations. Obviously, Dembski’s framework must not presuppose teleology if it is to be used to detect design.)

If a search’s parameters depend on the outcome of a preceding search, then the preceding search is a search for a search. It’s this hierarchy of two searches that is the subject of the LCI, which we can state as follows.

Given a search S, we define:

  • q as the probability of S succeeding
  • p2 as the probability that S would succeed if it were a uniform distribution
  • p1 as the probability that a uniformly distributed search-for-a-search would yield a search at least as good as S

The LCI says that p1 ≤ p2/q.

Counterexample

In thinking of a counterexample to the LCI, we should remember that this two-level search hierarchy is nothing more than a chain of two random variables. (Dembski’s search hierarchy is like a Markov chain, except that each transition is from one state space to another, rather than within the same state space.) One of the simplest examples of a chain of random variables is a one-dimensional random walk. Think of a system that periodically changes state, with each state transition represented by a shift to the left or to the right on a state diagram. If we know at a certain point in time that it is in one of, say, three states, namely n-1 or n or n+1, then after the next transition it will be in n-2, n-1, n, n+1, or n+2 – five adjacent nodes on a line.

Assume that the system is always equally likely to shift left as to shift right, and let the “target” be defined as the center node n. If the state at time t is, say, n-1, then the probability of success q is 1/2. Of the three original states, two (namely n-1 and n+1) yield this probability of success, so p1 is 2/3. Finally, p2 is 1/5 since the target consists of only one of the final five states. The LCI says that p1 ≤ p2/q. Plugging in our numbers for this example, we get 2/3 ≤ (1/5)/(1/2), which is clearly false.
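
Plugging the numbers in mechanically (a small sketch of mine that just restates the arithmetic above):

    from fractions import Fraction

    starts = ["n-1", "n", "n+1"]            # the three possible current states
    neighbours = {"n-1": ["n-2", "n"],      # left/right shifts, equally likely
                  "n":   ["n-1", "n+1"],
                  "n+1": ["n", "n+2"]}
    target = "n"                            # the centre node

    def q_from(start):
        """Probability of landing on the target after one shift from the given state."""
        return Fraction(neighbours[start].count(target), 2)

    q = q_from("n-1")                                                # 1/2
    p1 = Fraction(sum(q_from(s) >= q for s in starts), len(starts))  # 2/3
    p2 = Fraction(1, 5)                     # one target among the five final states

    print(q, p1, p2, p2 / q)  # 1/2 2/3 1/5 2/5
    print(p1 <= p2 / q)       # False: the LCI inequality fails for this example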

Of course, the LCI does hold under certain conditions. To show that the LCI applies to biological evolution, Dembski needs to show that his mathematical model of evolution meets those conditions. This model would necessarily include the higher-level search that gave rise to the evolutionary process. As will be shown in the next post, the good news for Dembski is that any process can be modeled such that it obeys the LCI. The bad news is that any process can also be modeled such that it violates the LCI.

Is Any Form Of Atheism Rationally Justifiable?

Definition of God: First cause, prime mover, objective source of human purpose (final cause) and resulting morality, source of free will; omnipotent, omniscient and omnipresent inasmuch as principles of logic allow. I am not talking in particular about any specifically defined religious interpretation of god, such as the Christian or Islamic god.

Definition: Intellectual dishonesty occurs when (1) one deliberately mischaracterizes their position or view in order to avoid having to logically defend their actual views; and/or (2) someone argues, or makes statements, against a position while remaining willfully ignorant about that position; and/or (3) someone categorically and/or pejoratively dismisses all existent and/or potential evidence in favor of a conclusion they claim to be neutral about, whether they are familiar with that evidence or not.

Continue reading

Is purpose necessary to acquire any apparently purposeful effects?

For the purposes of this discussion:

Chance = non-teleological causes that happen to result in particular effects via regularities referred to as “lawful” and stochastic in nature.

Purpose = teleological causes that are intended to result in particular effects; the organization of causes towards a pre-defined future goal.

My question is: can chance causes generate all of the effects normally associated with purpose, but without purpose? IOW, is purpose necessary to produce all, most, or some apparently purposeful effects, or is purpose, in effect, only an associated sensation by-product or side-effect that isn’t necessary to the generation of any particular effect normally associated with it?

The LCI and Bertrand’s Box

Tom English has recommended that we read Dembski and Marks’ paper on their Law of Conservation of Information (not to be confused with Dembski’s previous LCI from his book No Free Lunch). Dembski has also touted the paper several times, and I too recommend it as a stark display of the authors’ thinking.

Most people won’t take the time to carefully read a 34-page paper, but I submit that the authors’ core concept of “conservation of information” is very easily understood if we avoid equivocal and misleading terms such as information, search, and target. I’ll illustrate it with a setup borrowed from Joseph Bertrand.

The “Bertrand’s box” scenario is as follows: We’re presented with three small outwardly identical boxes, each containing two coins. One has two silver coins, one has two gold coins, and one has a silver coin and a gold coin. We’ll call the boxes SS, GG, and SG. We are to randomly choose a box, and then randomly pull a coin from the chosen box.
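
Here is a small simulation of that setup (mine; the seed and trial count are arbitrary). It just runs the box-then-coin procedure described above and, as a sanity check, estimates the classic conditional probability that the chosen box is GG given that a gold coin was drawn:

    import random

    boxes = {"SS": ["silver", "silver"],
             "GG": ["gold", "gold"],
             "SG": ["silver", "gold"]}

    random.seed(0)
    trials = 200_000
    gold_draws = gold_from_gg = 0
    for _ in range(trials):
        name = random.choice(list(boxes))  # choose a box uniformly at random
        coin = random.choice(boxes[name])  # pull one of its two coins uniformly
        if coin == "gold":
            gold_draws += 1
            gold_from_gg += (name == "GG")

    print(gold_from_gg / gold_draws)  # about 2/3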

Continue reading

A Few Comments on A Vivisection of the ev Computer Organism

I’ll follow Patrick’s lead and offer a few comments on another paper from the Evolutionary Informatics Lab. The paper analyzes Tom Schneider’s ev program, and while there are several problems with the analysis, I’ll focus on the first two sentences of the conclusions:

The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search.

To explain the authors’ terminology: active information is defined quantitatively as a measure of relative search performance. To say that something provides N bits of active information is to say that it increases the probability of success by a factor of 2^N. The Hamming oracle is a function that reports the Hamming distance between its input and a fixed target. The perceptron structure is another function whose details aren’t important to this post. Figure 1 shows how these three components are connected in an iterative feedback loop.
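
In code terms (these two helpers are mine and simply restate the definitions above; they are not taken from the paper):

    from math import log2

    def active_information(p_baseline, q_assisted):
        """Active information in bits: log2 of the factor by which the assisted
        search improves the probability of success over the baseline search."""
        return log2(q_assisted / p_baseline)

    def hamming_oracle(query, target):
        """Report the Hamming distance between the query and a fixed target string."""
        return sum(a != b for a, b in zip(query, target))

    print(active_information(1 / 1024, 1 / 2))  # 9.0 bits: probability boosted by 2**9
    print(hamming_oracle("10110", "10011"))     # 2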

Continue reading

Natural Selection- What is it and what does it do?

Well let’s look at what natural selection is-

“Natural selection is the result of differences in survival and reproduction among individuals of a population that vary in one or more heritable traits.” (page 11, “Biology: Concepts and Applications”, Starr, fifth edition)

“Natural selection is the simple result of variation, differential reproduction, and heredity—it is mindless and mechanistic.” (UC Berkeley)

“Natural selection is the blind watchmaker, blind because it does not see ahead, does not plan consequences, has no purpose in view.” (Dawkins, “The Blind Watchmaker”)

“Natural selection is therefore a result of three processes, as first described by Darwin:

Variation
Inheritance
Fecundity

which together result in non-random, unequal survival and reproduction of individuals, which results in changes in the phenotypes present in populations of organisms over time.” – Allen MacNeill, professor of introductory biology and evolution at Cornell University

OK, so it is a result of three processes – i.e. an output. But is it really non-random, as Allen said? Nope; whatever survives to reproduce, survives to reproduce. And that can be any number of the variations that exist in a population.

Continue reading

Reservations About ID, Rottenness in Creationism

As a card-carrying creationist, I’ve sometimes wanted to post about my reservations regarding the search for evidence of Intelligent Design (ID) and some of the rottenness in the search for evidence for young earth creation. I’ve refrained from speaking my mind on these matters too frequently lest I ruffle the feathers of the few friends I have left in the world (the ID community and the creationist community). But I must speak out and express criticism of my own side of the aisle on occasion.

Before proceeding, I’d like to thank Elizabeth for her hospitality in letting me post here. She invited me to post some things regarding my views of Natural Selection and Genetic Algorithms, but in the spirit of skepticism I want to offer criticism of some of my own ideas. So this essay will sketch what I consider valid criticism of ID, creationism in general and Young Earth Creationism (YEC) in particular.

Continue reading

Libertarian Free Will

The concept of Libertarian Free Will (and the contextualizations that must accompany it) is really just too big to tackle all at once, so I’m going to begin with a thread to serve as a basic primer about my view of Libertarian Free Will (LFW) – what I posit it to be, ontologically speaking, and how I describe it.

The basic difference between compatibilist free will and libertarian free will is that compatibilist intents are ultimately manufactured effects of unintentional brute processes. No matter how many layers of “pondering” and “meta-pondering” one adds, or how many “modules” or “partitions” are added to the mix, it all still ultimately boils down to intentions being sufficiently explained as effects of brute (unintentional) forces. That is the root of all will in the compatibilist view; ultimately, humans do as they will, but do not will what they will, regardless of how many pre-action “intentions” they put in the chain.

Continue reading

A Second Look at the Second Law…

…is the title of Granville Sewell’s manuscript that almost got published in Applied Mathematics Letters last year. It was withdrawn at the last minute by the editor, but you can still download the manuscript from Sewell’s web page. The purpose of this thread is to discuss the technical merits of Sewell’s arguments.

Continue reading

No Free Lunch

My husband, my mother, my father, my four-year-old son and I were going out for a walk. It was raining. My son refused (as usual) to wear his raincoat. Instead, he carried a cup, which he held out in front of him. He argued that he was going to catch the raindrops in the cup so that by the time he got to the place the raindrops had been, they’d be in the cup and he’d be dry. Half an hour later, four adults were still standing around, drawing diagrams on the backs of envelopes, arguing about Pythagoras and trigonometry, all to no avail. We went out, with cup, sans raincoat. My son got wet. He insisted he remained dry.

[Photo: Bryce Canyon, Utah.]

I’ve got as far as Chapter 5 of Dembski’s book No Free Lunch, the chapter called Evolutionary Algorithms, about which he says in his Preface: “This chapter is the climax of the book”. He claims that in it he shows that “An elementary combinatorial analysis shows that evolutionary algorithms can no more generate specified complexity than can five letters fill ten mailboxes.”

I think he’s making the same kind of error as my son made.

Continue reading

Why the NDE/ID Debate Is Really (For Most) A Proxy Fight

To define:

NDE (Neo-Darwinian Evolution) = OOL & evolution without prescriptive goals, both being nothing more in essence than functions of material forces & interactions.

ID (Intelligent Design) = Deliberate OOL & evolution with prescriptive goals

(I included OOL because if OOL contains purposefully written code that provides guidelines for evolutionary processes towards goals, then evolutionary processes are not neo-Darwinian as they utilize oracle information).

I’m not an evolutionary biologist, nor am I a mathematician. Therefore, when I argue about NDE and ID, the only cases I attempt to make are logical ones based on principles involved because – frankly – I lack the educational, application & research expertise to legitimately parse, understand and criticize most papers published in those fields. I suggest that most people who engage in NDE/ID arguments (on either side) similarly lack the necessary expertise to evaluate (or conduct) such research on their own.

Continue reading

“Tiktaalik”, Why it is a failed Prediction

Tiktaalik is still being used as a successful prediction of something. I know it was supposed to be a successful prediction of universal common descent because it is A) allegedly a transitional form between fish and tetrapods, and B) it was found in the “correct” strata: allegedly there was no evidence of tetrapods before 385 million years ago (plenty of fish, though) and plenty of evidence for tetrapods around 365 million years ago. Tiktaalik was allegedly found in strata about 375 million years old; Shubin said that is the strata he looked in because of the 365-385 million year range already bracketed by existing data.

The thinking was that tetrapods existed 365 million years ago and fish existed 385 million years ago, so the transition happened sometime in that 20 million years.

Sounds very reasonable. And when they looked they found Tiktaalik and all was good.

Then along comes another find that puts the earliest tetrapods back to over 390 million years ago.

Now, had this find preceded Tiktaalik, Shubin et al. would not have been looking for the transitional form after the transition had occurred – that doesn’t make any sense. And that is why it is a failed prediction: the transition occurred some 25 million years earlier, so Shubin et al. were looking in the wrong strata.

Continue reading

Intelligent Design is NOT Anti-Evolution

Thank you Elizabeth for this opportunity-

Good day. Over the past many, many years, IDists have been telling people that intelligent design is not anti-evolution. Most people understand and accept that, while others just refuse to, no matter what.

With that said, in this post I will provide the evidence (again) that firmly demonstrates that ID is not anti-evolution. I will be presenting several authoritative definitions of “evolution” followed by what the ID leadership has to say about evolution. So without any further ado, I give you:

Intelligent Design is NOT Anti-Evolution

Continue reading

Why Methodological Naturalism is a Questionable Philosophy of Science

Elizabeth started another thread (http://theskepticalzone.com/wp/?p=256) stating that methodological naturalism (MN) “underlies the methodology that we call science.” Later she spoke of “methodological naturalism, as in the working assumption that scientists make about the world in order to predict things.” Then she quoted Wikipedia, which states: “all scientific endeavors—all hypotheses and events—are to be explained and tested by reference to natural causes and events,” adding that this is “more or less the definition I have been assuming.” In other words, science studies ‘nature-only’ because it is naturalistic – it sees nothing other than nature that *could* be studied. Elizabeth sticks with this definition when she says “Science occupies the domain of natural explanations.”

Still later, Elizabeth admitted she is ‘not wild about’ MN (or what I suggested was more accurate to her statements: that science applies ‘methodological probabilism’) and also that “‘methodological naturalism’ is a poor term.” Thus, her concession: “now that I realise that the term [MN] appears to denote different things to different people, I will avoid it.” So, the main argument in the OP was abandoned.

Continue reading