A specific instance of the problem of evil

This is The Skeptical Zone, so it’s only fitting that we turn our attention to topics other than ID from time to time.

The Richard Mourdock brouhaha provides a good opportunity for this. Mourdock, the Republican Senate candidate from the state of Indiana, is currently in the spotlight on my side of the Atlantic for a statement he made on Wednesday during a debate with his Democratic opponent:

You know, this is that issue that every candidate for federal or even state office faces. And I, too, certainly stand for life. I know there are some who disagree and I respect their point of view but I believe that life begins at conception. The only exception I have for – to have an abortion is in that case for the life of the mother. I just – I struggle with it myself for a long time but I came to realize that life is that gift from God, and I think even when life begins in that horrible situation of rape that it is something that God intended to happen. [emphasis mine]


Gpuccio’s challenge

29th Oct: I have offered a response to Gpuccio’s challenge below.

I think this is worth a new post.

Gpuccio has issued a challenge here and here. I have repeated the essential text below. Others may wish to try it and/or seek their own clarifications. Could be interesting. Something tells me that it is not going to end in a clear-cut result, but it may clarify the deeply confusing world of dFSCI.

Challenge:

Give me any number of strings of which you know for certain the origin. I will assess dFSCI in my way. If I give you a false positive, I lose. I will accept strings of a predetermined length (we can decide), so that at least the search space is fixed.

Conditions:

a) I would say binary strings of 500bits. Or language strings of 150 characters. Or decimal strings of 150 digits. Something like that. Even a mix of them would be fine.

No problem with that.

b) I will literally apply my procedure. If I cannot easily see any function for the string, I will not go on in the evaluation, and I will not infer design. If you, or any other, wants to submit strings whose function you know, you are free to tell me what the function is, and I will evaluate it thoroughly.

That’s OK. I will supply the function in each case. I note that when I tried to define function precisely you said that the function can be anything the observer wishes provided it is objectively defined, so, for example, “adds up to 1000” would be a function. So I don’t think that’s going to be an issue!
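To make "objectively defined" concrete, here is a minimal sketch (Python, purely illustrative) of the "adds up to 1000" example: a yes/no test that every observer applying it to the same string will answer in the same way. The predicate is just the example from the paragraph above, not part of Gpuccio's procedure.

```python
# A minimal sketch of an "objectively defined" function in the sense used above:
# a test any observer can apply to a string and get the same answer.
def adds_up_to_1000(decimal_string: str) -> bool:
    """True if the digits of a decimal string sum to exactly 1000."""
    return sum(int(d) for d in decimal_string) == 1000

# A 150-digit string of nines sums to 1350, so it fails the test.
print(adds_up_to_1000("9" * 150))  # False
```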

c) I will be cautious, and I will not infer design if I have doubts about any of the points in the procedure.

I am not happy with this. If the string meets the criteria for dFSCI you should be able to infer design. You can’t pick and choose when to apply it. At the very least you must show that the string does not have dFSCI if you are going to avoid inferring design.

d) Ah, and please don’t submit strings outputted by an algorithm, unless you are ready to consider them as designed if the algorithm is more than 150 bits long. We should anyway agree, before we start, on which type of system and what time span we are testing.

I don’t understand this – a necessity system for generating digital strings can always be expressed as an algorithm, e.g. the Fibonacci series; otherwise it is just a copy of the string. It is also not clear how to define how many bits long an algorithm is. Maybe it will suffice if I confine myself to algorithms that can be expressed mathematically in fewer than 20 symbols?
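For illustration only, here is a minimal sketch (Python, not part of the challenge) of the kind of thing being described: a short deterministic algorithm (a “necessity system”) that outputs a long digit string. The Fibonacci series and the 150-digit length simply follow the examples and conditions above.

```python
# A very short deterministic algorithm (a "necessity system") that outputs a
# long digit string. Fibonacci and the 150-digit length follow the examples
# and conditions discussed above.
def fibonacci_digit_string(length: int = 150) -> str:
    """Concatenate Fibonacci numbers until we have `length` decimal digits."""
    digits, a, b = "", 0, 1
    while len(digits) < length:
        a, b = b, a + b
        digits += str(a)
    return digits[:length]

print(fibonacci_digit_string())  # 150 digits produced by a handful of lines
```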

And anyway, I am afraid we have to wait next week for the test. My time is almost finished.

That’s OK. I need time to think anyway. But I also need to get your clarification on b, c and d.

 

On the Circularity of the Argument from Intelligent Design

There is a lot of debate in the comments to recent posts about whether the argument from ID is circular.  I thought it would be worth calling this out as a separate item.

I ask that participants in this discussion (whether they comment here or at UD):

  • make a real effort to stick to Lizzie’s principles (and her personal example) of respect for opposing viewpoints and politeness
  • confine the discussion to this specific point (there is plenty of opportunity to discuss other points elsewhere and there is the sandbox)

What follows has been covered a thousand times. I simply repeat it in as rigorous a manner as I can to provide a basis for the ensuing discussion (if any!).

First, a couple of definitions.

A) For the purposes of this discussion I will use “natural” to mean “has no element of design”. I do not mean to imply anything about materialism versus the supernatural or the like. It is just an abbreviation for “not-designed”.

B) X is a “good explanation” for Y if and only if:

i) We have good reason to suppose X exists

ii) The probability of Y given X is reasonably high (say 0.1 or higher). There may of course be better explanations for Y where the probability is even higher.

Note that X may include design or be natural.

As I understand it, a common form of the ID argument is:

1) Identify some characteristic of outcomes such as CSI, FSCI or dFSCI. I will use dFSCI as an example in what follows but the point applies equally to the others.

2) Note that in all cases where an outcome has dFSCI, and a good explanation of the outcome is known, then the good explanation includes design and there is no good natural explanation.

3) Conclude there is a strong empirical relationship between dFSCI and design.

4) Note that living things include many examples of dFSCI.

5) Infer that there is a very strong case that living things are also designed.

This argument can be attacked from many angles but I want to concentrate on the circularity issue. The key point being that it is part of the definition of dFSCI (and the other measures) that there is no good natural explanation.

It follows that if a good natural explanation is identified then that outcome no longer has dFSCI.  So it is true by definition that all outcomes with dFSCI fall into two categories:

  • A good explanation has been identified and it is design
  • No good explanation has yet been identified

Note that it was not necessary to do any empirical observation to prove this. It must always be the case from the definition of dFSCI that whenever a good explanation is identified it includes design.

I appreciate that as it stands this argument does not do justice to the ID position. If dFSCI were simply a synonym for “no good natural explanation” then the case for circularity would be obviously true. But it incorporates other features (as do its cousins CSI and FSCI). For example, dFSCI incorporates attributes such as digital, functional and not compressible – while CSI (in its most recent definition) includes the attribute compressible. So if we describe any of the measures as a set of features {F}, plus the condition that the measure no longer applies if a good natural explanation is discovered, then it is possible to recast the ID argument this way:

“For all outcomes where {F} is observed, whenever a good explanation is identified it turns out to include design and there is no good natural explanation. Many aspects of life have {F}. Therefore, there is good reason to suppose that design will be a good explanation and there will be no good natural explanation.”

The problem here is that while CSI, FSCI and dFSCI all agree on the “no good natural explanation” clause, they differ widely on {F}. For Dembski’s CSI, {F} is essentially equivalent to compressible (he refers to it as “simple” but defines “simple” mathematically in terms of being easily compressible). For FSCI, {F} includes “has a function” and, in some descriptions, “not compressible”. dFSCI adds the additional property of being digital to FSCI.

By themselves both compressible and non-compressible phenomena clearly can have both natural and designed explanations. The structure of a crystal is highly compressible. CSI has no other relevant property, and the case for circularity seems to be made at this point. But FSCI and dFSCI add the condition of being functional, which perhaps makes all the difference. However, the word “functional” also introduces a risk of circularity. “Functional” usually means “has a purpose”, which implies a mind. In archaeology an artefact is functional if it can be seen to fulfil some past person’s purpose – even if that purpose is artistic. So if something has the attribute of being functional it follows by definition that a mind was involved. This means that by definition it is extremely likely, if not certain, that it was designed (of course, it is possible that it may have a good natural explanation and by coincidence also happen to fulfil someone’s purpose). To declare something to be functional is to declare it is engaged with a purpose and a mind – no empirical research is required to establish that a mind is involved with a functional thing in this sense.

But there remains a way of trying to steer FSCI and dFSCI away from circularity. When the term FSCI is applied to living things it appears that a rather different meaning of “functional” is being used. There is no mind whose purpose is being fulfilled. It simply means the object (protein, gene or whatever) has a role in keeping the organism alive, much as greenhouse gases have a role in keeping the earth’s surface some 30 degrees warmer than it would otherwise be. In this case of course “functional” does not imply the involvement of a mind. But then there are plenty of examples of functional phenomena in this sense which have good natural explanations.

The argument to circularity is more complicated than it may appear and deserves careful analysis rather than vitriol – but if studied in detail it is compelling.

A challenge to kairosfocus

A few weeks ago, commenter ‘kairosfocus’ (aka ‘KF’) posted a Pro-Darwinism Essay Challenge at Uncommon Descent. The challenge was for an ID critic to submit a 6000-word essay in defense of ‘Darwinism’, written to KF’s specifications. The essay would be posted at UD and a discussion would ensue.

The challenge generated no interest among the pro-evolution commenters here at TSZ, mainly because no one wanted to write an essay of KF’s specified length, on KF’s specified topic, in KF’s ridiculously specific format (and presumably double-spaced with a title page addressed to ‘Professor Kairosfocus’). We also had no interest in posting an essay at UD, a website that is notorious in the blogosphere for banning and censoring dissenters. Kairosfocus himself, in a ridiculous display of tinpot despotism, censored no fewer than 20 comments in the “Essay Challenge” thread itself!


The commenter in question, ‘critical rationalist’, was banned from UD and has taken refuge here at TSZ, where open discussion is encouraged, dissent is welcome, comments are not censored, and only one commenter has ever been banned (for posting a photo of female genitalia).

Given the inhospitable environment at Uncommon Descent (from which I, like most of the ID critics at TSZ, have also been banned), I had (and have) no desire to submit an essay for publication at UD. However, I did respond to the spirit of KF’s challenge by writing a blog post here at TSZ that explains why unguided evolution, as a theory, is literally trillions of times better than Intelligent Design at explaining the evidence for common descent.

In his challenge, kairosfocus wrote:

It would be helpful if in that essay you would outline why alternatives such as design, are inferior on the evidence we face.

I have done exactly that, and so my challenge to kairosfocus is this: I have presented an argument showing that ID is vastly inferior to unguided evolution as an explanation of the evidence for common descent. Can you defend ID, or will you continue to claim your bogus daily victories despite being unable to rise to the challenge presented in my post?

Conflicting Definitions of “Specified” in ID

I see that in the unending “TSZ and Jerad” thread, Joe has written in response to R0bb:

Try to compress the works of Shakespear- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amendable to compression.

A protein sequence is not compressable- CSI.

So please reference Dembski and I will find Meyer’s quote

To save R0bb the effort: using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”. He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.

This conflict between Dembski’s definition of “specified”, which he quite explicitly links to low Kolmogorov complexity (see pp 9-12), and others which take the reverse view appears to be a problem which most of the ID community don’t know about and the rest choose to ignore. I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn’t much care what Dembski’s view is – which is at least honest.
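As a rough illustration (not part of Dembski’s argument), here is a small Python sketch using zlib compression as a crude stand-in for Kolmogorov complexity, which is uncomputable. The counting-in-binary string plays the role of (ψR); a random bit string of the same length plays the role of (R).

```python
# Rough illustration of the (psi_R) vs (R) contrast, using zlib as a crude
# stand-in for Kolmogorov complexity (the real thing is uncomputable).
import random
import zlib

psi_r = "".join(format(n, "b") for n in range(1, 500))       # 1, 10, 11, 100, ...
r = "".join(random.choice("01") for _ in range(len(psi_r)))  # random bits, same length

for name, s in [("psi_R (counting in binary)", psi_r), ("R (random bits)", r)]:
    print(f"{name}: {len(s)} chars -> {len(zlib.compress(s.encode()))} bytes compressed")

# psi_R, being generated by a simple rule, should compress noticeably better
# than R - which is the sense in which Dembski calls it "simple" and specified.
```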

Things That IDers Don’t Understand, Part 1 — Intelligent Design is not compatible with the evidence for common descent

Since the time of the Dover trial in 2005, I’ve made a hobby of debating Intelligent Design proponents on the Web, chiefly at the pro-ID website Uncommon Descent. During that time I’ve seen ID proponents make certain mistakes again and again. This is the first of a series of posts in which (as time permits) I’ll point out these common mistakes and the misconceptions that lie behind them.

I encourage IDers to read these posts and, if they disagree, to comment here at TSZ. Unfortunately, dissenters at Uncommon Descent are typically banned or have their comments censored, all for the ‘crime’ of criticizing ID or defending evolution effectively. Most commenters at TSZ, including our blog host Elizabeth Liddle and me, have been banned from UD. Far better to have the discussion here at TSZ, where free and open debate is encouraged and comments are not censored.

The first misconception I’ll tackle is a big one: it’s the idea that the evidence for common descent is not a serious threat to ID. As it turns out, ID is not just threatened by the evidence for common descent — it’s literally trillions of times worse than unguided evolution at explaining the evidence. No exaggeration. If you’re skeptical, read on and I’ll explain.


The LCI and Bernoulli’s Principle of Insufficient Reason

(Just found I can post here – I hope it is not a mistake. This is a slightly shortened version of a piece which I have published on my blog. I am sorry it is so long, but I struggled to make it any shorter. I am grateful for any comments. I will look at UD for comments as well – though I am not sure where they would appear.)

I have been rereading Bernoulli’s Principle of Insufficient Reason and Conservation of Information in Computer Search by William Dembski and Robert Marks. It is an important paper for the Intelligent Design movement, as Dembski and Marks make liberal use of Bernoulli’s Principle of Insufficient Reason (BPoIR) in their papers on the Law of Conservation of Information (LCI). For Dembski and Marks, BPoIR provides a way of determining the probability of an outcome given no prior knowledge. This is vital to the case for the LCI.

The point of Dembski and Marks’ paper is to address some fundamental criticisms of BPoIR. For example, J. M. Keynes (along with many others) pointed out that BPoIR does not give a unique result. A well-known example is applying BPoIR to the specific volume of a given substance. If we know nothing about the specific volume then someone could argue using BPoIR that all specific volumes are equally likely. But equally someone could argue using BPoIR that all specific densities are equally likely. However, as one is the reciprocal of the other, these two assumptions are incompatible. This is an example based on continuous measurements and Dembski and Marks refer to it in the paper. However, having referred to it, they do not address it. Instead they concentrate on examples of discrete measurements, where they offer a sort of response to Keynes’ objections. What they attempt to prove is a rather limited point about discrete cases such as a pack of cards or a protein of a given length. It is hard to state their claim concisely – but I will give it a try.
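As an aside before stating their claim: the reciprocal problem is easy to see numerically. Here is a small sketch (Python; the range of volumes is arbitrary and chosen only for illustration) showing that applying BPoIR to specific volume and applying it to specific density give different answers to the same question.

```python
# Keynes' reciprocal objection in miniature. Suppose all we "know" is that the
# specific volume lies between 1 and 2 (arbitrary units, for illustration only).
import random

N = 100_000

# BPoIR applied to volume: volume uniform on [1, 2]
volumes = [random.uniform(1.0, 2.0) for _ in range(N)]
p_from_volume = sum(v < 1.5 for v in volumes) / N

# BPoIR applied to density: density (= 1/volume) uniform on [0.5, 1.0]
densities = [random.uniform(0.5, 1.0) for _ in range(N)]
p_from_density = sum(1.0 / d < 1.5 for d in densities) / N

print(f"P(volume < 1.5), uniform over volume : {p_from_volume:.3f}")   # ~0.50
print(f"P(volume < 1.5), uniform over density: {p_from_density:.3f}")  # ~0.67
```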

Imagine you have a search space such as a normal pack of cards and a target such as finding a card which is a spade. Then it is possible to argue by BPoIR that, because all cards are equal, the probability of finding the target with one draw is 0.25. Dembski and Marks attempt to prove that, in cases like this, if you decide to do a “some to many” mapping from this search space into another space then you have at best a 50% chance of creating a new search space where BPoIR gives a higher probability of finding a spade. A “some to many” mapping means some different way of viewing the pack of cards, so that it is not necessary that all of them are considered and some of them may be considered more often than others. For example, you might take a handful out of the pack at random and then duplicate some of that handful a few times – and then select from what you have created.

There are two problems with this.

1) It does not address Keynes’ objection to BPoIR

2) The proof itself depends on an unjustified use of BPoIR.

But before looking at these, a comment on the concept of no prior knowledge.

The Concept of No Prior Knowledge

Dembski and Marks’ case is that BPoIR gives the probability of an outcome when we have no prior knowledge. They stress that this means no prior knowledge of any kind and that it is “easy to take for granted things we have no right to take for granted”. However, there are deep problems associated with this concept. The act of defining a search space and a target implies prior knowledge. Consider finding a spade in a pack of cards. To apply BPoIR you need to know, at a minimum, that a card can be one of four suits, that 25% of the cards are spades, and that the suit does not affect the chances of a card being selected. The last point is particularly important. BPoIR provides a rationale for claiming that the probabilities of two or more events are the same. But the events must differ in some respects (even if it is only a difference in when or where they happen) or they would be the same event. To apply BPoIR we have to know (or assume) that these differences are not relevant to the probability of the events happening. We must somehow judge that the suit of the card, the heads or tails symbol on the coin, or the choice of DNA base pair is irrelevant to the chances of that card, coin toss or base pair being selected. This is prior knowledge.

In addition, the more we try to dispense with assumptions and knowledge about an event, the more difficult it becomes to decide how to apply BPoIR. Another of Keynes’ examples is a bag of 100 black and white balls in an unknown ratio of black to white. Do we assume that all ratios of black to white are equally likely, or do we assume that each individual ball is equally likely to be black or white? Either assumption is equally justified by BPoIR, but they are incompatible. One results in a uniform probability distribution for the number of white balls from zero to 100; the other results in a binomial distribution which greatly favours roughly equal numbers of black and white balls.
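The incompatibility is easy to see numerically. A small sketch (Python, purely illustrative):

```python
# Keynes' bag of 100 black and white balls: two "equally justified" uses of BPoIR.
from math import comb

n = 100

# Assumption 1: every ratio (0..100 white balls) is equally likely
uniform = {k: 1 / (n + 1) for k in range(n + 1)}

# Assumption 2: each ball is independently black or white with probability 1/2
binomial = {k: comb(n, k) * 0.5**n for k in range(n + 1)}

for k in (0, 25, 50):
    print(f"P({k} white): uniform = {uniform[k]:.4f}, binomial = {binomial[k]:.3g}")

# e.g. P(0 white balls) is about 0.0099 under the first assumption but about
# 8e-31 under the second - the two distributions could hardly disagree more.
```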

Now let us look at the problems with the proof in Dembski and Marks’ paper.

The Proof does not Address Keynes’ objection to BPoIR

Even if the proof were valid, it does nothing to show that the assumption of BPoIR is correct. All it would show (if correct) is that if you do not use BPoIR then you have at most a 50% chance of improving your chances of finding the target. The fact remains that there are many other assumptions you could make, and some of them greatly increase your chances of finding the target. There is nothing in the proof that in any way justifies assuming BPoIR or giving it any kind of privileged position.

But the problem is even deeper. Keynes’ point was not that there are alternatives to using BPoIR – that’s obvious. His point was that there are different incompatible ways of applying BPoIR. For example, just as with the example of black and white balls above, we might use BPoIR to deduce that all ratios of base pairs in a string of DNA are equally likely. Dembski and Marks do not address this at all. They point out the trap of taking things for granted but fall foul of it themselves.

The Proof Relies on an Unjustified Use of BPoIR

The proof is found in appendix A of the paper and this is the vital line:

[Equation from Appendix A of the paper – the image is not reproduced here.]

This is the probability that a new search space created from an old one will include k members which were part of the target in the original search space. The equation holds true if the new search space is created by selecting elements from the old search space at random; for example, by picking a random number of cards at random from a pack. It uses BPoIR to justify the assumption that each unique way of picking cards is equally likely. This can be made clearer with an example.

Suppose the original search space comprises just the four DNA bases, one of which is the target. Call them x, y, z and t. Using BPoIR, Dembski and Marks would argue that all of them are equally likely and therefore the probability of finding t with a single search is 0.25. They then consider all the possible ways you might take a subset of that search space. This comprises:

Subsets with:

  • no items (the empty subset)
  • just one item: x, y, z and t
  • two items: xy, xz, yz, tx, ty, tz
  • three items: xyz, xyt, xzt, yzt
  • four items: xyzt

A total of 16 subsets.

Their point is that if you assume each of these subsets is equally likely (so the probability of any one of them being selected is 1/16), then 50% of them have a probability of finding t which is greater than or equal to the probability in the original search space (i.e. 0.25). To be specific, the new search spaces where the probability of finding t is greater than or equal to 0.25 are t, tx, ty, tz, xyt, xzt, yzt and xyzt. That is 8 out of 16, which is 50%.
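This count is easy to check by brute force. A short sketch (Python, purely illustrative):

```python
# Enumerate all 16 subsets of {x, y, z, t}, treat each as equally likely, and
# count those in which the chance of drawing t is at least 0.25.
from itertools import combinations

members = ["x", "y", "z", "t"]
subsets = [set(c) for r in range(len(members) + 1)
           for c in combinations(members, r)]

good = [s for s in subsets if s and "t" in s and 1 / len(s) >= 0.25]

print(len(subsets))              # 16 subsets in total (including the empty one)
print(len(good))                 # 8 subsets where P(t) >= 0.25
print(len(good) / len(subsets))  # 0.5, as Dembski and Marks' argument requires
```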

But what is the justification for assuming each of these subsets is equally likely? Well, it requires using BPoIR, which the proof is meant to defend. And even if you grant the use of BPoIR, Keynes’ concerns apply. There is more than one way to apply BPoIR and not all of them support Dembski and Marks’ proof. Suppose for example the subset was created by the following procedure:

    • Start with one member selected at random as the subset
    • Toss a die:
      • If it is two or less then stop and use the current set as the subset
      • If it is higher than two then add another member, selected at random, to the subset
    • Continue tossing until the throw is two or less or all four members are in the subset

This gives a completely different probability distribution.

The probability of:

single item subset (x,y,z, or t) = 0.33/4 = 0.083

double item subset (xy, xz, yz, tx, ty, or tz) = 0.66*0.33/6 = 0.037

triple item subset (xyz, xyt, xzt, or yzt) = 0.66*0.66*0.33/4 = 0.037

four item subset (xyzt) = 0.296

So the combined probability of the subsets where the probability of selecting t is ≥ 0.25 (t, tx, ty, tz, xyt, xzt, yzt, xyzt) = 0.083+3*(0.037)+3*(0.037)+0.296 = 0.60 (to 2 decimal places), which is bigger than the 0.5 calculated using Dembski and Marks’ assumptions. In fact, using this method, the probability of getting a subset where the probability of selecting t is ≥ 0.25 can be made as close to 1 as desired by increasing the probability of adding a member. All of these methods treat all four members of the set equally and are as justified under BPoIR as Dembski and Marks’ assumption.
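The figure of 0.60 can also be checked by simulating the procedure directly. A short sketch (Python, purely illustrative):

```python
# Simulate the die-tossing procedure above and estimate the probability that
# the resulting subset gives P(t) >= 0.25. The answer is roughly 0.60, not the
# 0.50 obtained under Dembski and Marks' assumption.
import random

def build_subset(members=("x", "y", "z", "t")):
    remaining = list(members)
    random.shuffle(remaining)
    subset = {remaining.pop()}                      # one member chosen at random
    while remaining and random.randint(1, 6) > 2:   # 2/3 chance of adding another
        subset.add(remaining.pop())
    return subset

trials = 200_000
hits = 0
for _ in range(trials):
    s = build_subset()
    if "t" in s and 1 / len(s) >= 0.25:
        hits += 1

print(hits / trials)  # approximately 0.60
```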

Conclusion

Dembski and Marks’ paper places great stress on BPoIR being the way to calculate probabilities when there is no prior knowledge. But their proof itself includes prior knowledge. It is doubtful whether it makes sense to eliminate all prior knowledge; but if you attempt to eliminate as much prior knowledge as possible, as Keynes does, then BPoIR proves to be an illusion. It does not give a unique result, and some of the results are incompatible with their proof.