Per Gregory and Dr. Felsenstein’s request, here is an off-the-top-of-my-head list of the major theories I can think of that are related to Dembski’s ID theory. There are many more connections I see to mainstream theory, but these are the most easily connected. I won’t provide links, at least in this draft, but the terms are easily googleable. I may also update this article as I think about it more.

First, fundamental elements of Dembski’s ID theory:

- We can distinguish intelligent design from chance and necessity with complex specified information (CSI).
- Chance and necessity cannot generate CSI due to the conservation of information (COI).
- Intelligent agency can generate CSI.

Things like CSI:

- Randomness deficiency
- Martin-Löf test for randomness
- Shannon mutual information
- Algorithmic mutual information

Conservation of information theorems that apply to the previous list:

- Data processing inequality (chance)
- Chaitin’s incompleteness theorem (necessity)
- Levin’s law of independence conservation (both chance and necessity addressed)
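The first of these, the data processing inequality, can be checked numerically. Below is a rough sketch (the variable names, noise levels, and sample count are my own illustrative choices): for a Markov chain X → Y → Z, the estimated mutual information I(X;Z) should not exceed I(X;Y).

```python
import math
import random

def mutual_information(joint):
    """I(X;Y) in bits from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def empirical_joint(pairs):
    joint = {}
    for pair in pairs:
        joint[pair] = joint.get(pair, 0.0) + 1.0 / len(pairs)
    return joint

# Markov chain X -> Y -> Z: Z is computed from Y alone, through extra noise.
random.seed(0)
flip = lambda bit, eps: bit ^ (random.random() < eps)
samples = []
for _ in range(100_000):
    x = random.randint(0, 1)
    y = flip(x, 0.10)   # Y: X passed through a 10% noisy channel
    z = flip(y, 0.20)   # Z: Y passed through a further 20% noisy channel
    samples.append((x, y, z))

i_xy = mutual_information(empirical_joint([(x, y) for x, y, z in samples]))
i_xz = mutual_information(empirical_joint([(x, z) for x, y, z in samples]))

# Data processing inequality: processing Y cannot add information about X.
assert i_xz <= i_xy
```

The gap is large here (roughly 0.53 vs. 0.17 bits), so finite-sample noise doesn’t threaten the inequality in this toy case.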

Theories of things that can violate the previous COI theorems:

- Libertarian free will
- Halting oracles
- Teleological causation

Eric,

Could you quote the specific questions/requests to which this OP is a response?

Thanks.

Here’s the link to Gregory’s request:

http://theskepticalzone.com/wp/what-does-s-joshua-swamidass-mean-by-secular-scientist/comment-page-4/#comment-253384

EricMH,

“I can document all the connections I’ve discovered between the mathematical theory behind ID and mainstream statistics, information theory and computer science.”

Gregory,

Then why haven’t you yet?

I’ll have more time to comment later today, but just one point for now. Conservation of Information does not apply to specified information if you keep the specification the same (say, “codes for an organism with fitness at least X”). A particular example is given in my 2007 article refuting Dembski (Google my name and Dembski). The example involves an image which has the specification “looks like a flower”. A permutation of its bits leaves the information recoverable but destroys the flowerness. Permuting in reverse makes noise become a flower.
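Joe’s permutation example is easy to simulate. Here is a rough sketch (using a repeated text string as a stand-in for the flower image, and zlib compressibility as a crude stand-in for “satisfies the specification”):

```python
import random
import zlib

# A highly "specified" byte string standing in for the flower image:
# visibly patterned, hence very compressible.
flower = b"petal " * 200

rng = random.Random(42)
perm = list(range(len(flower)))
rng.shuffle(perm)
scrambled = bytes(flower[i] for i in perm)

# The permutation is a bijection, so the information content is still
# fully recoverable by inverting it...
inverse = [0] * len(perm)
for pos, src in enumerate(perm):
    inverse[src] = pos
restored = bytes(scrambled[i] for i in inverse)
assert restored == flower

# ...but the specification ("visibly patterned") is destroyed: the
# scrambled version is far less compressible than the original.
assert len(zlib.compress(scrambled)) > 3 * len(zlib.compress(flower))
```

The same permutation applied in reverse to the scrambled string turns “noise” back into the patterned original, which is Joe’s point about permuting in reverse.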

FWIW, I have an alternate approach to ID and Creationism more along the lines of Behe and Tour (who actually doesn’t call himself an ID proponent).

It’s well within science to say an event or a structure requires highly specific conditions to emerge, as opposed to emerging from a broad range of conditions. An example is below.

Such structures do not emerge from a random choice of conditions. For example, random orientations of the dominoes, random velocities, random z positions, etc. will not result in dominoes standing up. Random choices will, for the most part, result in the dominoes lying down. Standing up and lying down can be said to be macrostates (borrowing the language of statistical mechanics and thermodynamics).
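The point about random conditions can be made concrete with a toy Monte Carlo simulation. The thresholds below are illustrative, not real physics:

```python
import math
import random

# Toy model (illustrative thresholds, not physics): a dropped domino ends
# up standing only if its tilt, drop height, and lateral speed all fall
# within narrow ranges; otherwise it ends up lying down.
def lands_standing(rng):
    tilt = rng.uniform(-90, 90)    # degrees from vertical
    height = rng.uniform(0, 10)    # drop height in cm
    speed = rng.uniform(0, 5)      # lateral speed in cm/s
    return abs(tilt) < 5 and height < 1 and speed < 0.5

rng = random.Random(1)
trials = 200_000
p_stand = sum(lands_standing(rng) for _ in range(trials)) / trials

# "Lying down" is the overwhelmingly probable macrostate; the chance that
# 100 dominoes all land standing is p_stand**100, whose magnitude we show
# via its base-10 logarithm to avoid floating-point underflow.
print(p_stand)                     # around 5e-4 with these thresholds
print(100 * math.log10(p_stand))   # around -330: astronomically unlikely
```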

The above claims are well within science. I don’t believe it is a provable postulate that ID is the best explanation for such improbable systems, BUT one can adopt it as an axiom or article of faith.

What can be falsified is the claim that a given macrostate is improbable: one can demonstrate that a macrostate postulated as improbable actually is not. That’s well within science.

So it’s well within science to say the origin of life and certain phylogenetic scenarios (like emergence of Eukaryotes) are violations of expectation by astronomical levels. What one does with that inference is a separate set of axioms.

@Sal

Still going on with the assumption that the origin of life must have been like the spontaneous assembly of E. coli out of peanut butter.

EricMH:

No, because CSI is defined as excluding chance and necessity. Thus, any argument that “X has CSI, therefore X can’t have evolved” is circular. Even Winston Ewert acknowledged this in an OP at Uncommon Descent:

EricMH:

Chance and necessity can’t generate CSI by definition. See the previous comment.

This assumes that intelligence involves something other than chance and necessity. That needs to be demonstrated, not assumed.

Regarding Dembski and Marks’ “Law of Conservation of Information”, I remember pointing out to Dembski that it isn’t a law, it isn’t about conservation, and it isn’t about information.

More on this later.

keiths is correct that one cannot establish the extreme improbability of getting a well-adapted organism by citing it as having CSI. After 2005-2006 Dembski acknowledged that one needed to calculate that probability in order to know whether there was CSI present.

Before then, for example in No Free Lunch in 2002, Dembski presented CSI as based on a calculation of the probability under random sampling from all genomes, equally weighted. He relied on a Law of Conservation of Complex Specified Information to show that evolutionary forces could not make a genome have CSI. That argument was fallacious, as I have argued in my 2007 article: it failed to keep the specification the same before and after. After 2005-2006 the LCCSI gets mentioned less and less, and CSI becomes a useless tacked-on designation that adds nothing to one’s conclusions.

Anyway, Eric, do you acknowledge that there is no conservation law for Specified Information?

Hi, Eric. I’ve been sort-of following your arguments along these lines for a while, and I’m afraid I don’t think you’ve adequately shown that humans (or free will or teleological causation as in your statement here) exceed the proven limits on what algorithmic processes can do. I’ll run through some examples, and explain why I don’t find your case convincing.

Let me start with Chaitin’s incompleteness theorem. This essentially says that no proof system (or algorithm) can reliably identify strings (or other entities) with very high algorithmic complexity. Applied to humans, this means there’s no way you can look at a string and correctly say “I’m completely certain there’s no pattern here.” You can certainly say “I haven’t noticed a pattern here … yet”, but for sufficiently long strings there’ll always remain that possibility that there is a pattern and you just haven’t spotted it. Therefore, I don’t see how humans (or in general any entity with free will) can violate Chaitin’s incompleteness theorem.
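For reference, one common formulation of Chaitin’s incompleteness theorem (my paraphrase): for any consistent, sufficiently strong formal system $F$, there is a constant $L_F$ (roughly the complexity of $F$ itself) such that

```latex
\exists L_F \;\forall s:\quad F \nvdash \; K(s) > L_F
```

i.e. $F$ proves no statement of the form “the Kolmogorov complexity of $s$ exceeds $L_F$”, even though almost all sufficiently long strings satisfy it.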

A halting oracle certainly could violate Chaitin’s incompleteness theorem, but since there’s no reason to think they exist, they’re not particularly relevant. In other places, you’ve said you think humans are halting oracles, but we’re clearly not. Consider the following simple program:

for each even integer N starting with 4:
    for each integer M starting with 2:
        if isprime(M) and isprime(N-M), then:
            skip to the next value of N (“break” out of the inner loop)
        else if M > N/2:
            print “Goldbach’s conjecture is false!” and halt

The question of whether or not this program halts is equivalent (in reverse) to one form of Goldbach’s conjecture, and humans have been trying to figure out whether that’s true since 1742. If we were halting oracles, we’d know.

As for teleological causation, I’m not even sure how the theorem would apply.
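For concreteness, here is a runnable Python version of the Goldbach program above. The helper names are mine, and a cutoff is added so the check terminates; the original, unbounded program halts if and only if a counterexample exists, which is why its halting status encodes the conjecture:

```python
def isprime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_counterexample(limit):
    """Return the first even n in [4, limit] that is NOT a sum of two
    primes, or None if every even n up to limit checks out. The original
    program has no limit: it halts iff such an n exists."""
    for n in range(4, limit + 1, 2):
        if not any(isprime(m) and isprime(n - m)
                   for m in range(2, n // 2 + 1)):
            return n
    return None

print(goldbach_counterexample(10_000))  # -> None (no counterexample below 10000)
```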

I’ll address the data processing inequality and Levin’s law of independence conservation together, since they’re essentially the same thing, but stated in terms of statistical information theory vs algorithmic information theory. The basic idea of both is that it’s not possible to produce information about some external thing, without a source of information about that thing (where “information”, “about”, etc have different definitions in the different information theories). Again, I don’t see any way for humans to violate this. Consider the game of 20 questions: “I’m thinking of something… you need to ask me questions about it and try to figure out what it is.” And you do need to ask questions; without the answers (or some other source of information), you can’t do any better than guessing.

In other places, you’ve suggested that human understanding of mathematics has exceeded the limits of Levin’s law. I’d have to see the details of your analysis, but I don’t see how that could work. The things we know about math come (at least in principle) from a small set of inference rules applied to a small set of axioms. (It’s a bit more complicated than that because of things like the axiom of choice, where we’re not sure if it’s true, or even if the question of its truth is entirely meaningful. But let me duck that complication…) What this means is that our knowledge of mathematics is (in principle) algorithmically derivable from those axioms and inference rules, and therefore shouldn’t contain any algorithmic information about mathematical truth that isn’t already contained in those axioms and rules. And since the axioms and rules are small, they contain negligible algorithmic information.

Actually, I’ll expand on this a bit, because there’s another subtlety I ducked. Consider four sets of mathematical statements:

A) Our favorite set of axioms. This is very small, so it doesn’t contain much algorithmic information.

B) The set of theorems we’ve (validly) proven from our axioms.

C) The set of all theorems that can be derived from our axioms. This is a proper superset of B. Most important for our purposes, it can be algorithmically derived from the axioms by a very small program, so (intuitively) it has negligible algorithmic information beyond the information in the axioms.

D) The set of all statements that are actually true (in some particular model). This is a proper superset of C.

I claim that in order for our knowledge of mathematics to indicate a violation of Levin’s law, B would have to contain algorithmic information about D that is not also in C. This is theoretically possible (if the choice of which theorems we’ve proven vs. those we haven’t somehow contains information about truths that cannot be proven from our axioms), but I don’t see how it’s at all plausible.

A halting oracle would be able to violate Levin’s law here, but again they don’t seem to exist. And again I don’t see how teleological causation would apply.

There’s a related claim you’ve made in other places: that humans can create new axioms but algorithms can’t. I don’t think this claim is sufficiently specific to be evaluated; in particular, I don’t know of anywhere you’ve stated the requirements for creating new axioms, and most specifically what quality requirements you demand of them. Basically, I think that if you place any nontrivial quality requirements on proposed new axioms (e.g. a guarantee of actual truth in some model, independence from other axioms, consistency with other axioms, etc.) then humans will fail them. If you have only trivial quality requirements, then an algorithm (possibly augmented with simple randomness) can do it.

To illustrate this point, consider the early attempts at a formal set theory, which implied Russell’s paradox and were therefore internally inconsistent. Or Euclid’s parallel postulate, which was suspected for centuries of not being independent of his other postulates, was eventually proven to be independent of them, and was then convincingly shown (by Einstein) to be false of the physical world.

There are a few at this point.

1. Dembski’s partial proof of the law in his No Free Lunch book. He does well explaining everything, but doesn’t quite boil it down to formal specifics.

2. Dr. Ewert’s “Improbability of Algorithmic Specified Complexity”, a formal proof: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.672.8155&rep=rep1&type=pdf

3. Dr. Montañez expanded on Dr. Ewert’s work and proved a conservation law for canonical specified complexity in his recent Bio-Complexity article: http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2018.4

4. I have a more modest COI for ASC currently going through the Bio-C review process. This one is closer in form to Dembski’s original idea.

Thanks Gordon, some very good comments. I’ll address this one for now, because it is a very common criticism, and underlies your other points.

Just because we cannot solve a particular halting problem doesn’t mean we are not a halting oracle, of a sort.

For example, take the whole set of halting problems that the halting oracle can solve. Subtract one element from the set. The set is still not computable. You can even subtract an infinite number of elements from the set and not render it computable. This more limited, yet still uncomputable set, I term a partial halting oracle.

So, just because we cannot solve the halting problem for the Goldbach program does not logically imply we are not partial halting oracles, which are still uncomputable.

Of course, neither does this argument imply we are partial halting oracles. I am only saying your proposed counter example does not falsify the hypothesis. Other arguments are necessary to make the positive case.

If we just rely on the words, I can see how it seems circular. But mathematically it is not. Canonical CSI is a kind of metric known as randomness deficiency, first formally proposed by Kolmogorov (to my knowledge), but the basic insight goes back as far as Laplace, who noted that some occurrences somehow stand out as being unlikely to be random (e.g. flipping 100 heads in a row), even though every particular sequence of 100 coin flips has exactly the same very small probability of occurring.
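For concreteness, here is my paraphrase (not a quotation from any of the papers) of the formal shapes involved: Kolmogorov’s randomness deficiency for a binary string $x$ of length $n$, and Ewert’s algorithmic specified complexity, which has the same “improbability minus description length” form:

```latex
d_n(x) \;=\; n - K(x \mid n)
\qquad\qquad
\mathrm{ASC}(x) \;=\; -\log_2 P(x) \;-\; K(x \mid C)
```

Here $K$ is Kolmogorov complexity, $P$ is the chance hypothesis, and $C$ is the context used for the specification; a large value means $x$ is far more compressible (given the context) than its probability would suggest.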

Dembski’s original explanatory filter makes this point clear. First of all, small probability by itself cannot exclude an explanation if everything has a small probability.

E.g. every evolutionary outcome can have the same very small probability of occurring, so small probability in and of itself does not disqualify the evolutionary explanation. That’s why Dembski has the second criterion of “specified”.

Returning to the standard coin flip example, even though each individual sequence of 100 flips has exactly the same probability of occurring, sequence patterns have different probabilities of occurring. The pattern of equal heads and tails occurs much more frequently than all heads or all tails. The all heads/tails sequences are said to be specified.
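In numbers, for 100 tosses (a quick Python check):

```python
from math import comb

n = 100
p_each_sequence = 1 / 2**n            # identical for every specific sequence
p_all_heads = 1 / 2**n                # exactly one sequence fits "all heads"
p_balanced = comb(n, n // 2) / 2**n   # many sequences fit "50 heads, 50 tails"

# The individual sequences are equiprobable, but the *patterns* are not:
print(comb(n, n // 2))  # ~1.01e29 sequences realize the balanced pattern
print(p_balanced)       # ~0.0796
```

So the balanced pattern is realized about 10^29 times more often than the all-heads pattern, even though any one balanced sequence is exactly as improbable as the all-heads sequence.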

Now, turning to the case of evolution, even if every outcome has exactly the same very small probability, only a very small number of these outcomes are also highly specified. So, if these outcomes occur, then it implies the evolution explanation is highly unlikely, and another explanation is in order. To find this other explanation we look for some other cause that creates artifacts with the same specification, but much higher probability. With the close analogy between biological organisms and human engineering some kind of intelligence analogous to human intelligence seems to be a much better explanation. There may yet be some explanation other than intelligence that is even better, but intelligence is certainly better than evolution.

Now, the conservation of information comes into play here. We might say, yes, this sequence has high CSI. But, perhaps there is a trivial way to take pretty much any random sequence and give it high CSI. Conservation of information says no.

So, hopefully this explanation shows why CSI -> not evolved is not circular, since small probability by itself is insufficient to disqualify the evolutionary explanation. We need the second criterion of specificity.

At some point in the future I’ll go into detail about why Dembski’s concept of semiotic agents is important for defining specificity. Briefly, it is because as we go up the hierarchy of semiotic agents (e.g. finite automata -> pushdown automata -> Turing machines -> oracle machines) we are able to achieve shorter and shorter descriptions. Conciseness of description goes hand in hand with greater computational power.

How would you distinguish this from Dembski’s explanatory filter? It seems to have the same elements of a wide range of possibilities and a very narrow target.

But partial halting oracles are computable. Or, more precisely, there are algorithms that act as partial halting oracles. For instance, consider an algorithm that simulates an arbitrary Turing machine, and if it ever sees the complete state of the simulated TM repeat (that is, the machine’s state, head position, and tape contents are the same at steps N and M, where N > M), concludes that the simulated TM will never halt.

More generally, some subsets of the halting problem are computable, and some are uncomputable. In order to show that humans can exceed the capabilities of algorithms, you’d have to show that humans can solve the halting problem for one of the uncomputable subproblems.
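A minimal sketch of that repeat-state detector, using a tiny Turing-machine representation of my own devising (it only catches loops whose complete configuration recurs, and says “unknown” when its step budget runs out):

```python
def run(tm, max_steps=10_000):
    """Simulate a toy Turing machine starting in state 'A' on a blank
    tape. tm maps (state, symbol) -> (new_state, written_symbol, move);
    a missing key means the machine halts. Returns 'halts', 'loops'
    (a complete configuration repeated, so it provably never halts),
    or 'unknown' (step budget exhausted)."""
    state, head, tape = 'A', 0, {}
    seen = set()
    for _ in range(max_steps):
        config = (state, head, tuple(sorted(tape.items())))
        if config in seen:
            return 'loops'
        seen.add(config)
        key = (state, tape.get(head, 0))
        if key not in tm:
            return 'halts'
        state, tape[head], move = tm[key]
        head += move
    return 'unknown'

# Writes two 1s, then halts (there is no rule for state 'C'):
tm_halt = {('A', 0): ('B', 1, 1), ('B', 0): ('C', 1, 1)}
# Bounces between states 'A' and 'B' forever without moving:
tm_loop = {('A', 0): ('B', 0, 0), ('B', 0): ('A', 0, 0)}

print(run(tm_halt))  # -> halts
print(run(tm_loop))  # -> loops
```

A machine that, say, marches rightward forever never repeats a complete configuration, so this decider returns “unknown” for it: a concrete reminder that the computable subset is only partial.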

Right, I agree some sort of positive case has to be made. My only point was the counter example did not falsify the hypothesis, so you cannot take it as a foregone conclusion that humans have to be a computable set. That seemed to be the premise behind much of your argumentation.

keiths:

EricMH:

Even mathematically it is circular. Here Dembski defines the P(T|H) term he uses in the expression for specified complexity:

An object like the flagellum therefore can’t exhibit CSI unless P(T|H) is sufficiently low. Hence the circularity:

1. Determine that the object is extremely unlikely to have evolved (or been formed by some other “material mechanism”).

2. Conclude that it exhibits CSI.

3. Conclude that since it exhibits CSI, it’s extremely unlikely to have evolved.

Whether or not you call that “circularity”, the point is that at the end of step 1 you have your conclusions about evolution. Steps 2 and 3 add nothing.

Eric, I’d like to hold off on ASC for the moment (I’ll get back to it, I promise).

Joe:

Right. CSI is useless.

I consider the reasoning circular because it uses the premise “X is extremely unlikely to have evolved” to conclude, via an intermediate step, that “X is extremely unlikely to have evolved”. That isn’t helpful, and we don’t need Dembski to tell us that. We’d like for him (or anyone else) to tell us how to determine P(T|H).

EricMH,

I addressed that question in an OP several years ago:

Eric,

Even Dembski’s application of “specification” is flawed.

He introduced it to avoid the metaphorical problem of drawing a bullseye around the arrow after it had already landed. For example, in the case of the bacterial flagellum, he reasoned that the evolutionary target should not be considered to be the exact flagellum we see today, but rather any object satisfying the specification of “level 4 concept or less”:

There are lots of problems with this, but perhaps the biggest one is that the bullseye is still being drawn too narrowly. Evolution doesn’t care whether a concept is “level 4 or less”, and it certainly doesn’t care whether something can be described as a “bidirectional rotary motor-driven propeller.” Evolution doesn’t care about anything except fitness, and the only legitimate target is therefore “anything at all that would sufficiently increase fitness starting from a given ancestral population”. (Keeping in mind that the fitness landscape changes over time.)

Good luck to anyone trying to quantify that.

CSI is hopeless. Dembski seems to have realized that and moved on.

I didn’t mean to imply it was a foregone conclusion, just that you needed to make a positive case (and that the specific claim that we are halting oracles is false). But let me go further and argue that we’re almost certainly less capable than algorithms at this sort of thing.

Turing’s original proof allows an algorithm to either solve the halting problem incompletely (i.e. say “dunno” for some programs), or completely but sometimes incorrectly. To violate this (in the original form) would require that we both always answer and never answer incorrectly. In practice, humans’ answers are neither complete nor consistently correct, and if you want to say that humans exceed the capability of algorithms at this, you have to take both of our failure modes into account.

But when it comes to correctly figuring out what a given program will do, even over short periods of time (not the potentially infinite time needed to answer the halting problem), humans are really pretty bad. We get it wrong even on programs we’ve written ourselves, written using programming practices that (try to) make our programs easier to understand. I don’t have the exact quote, but I remember one programmer lamenting that he spent about half his time staring at code, trying to figure out why it didn’t work… and the other half staring at code, trying to figure out why it did work.

Because we’re bad at this, we (generally) test software before releasing it on the world. In other words, we have a computer tell us what the program does, because the computer is much, much better at figuring this out than we are. And since we’re much worse at solving the what-does-the-program-do-over-finite-time problem, I don’t see any plausible way for us to be better at the infinite-time version of the problem.

keiths,

I’ve always wondered about that. I’m not sure your (linked) explanation answers it, but I’m pleased to see someone else has been puzzled like me.

I am not worried by the informality of Dembski’s proof. I can see what he is doing and it is fine. It’s just that it is irrelevant to arguing about evolution. He compared apples to oranges when we need to compare apples to apples.

To show that evolutionary processes cannot get us into a state of high adaptation, we need to show that you can only get Specified Information to be Complex if you start out with it Complex, with the same specification. You can only make a bird able to fly 20 miles per hour if it already can fly that fast, for example.

But that’s not what Dembski does. He constructs an elaborate specification for the “before” generation, using the “after” generation’s CSI states T, by finding all states in the “before” generation that lead to the states T under the evolutionary processes. These can rather easily be seen to be an equally rare set of states.

But they aren’t the same specification.

In fact, constructing the specification that way uses our knowledge of the processes involved. Dembski had insisted that the specification be “separable” from the processes. But then he violated his own condition. Wesley Elsberry and Jeffrey Shallit pointed this out in their critique of Dembski in 2003 (published later in Synthese).

Is there a conservation law for specified information when the specification is held the same? No. It’s very easy to find counterexamples. And that case, specification held the same, is the one Dembski needed.

What is missing here is an appreciation of the brute fact that biological entities require viability. You are basically amazed at the rarity of the observed solutions compared to the vastness of the total configuration space and posit an explanation of underlying purpose. However, when one takes into account that the vast majority of configurations are non-viable, and therefore can and will never be observed, it becomes easier to understand the apparent bias. This is one reason why comparisons with coin flips are misplaced – every coin flip outcome is viable and can in principle be observed, whereas most configurations of biological elements are non-viable and will therefore never materialise. Hence the apparent bias towards functionality (and with it, complexity).

On top of this, biological entities are not composed of inert elements like coins or dominoes (pay attention, Sal) but of highly reactive organic molecules. Moreover, the entities are not composed de novo at every generation but originate from minor variations of the previous, already viable, parent stock.

These mathematical ‘proofs’ of why natural processes are extremely unlikely to result in complex biological entities completely ignore their physical/chemical nature, the physics of the environment in which the developments take place, and the fact that they will only exist for us to observe them if they are viable in their environment in the first place.

This is just Douglas Adams’ sentient puddle all over again.

graham2,

I’d be interested in hearing your (or anyone else’s) objections to my explanation. I just reread it and it still seems right to me.

It would still be daft to say that human intelligence “is” (how bizarre, to say “is” rather than something like “is equipped with”) a partial halting oracle, inasmuch as an oracle responds infallibly in a single time step, and a human commonly requires a long time to respond fallibly.

keiths,

Your explanation of the paradox is one way of countering the argument. Another way would be if there were a filter that removes most of the ‘unremarkable’ results before we even get to see them. If there is a process, generally invisible to us unless we know where to look for it, that eliminates all those unremarkable and ‘random’-looking outcomes outwith our specification, we really should not be surprised at observing results that fit our specification.

Such a filter exists in biology – it is called death. An organism that isn’t viable at its very origin generally won’t ever see the light of day. Keeping in mind that progeny is already going to be very close to its viable ancestors, the odds are excellent that it is going to be viable itself. In cases where it isn’t, because of serious deleterious mutations or other defects, it will generally die off long before it manifests itself as an organism in the first place.

faded_Glory,

What needs to be explained is the alive condition itself. The filter does not exist prior to this condition.

colewd,

That needs explaining too, yes, but that is a different discussion. This one is about evolution and the apparent low probability of specified complexity. My point is that we only have access to a highly biased dataset: the collection of the winners.

It isn’t very remarkable to meet a lot of lottery winners at a gathering of people who won the lottery. Do we need a special science to explain that, or do we simply need to understand the totality of the process that led to this meeting before concluding that we experienced something really unlikely?

Natural selection (such as the inviability you mention) is of course the explanation for how organisms could become well-adapted.

However, this discussion is not just about that; it is about the mathematical arguments that people like William Dembski have given which purport to show that natural selection cannot work to do that. Commenter keiths has reminded us of his argument that Dembski’s concept of Complex Specified Information does not achieve his goal, but is simply redundant to any argument that adaptation is very improbable.

We are waiting to hear from Eric Holloway about that issue, and about the problems that have been raised with Dembski’s use of a Law of Conservation of Complex Specified Information.

As I read your stuff I was quite warming to it, then at the end I went cold again. The problem seems to be the construction of personal sets of things that we each hold special (my personal ID numbers, etc.): if these come up, then it’s not random. But the construction of personal sets of numbers seems arbitrary. I could probably match any dice result to something in my life.

Of course, if some dice throw did match my SSN I would be impressed; I’m just not sure we have got to the bottom of why.

graham2,

They are arbitrary, and I can see why that bothers you. It bothered me at first, too. That’s what I was getting at when I wrote this:

Delbert and I don’t know each other or each other’s SSNs. I’d be suspicious if my SSN came up but not if his did. If he ran the same experiment, he’d be suspicious if his SSN came up but not if mine did. Since we disagree about when to be suspicious, doesn’t that mean we’re being hopelessly subjective?

Is that a fair description of what’s bothering you about this?

Maybe, but you’d have to work at it. Most nine-digit numbers, unlike your SSN, would have no obvious significance to you.

Here we reach the deep core of the ID argument, which posits that certain patterns are inherently meaningful, and that we intuitively recognise them as such (e.g. 500 coin flips ending up all heads).

But it’s pretty hard to formalize this intuition, especially if people obstinately refuse to see things your way (“Huh, I bet your coin has two heads”).

Corneel, keiths, graham2:

Surely in the case of biological organisms we need to ask whether they are surprisingly well-adapted. This removes the arbitrariness.

Sure, fully agree; fitness is the only appropriate yardstick. But then again, I am not an IDer 😀

Discussions with ID proponents tend to be about complexity, information and molecular function, not adaptation. I think it was very telling when Michael Behe in his latest book depicted adaptation as some sort of nuisance process that encouraged the influx of damaging mutations, opposing the build-up of complexity in molecular function (which is what we really should be interested in, apparently).

I engaged in an argument about all this on an ID site, before I was banned: if a series of numbers turned up that spelled out the digits of pi, they would satisfy statistical tests of randomness beautifully, yet be obviously rigged.

I wish someone would put me out of my misery.

Joe,

Yes, but my OP isn’t about biology or evolution. It’s about coin flips and dice tosses:

Corneel,

I think that 500 heads is meaningful, though not inherently so. The coin doesn’t care if it lands heads up 500 times in a row, and neither do the laws of nature. But we do, and we are rightly suspicious when we see this particular pattern. The paradoxical question is: how do we justify our suspicion when every sequence of 500 tosses is equally likely under the assumption of fairness?

That’s the question my OP tries to answer.
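One rough way to make the standard resolution concrete, using zlib-compressed size as a crude stand-in for description length (Kolmogorov complexity): every specific sequence is equally improbable, but only a tiny minority admit short descriptions, and all-heads is one of them.

```python
import random
import zlib

def description_length(bits):
    """Crude stand-in for Kolmogorov complexity: zlib-compressed size."""
    return len(zlib.compress(bytes(bits)))

random.seed(7)
all_heads = [1] * 500
typical = [random.randint(0, 1) for _ in range(500)]

# Every specific 500-toss sequence has probability 2**-500 under a fair
# coin, but the all-heads sequence admits a far shorter description, and
# short descriptions are scarce: fewer than 2**(k+1) strings can be
# described in k bits or fewer.
assert description_length(all_heads) < description_length(typical) / 2
```

The counting bound in the comment is the key step: because so few sequences are highly compressible, landing on one of them is genuinely surprising under the fair-coin hypothesis, even though no individual sequence is more improbable than any other.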

graham2,

Are you familiar with Kolmogorov complexity as a measure of randomness? It can actually handle the case you describe.

The interesting thing about the 500 heads scenario is that while practically everyone agrees it would make them surprised and suspicious, their explanations differ wildly.

I’ve seen people offer all of the following explanations:

1. It’s a cognitive illusion akin to the lottery winner fallacy, and there is no legitimate reason to be surprised or suspicious.

2. It’s a one-off experiment. We have no reason to be suspicious until we repeat it.

3. We have a right to be suspicious only if the 500-heads outcome is prespecified.

4. We should be suspicious because there is only one way of getting 500 heads in a row, while there are lots of ways of getting roughly 50 percent heads and 50 percent tails in 500 tosses.

For the record, I think all of those are wrong.

Ah, I see. I would say that is just a function of our ignorance of and inability to spot the significance of the vast majority of possible outcomes. Here is an appropriate anecdote about the famous mathematician Srinivasa Ramanujan, as related by Godfrey Hardy:

500 heads in a row is just such an easy pattern that everyone will spot it. I guess that resonates a bit with your resolution.

Corneel:

Right. The pattern immediately strikes us as special. We become suspicious because the patterns that strike us as special form a relatively small subset of all possible patterns. In this particular case, there is only a one-in-2^500 chance of getting all heads if the coin is fair. We judge it as far more likely that the pattern came about due to something other than fair tosses.

The Ramanujan story reminds me of another interesting paradox; namely, that there cannot be a smallest uninteresting integer. The reason? The smallest uninteresting integer would be interesting for that very reason. It’s self-cancelling.

It follows that every integer is interesting.

It looks like you didn’t read my comment very carefully. I won’t respond any more to your argument until it seems you’ve read and understood my comment.

This has been formalized by quite a few mathematicians at this point. I’ve mentioned a couple in the top level article, e.g. randomness deficiency and tests for randomness.

Yes, so you have repeatedly claimed. But I suspect that every one of those tests requires one to specify the expected distribution (I may be wrong here); i.e. they enable you to reject the hypothesis that your outcome is a chance event, provided that you have specified the appropriate null hypothesis.

And now we are not even talking about the “necessity” part. When an event has repeated itself 500 times, not a lot of creativity has been involved, I would say.

EricMH:

I read and understood your comment, Eric, but specification doesn’t rescue the CSI argument. Even Winston Ewert understands this, as I showed earlier in the thread. Take some time and think it through.

All else being equal, changes in “specificational resources” simply change the single-event probability threshold below which something qualifies as exhibiting CSI. It remains true that the probability must be sufficiently low, and it remains true that the probability must be evaluated with respect to the “chance hypothesis” H, as Dembski defines it in his 2005 paper:

It helps if you understand the history behind Dembski’s argument. Joe explained it earlier in the thread:

If the earlier version of CSI had worked, Dembski would have had something significant, as he would have avoided the need to evaluate the probability of an object’s being formed by “Darwinian and other material mechanisms”. Alas, it didn’t work, so Dembski had to revise the definition of CSI.

The revised version is useless, because you already have to do all the work of showing that something couldn’t have evolved before concluding that it has CSI and therefore couldn’t have evolved. Besides, for most cases of biological interest, neither Dembski nor anyone else can calculate the necessary probabilities. Not even for the flagellum, which is the pet structure of IDers everywhere.