# Creating CSI with NS

Imagine a coin-tossing game.  On each turn, players toss a fair coin 500 times.  As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads.  The person with the highest product wins.
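The scoring rule is easy to state in code. Here is a minimal sketch in Python (the original script is in MatLab; the function name here is mine):

```python
from itertools import groupby

def runs_of_heads_product(tosses):
    """Product of the lengths of all runs of heads in a toss sequence."""
    product = 1
    for face, run in groupby(tosses):
        if face == 'H':
            product *= sum(1 for _ in run)
    return product

# The worked example from the post: H T T H H H T H T T H H H H T T T
# has runs of heads of length 1, 3, 1, 4, so the score is 1*3*1*4 = 12.
print(runs_of_heads_product("HTTHHHTHTTHHHHTTT"))  # 12
```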

In addition, there is a House jackpot.  Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible runs of coin-tosses.  However, I’m not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. If some bright mathematician can work it out for me, though, we can work out whether a series whose product exceeds 10^60 has CSI.  My ballpark estimate says it has.

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.
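A sketch of that scheme in Python (the original is a MatLab script; the population size and culling rule follow the description above, while the mutation rate and generation count are illustrative):

```python
import random
from itertools import groupby

LENGTH, POP_SIZE, GENERATIONS, MUT_RATE = 500, 100, 200, 0.002

def score(genome):
    """Fitness: product of the lengths of all runs of 'H'."""
    p = 1
    for face, run in groupby(genome):
        if face == 'H':
            p *= sum(1 for _ in run)
    return p

def mutate(genome):
    # independent point mutations: flip each position with small probability
    return [('T' if g == 'H' else 'H') if random.random() < MUT_RATE else g
            for g in genome]

# start from a random population, as in the post
population = [[random.choice('HT') for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]
initial_best = max(score(g) for g in population)

for _ in range(GENERATIONS):
    population.sort(key=score, reverse=True)
    survivors = population[:POP_SIZE // 2]       # cull the 50 lowest products
    population = survivors + [mutate(g) for g in survivors]

final_best = max(score(g) for g in population)
print(f"best product: {initial_best:.3e} -> {final_best:.3e}")
```

Because survivors are carried over intact, the best score never decreases from one generation to the next.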

I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MatLab, and will post the script below.  Sorry I don’t speak anything more geek-friendly than MatLab (well, a little Java, but MatLab is way easier for this).

## 529 thoughts on “Creating CSI with NS”

1. junkdnaforlife: “[Chaitin, Kolmogorov, and Solomonoff] What they said was that a string of 0s and 1s becomes increasingly random as the shortest computer program that generates the string increases in length.”

You are referring to an English description of the output instead of a computer program that generates the output.

The computer program ‘A’ is actually one byte shorter than the computer program ‘B’ even though both generate the same information.

As Mike Elzinga has mentioned many times, at the physical level, where things actually happen, the programs are not the same.

The output from program ‘A’ should therefore be more random than ‘B’, despite the fact that the information is identical, and this is according to your supplied quote.

What I need to know is, are you referring to the actual “target information” as containing “CSI” or is the generator of the information, in this case the object code for the CPU, the “information” that contains “CSI”?

2. “What I need to know is, are you referring to the actual “target information” as containing “CSI” or is the generator of the information, in this case the object code for the CPU, the “information” that contains “CSI”?”

CSI would be the information in the string. But this depends on the population from which it came, and the specificity of the string. If we are talking binary strings or coin tosses, specificity can be measured by Kolmogorov complexity; this is the simply-describable measure. (It may be possible to expand specificity to include lossless data compression with a ratio > 2:1.) CSI for a binary string would then be a string that was output from an equiprobable population of {1|0}, yet could be described simply as something like, “every third coin is heads.”

So in your example, the program is set to just output the number 1 100 times. It meets the specificity criterion, because we can describe the 100 bits as “1 100 times.” However, it is not complex, because it is coming from a population/generator where the probability of outputting 1 is 1. Now if you switch the generator to equal chances of 1 | 0 for 100 bits, thereby meeting the complexity measure, and hit a simply describable pattern like “100 1’s,” you may consider rejecting chance.
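The distinction can be made concrete with a quick calculation (a sketch, assuming independent draws; variable names are mine):

```python
# Probability of hitting the simply-describable pattern "100 1's"
# under each generator described above.
p_always_one = 1.0 ** 100   # generator whose probability of outputting 1 is 1
p_fair = 0.5 ** 100         # equiprobable {1|0} generator

print(p_always_one)          # 1.0
print(f"{p_fair:.3e}")       # about 7.889e-31
```

The pattern is specified in both cases, but only under the equiprobable generator is it also improbable.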

3. junkdnaforlife: ID/Creationists say that physics and chemistry will come up short, precisely because of what the maps are telling us. You argue the maps won’t hold. Specifically in the case of origins, I argue they will.
You want to hug it out now?

You do realize don’t you that the rules by which matter condenses have been studied in extreme detail by taking matter apart and not by just making up stuff and claiming it applies?

It is you ID/creationists that keep claiming the rules by which matter condenses are not sufficient to explain why matter condenses. Yet you can’t tell us where the “barriers” are.

The point is that there is nothing in the rules we know that prevents living organisms from existing and, in fact, from having emerged from non-living processes that are nearly as complex. The fact that we haven’t found the particular needle – or needles – in a mountain of needles does not change the fact that the rules allow it.

We know the rules out to something like 13 to 15 significant figures. The rules are so rich in their consequences that we see all kinds of analogous process to life even in systems much simpler.

The question about the origins of life is so interesting precisely because we see it is possible, not only from what we know about the rules but also from literally millions of commonplace examples we see happening around us all the time.

Unfortunately, too many people pay little attention to their surroundings and live only with their own minds and preconceived dogmas. Nor will they avail themselves of the opportunities to go out and learn what we have discovered, no matter how much we try to encourage them to do so.

That kind of stubborn ignorance is not our fault. It derives from dogmas that are intrinsically hostile to science yet want to have the veneer of science in the form of a concocted pseudoscience that props up dogma. It’s a socio/political war on science; and it is not itself science.

So stop the war and go out and study the world around you. And, please, do benefit from what we already have learned.

4. The sequence of coin flips does not have a function; it is the result of “maximize the product of H-runs,” and this describes the fitness function in Matlab. Whereas specific dna sequence = interacts with the glucocorticoid receptor: the two describe each other, universally, before we looked. However, Liz’s coin toss string does not = “maximize the product of H-runs”; rather, “maximize the product of H-runs” = her Matlab fitness function. Instead, Liz’s coin toss string = alternating H and T to a million would be an example of describing her string. This is why these non-biological elements are mapped with this method.

It is this *mapping* that Mike E challenges, by saying that particles do not behave the way abstract representations do. ID’rs say the maps demonstrate the limit of physics and chemistry, and they will hold etc.

5. junkdnaforlife: “CSI would be the information in the string.”

Okay.

Toronto: “The output from program ‘A’ should therefore be more random than ‘B’, despite the fact that the information is identical, and this is according to your supplied quote.”

I think I have this backwards as ‘B’s output should be more random than ‘A’s.

junkdnaforlife: “[Chaitin, Kolmogorov, and Solomonoff] What they said was that a string of 0s and 1s becomes increasingly random as the shortest computer program that generates the string increases in length.”

So how do we know which of the two outputs is more random (according to your quote regarding the length of the non-“CSI” computer program code), despite the fact that they are identical?

Your answer, (that “CSI would be the information in the string”), would seem to indicate that simply looking at “information” is no indication of how “random” it is.

This means that Dembski cannot rely on the “information” in an object to determine “randomness” and therefore, cannot rely on”information” alone to determine “design”.

6. junkdnaforlife: It is this *mapping* that Mike E challenges, by saying that particles do not behave the way abstract representations do. ID’rs say the maps demonstrate the limit of physics and chemistry, and they will hold etc.

Please don’t misquote me. Atoms and molecules don’t behave the way ID/creationists’ “abstract representations” say they do.

When chemists, physicists, and biologists incorporate the rules of nature into their models, the models behave as nature does. It is one of the many ways we put knowledge to use as well as to verify our understanding.

Don’t ever make the mistake that we are using YOUR “rules” in our programs. YOUR rules don’t work because they were concocted for other reasons. They were not derived from the study of the physical universe.

7. The sequence of coin flips does not have a function; it is the result of “maximize the product of H-runs,” and this describes the fitness function in Matlab. Whereas specific dna sequence = interacts with the glucocorticoid receptor: the two describe each other, universally, before we looked. However, Liz’s coin toss string does not = “maximize the product of H-runs”; rather, “maximize the product of H-runs” = her Matlab fitness function. Instead, Liz’s coin toss string = alternating H and T to a million would be an example of describing her string. This is why these non-biological elements are mapped with this method.

If this: “specific dna sequence = interacts with the glucocorticoid receptor, describe each other” is supposed to have any meaning as description of the relationship between a DNA sequence and a function of the corresponding protein synthesized, then “specific sequence of coin tosses = maximize the product of H-runs” describe each other in exactly the same sense.

If *maximizes the product of H-runs* does NOT have meaning as a description of the string, because it does not tell us the exact sequence of the string, then *interacts with glucocorticoid receptor* is in the same sense NOT a description of the DNA sequence, because it does not tell us the exact sequence of DNA.

8. “So how do we know which of the two outputs is more random,”

This is why determining the frequencies in the population is so crucial. It’s easy when we set up the generator ourselves, then we know exactly what the frequencies of the elements are. And there is little controversy over considering rejecting chance if we set up a binary 1 | 0 generator with equal 1 and 0 frequencies and we hit a 200 bit string of alternating 1 and 0’s. The debate is when this gets applied to biology. Because how we determine the frequencies of the elements makes all the difference. The debate is about how accurate the population frequencies are.
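For the alternating-string example, the arithmetic is simple (a sketch; it assumes a fair, independent generator and counts both phases of the pattern):

```python
# Chance of drawing a 200-bit strictly alternating string from a fair,
# independent {0,1} generator; two strings match (starting with 0 or 1).
p_alternating = 2 * 0.5 ** 200

print(f"{p_alternating:.3e}")   # about 1.245e-60
```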

9. It’s not in the same sense; the biological function is universal, and it existed before we got there.

Which is the least accurate description:

1) 0000000000 = 10 0’s
2) specific dna sequence = interacts with the glucocorticoid receptor,
3) maximizes the product of H-runs = fitness function
4) Liz’s coin toss string = maximizes the product of H-runs

10. junkdnaforlife: “The debate is when this gets applied to biology.”

I agree.

That is the problem with Dembski’s focus on the math.

Biology is real and chemistry and physics determine what happens from one generation to the next.

Dembski errs when he tries to model biological mechanisms with math.

This attempt will always fail without constant reference to actual chemistry.

In chemistry and physics, some things simply can’t happen, but this is not taken into consideration by the ID side.

If it was, we would not get improbability arguments that cover the whole range of (2^n).

11. Mike in the Abel paper thread, when you said:

“Letters and numbers do not interact strongly with each other. Atoms and molecules do.”

And I say:

“Mike E challenges, by saying that particles do not behave the way abstract representations do.”

I think I got it.

12. You direct it at me actually:

“You are missing a crucial point. Letters and numbers do not interact strongly with each other. Atoms and molecules do.”

13. The point is that there is nothing in the rules we know that prevents living organisms from existing and, in fact, from having emerged from non-living processes that are nearly as complex. The fact that we haven’t found the particular needle – or needles – in a mountain of needles does not change the fact that the rules allow it.

Bare possibility isn’t an explanation of anything.

14. junkdnaforlife,

Are you agreeing with me that improbability arguments from the ID side that cover the whole range of (2^n) are not valid?

15. It’s not in the same sense; the biological function is universal, and it existed before we got there.

I have no idea what you mean by *the biological function is universal*. Obviously *interacts with glucocorticoid receptor* is not a universal function of DNA.

Of course the biological function in question existed before we looked. The coin toss string in question also maximized the product of H-runs before we looked. Our identifying the sequence of the string that fulfills the requirement *maximizes the product of H-runs* is equivalent to our identifying the sequence of the DNA string that fulfills the requirement *interacts with glucocorticoid receptor*.

Which is the least accurate description:

1) 0000000000 = 10 0’s
2) specific dna sequence = interacts with the glucocorticoid receptor,
3) maximizes the product of H-runs = fitness function
4) Liz’s coin toss string = maximizes the product of H-runs

(2), (3), and (4) are all rather inaccurate.
In (3) the description in front of *=* is one specific example of the term after *=*
(2) and (4) only make sense if *=* is taken to mean *results in*

16. Flat out 1/2^n depends. Rather k/n, where k is the number of possible functioning sequences and n is the number of possible sequences.

17. William J. Murray: Bare possibility isn’t an explanation of anything.

I don’t think anyone here claimed that it is. However, most ID-arguments, including the one addressed in this thread, are of the form: x is impossible in principle, therefore ID. Thus, all those ID-arguments crumble when it is shown that x is possible in principle.

18. even more accurately: In (2) and (4), the description after *=* is a property of the entity described in front of *=*.

19. junkdnaforlife: Mike in the Abel paper thread, when you said:
“Letters and numbers do not interact strongly with each other. Atoms and molecules do.”
And I say:
“Mike E challenges, by saying that particles do not behave the way abstract representations do.”
I think I got it.

I guess I didn’t understand what you meant by “abstract” representation.

Many of these programs that model a situation in nature are using knowledge about how nature finds solutions. Such knowledge falls into some broad categories of minimizing something like energy, maximizing something like entropy, or in general, finding what are called stationary points.

The natural selection algorithms, for example, are also suitable analogs for, say, minimizing a potential energy. Dawkins’s little Weasel program is a nice demonstration of something like a radioactive decay to a ground state, or of water draining from a cylinder by way of a nozzle at the base of the cylinder.

I can think of some applications for Elizabeth’s demonstration here because the program maximizes something. It could be an analog for energy or for fitness or for the formation of a molecule within an energy cascade.

Natural selection is an observed phenomenon in the physical world that applies to all kinds of simple to complex systems; not just to living organisms. It is observed in nature because the processes involved in natural selection have their roots in chemistry and physics. Natural selection is a phenomenological manifestation of those underlying chemical and physical laws but its usefulness as a concept comes in the simplicity with which it summarizes those underlying laws for complex systems interacting with a larger environment.

So when ID/creationists assert that “evolutionists” put the answer into such a program, what do they want as an alternative? The usual answer from them turns out to be that everything must be scrambled all over again with every trial. That is not how nature works; it is not even close.

These ID/creationist demands expose the Fundamental Misconception that runs through all ID/creationist writings, namely that it is all “spontaneous molecular chaos” down there and that atoms and molecules – or species, or alleles – must be modeled as randomly sampled, inert objects that do not interact among themselves or with anything else in the universe. Such notions betray profound ignorance of vast areas of science.

20. Joe G: And necessity does the job at producing algorithmically compressible patterns! Laws can produce specificity- crystals represent specificity- snowflakes represent specificity

And again computer programs, assembly instructions and encyclopedia articles are all CSI and not one can be algorithmically compressed.

A DNA sequence of a gene cannot be algorithmically compressed.

Meyer and Dembski have actually collaborated on ID concepts.

I’m not sure whether Meyer has collaborated with Dembski, although Meyer certainly cites Dembski copiously. But in the passage from Meyer you cite, Meyer is not using “complexity” in the sense in which Dembski uses it when he refers to “event-complexity” (“difficulty of reproducing the corresponding event by chance”), which can apply equally to a compressible sequence or an incompressible one. Dembski uses the term “descriptive complexity” to denote a sequence that is difficult to compress, and the term “specified complexity” for a pattern that is both easy to compress (“pattern simplicity”) and has “event-complexity”.

Meyer, on the other hand, only calls a pattern “complex” if it is incompressible as well as having lots of Shannon information, and “specified” if it “was specifically arranged to perform a function”.

This kind of confusion is, in my experience, typical of ID writings, but Dembski himself is generally clear about what he means in a given context. As this exercise was specifically undertaken to show that Dembski’s claim in Specification: The Pattern That Signifies Intelligence is false, I am using the definitions he gives in that paper for this purpose.

21. junkdnaforlife:
Liz you say:

“It is specified by Dembski’s definition, because genomes (sequences of heads and tails) whose product of runs-of-heads is very large is a tiny subset of the vast set of possible genomes.”

// Now here is Dembski explaining specificity maps pertaining to coin tosses, (R-) and (R’) are coin toss strings:

“6. Specificity:
The crucial difference between (R-) and (R’) is that (R’) exhibits a simple, easily described pattern whereas (R-) does not. To describe (R’), it is enough to note that this sequence lists binary numbers in increasing order. By contrast, (R-) cannot, so far as we can tell, be described any more simply than by repeating the sequence. Thus, what makes the pattern exhibited by (R’) a specification is that the pattern is easily described but the event it denotes is highly improbable and therefore very difficult to reproduce by chance. It’s this combination of pattern simplicity
(i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing
the corresponding event by chance) that makes the pattern exhibited by (R’) — but not (R-) — a specification.”

// There is a clear cut disconnect between what Dembski says and what you say he says, such as I had noted earlier. For a binary population of {H,T}, a string of random coin tosses has no function; therefore, to simulate the specific arrangements of dna sequences that function / all dna sequences with coin flips, your string needs to be simply describable, such as “every fourth coin is heads.”

As I said earlier, junkdnaforlife, that is an interesting point, and I don’t think that Dembski makes a very good job of connecting his idea that specification means “compressibility” (and, as Joe G points out above, Meyer actually makes the opposite connection, considering a sequence “complex” if it is not compressible, although he doesn’t seem to notice that he is in contradiction to Dembski!) with his idea that specification is also related to function, although he does write elsewhere (in No Free Lunch) that in biology, specification is always related to function. As I wanted to relate my exercise to biology, I went for a specification that was a function. (Had I merely gone for “compressibility”, I would, I think, and rightly, have been accused of merely incorporating my specification in my fitness criterion, which I have not done – my fitness criterion is a function that the “genome” sequences have to perform, and as there is only a very small subset that can perform it, that is a tiny subset that would be vanishingly unlikely to occur in the absence of NS.)

However, it turns out (because I’m not as green as I’m cabbage-looking, heh) that the winning sequences are also highly compressible, as they are strings of, typically, HHHT, HHHHT and HHHHHT, and that the highest scorer of all is extremely compressible: a repeating sequence of HHHHTs. We can easily compute the number of sequences of coin-tosses that can be as simply, or more simply, described, and thus compute chi for the final sequences using Dembski’s actual compressibility criteria.
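That top score is easy to check: a repeating HHHHT pattern packs 100 runs of four heads into 500 tosses, and its product comfortably clears the 10^60 jackpot. A quick Python sketch (the scoring function is my own restatement of the fitness rule):

```python
from itertools import groupby

def runs_product(tosses):
    """Product of the lengths of all runs of 'H'."""
    p = 1
    for face, run in groupby(tosses):
        if face == 'H':
            p *= sum(1 for _ in run)
    return p

best = "HHHHT" * 100                 # 500 tosses: 100 runs of exactly 4 heads
print(f"{runs_product(best):.4e}")   # 4**100 = 2**200, about 1.6069e60
```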

If someone would like to do that (Olegt?) we can see whether I still hit chi>1. I’m pretty confident I will. But it’s not as biologically interesting.

What you are doing has nothing to do with Dembski. More from Dembski:
“[Chaitin, Kolmogorov, and Solomonoff] What they said was that a string of 0s and 1s becomes increasingly random as the shortest computer program that generates the string increases in length. For the moment, we can think of a computer program as a short-hand description of a sequence of coin tosses. Thus, the sequence (N) is not very random because it has a very short description, namely, repeat ‘1’ a hundred times.”

“To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.”

Right. And so, if I end up with a sequence that can be generated by a program that says: repeat HHHHT 100 times, I will have generated an extremely compressible sequence, and you would have “reason to look for explanations other than chance”. And in this case, the explanation would be: this is the genome of a virtual organism whose probability of reproductive success was maximised if the product of its runs-of-head was maximised.

So you say:
“I started my exercise with a small snowball (a small amount of CSI if you like – indeed I measured it).”

And Dembski:

“Algorithms and natural laws are in principle incapable of explaining the origin of CSI. To be sure, algorithms and natural laws can explain the flow of CSI. Indeed, algorithms and natural laws are ideally suited for transmitting already existing CSI. As we shall see next, what they cannot do is explain its origin.”

Right. And so I have demonstrated not “flow” of CSI but generation of it – I end up with more than I started with. Specifically, I end up with so much that its chi exceeds Dembski’s threshold for rejecting “non-Design”. I suggest that Dembski is nearly right, only that his threshold (far more stringent than it need be, IMO, but never mind) is the threshold at which we should reject “processes with a flat probability distribution”. And possibly even “processes that do not involve selection”.
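For readers without the paper to hand, the quantity chi being referred to is defined in Dembski’s Specification paper roughly as follows (my transcription, not a quote from this thread):

```latex
\chi = -\log_2\!\left[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\right]
```

where \(\varphi_S(T)\) counts the patterns at least as simple to describe as the observed pattern \(T\), \(P(T\mid H)\) is the probability of \(T\) under the chance hypothesis \(H\), and design is to be inferred when \(\chi > 1\).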

Ironically, Dembski’s definition of “Intelligence” does not incorporate the notion of intention, merely of choice/aka selection. So he’s actually correct by his own definition! But he fails to notice that selection need not be “intentional”. Anything that promotes persistence will do the trick.

22. Well, the increments in fitness are becoming rarer and rarer. Up to 3.4286e+59 now.

Could take another week I think to hit the jackpot!

23. Interesting post. Olle Haggstrom’s point is related, I think.
It seems to me that given that the universe has a non-uniform structure, non-equiprobable patterns can occur, and that it is this property that allows the kinds of patterns to occur that Dembski wants to reject as “non-design”.

So if the ID argument was that an ID must have designed a non-uniform universe, that might be worth discussing, but it would render all the arguments about bacterial flagella and OOL moot.

So it would be interesting to know who, among ID proponents here, thinks that an ID must have actively interfered in the universe, once created, to produce the patterns we observe, and who considers that the universe itself was designed, ab initio, with the capacity to produce these patterns.

I think Joe G may fall in the second camp. Anyone else?

24. The question of just when and to what extent a “designer” acted/is acting is one that has been posed many times to the ID gurus. And JoeG.
I don’t recall there ever being an unequivocal answer – or even a clear “dunno”
I think it symptomatic of the intellectual disarray in the ID camp, which has been pointed out several times in this forum

25. “So it would be interesting to know who, among ID proponents here, thinks that an ID must have actively interfered in the universe, once created, to produce the patterns we observe, and who considers that the universe itself was designed, ab initio, with the capacity to produce these patterns.”

The thing that distinguishes I.D. from deism and some forms of theism is that their designer is definitely interventionist, and that’s why they argue against naturalistic processes which would be compatible with a universe that was front loaded with intent.

They do like “fine tuning” of the universe arguments, but their god apparently got its fine tuning slightly wrong, and had to do some local biological intelligent designing on top of it!

26. Well, I do think there is a great deal of heterogeneity in ID positions (after all, they include YEC!).

However, what I find odd (and frustrating) is that rather than trying to figure out what is consistent across those positions, and what is inconsistent, they close ranks against the common enemy, often referred to as “Darwinism” (even when it refers to cosmology, not biology!)

If ID is to be taken seriously as science (and it could be, potentially), then it must confront its own internal contradictions. I’m not so concerned about the YEC thing, but contradictions over the very definition of key concepts like “specified complexity” must be resolved. If Meyer is contradicting Dembski, Dembski needs to point that out (he may have done, but I’m not aware of it). If Meyer thinks Dembski’s definition is not viable (and I agree) he needs to make that argument. If Dembski concedes, as he seems to, that evolutionary search can be better than random search under certain circumstances, he needs to demonstrate why those circumstances do not pertain to biological evolution.

And if IDists acknowledge that evolutionary search works fine for “microevolution”, as most seem to, and if their case is that certain steps (the ribosome? The bacterial flagellum? “new body parts”?) are beyond the reach of evolutionary search (Behe’s case) then they need to get behind Behe, whose conclusions run counter to many positions taken by ID proponents (common descent, for instance), and stop making arguments based on the falsified principle that evolutionary processes cannot do a great deal.

And Behe needs to confront the falsified principle of his that “IC” features are unevolvable by Darwinian mechanisms, because even my little exercise demonstrates that they are.

27. And Behe needs to confront the falsified principle of his that “IC” features are unevolvable by Darwinian mechanisms, because even my little exercise demonstrates that they are.

Everyone needs to get it into their head that evolutionary mechanisms do not boil down to “RM + NS”! Drift and Recombination are the unsung heroes of the piece. Adaptation gets all the credit, but these latter processes make a mahoosive difference to the ‘evolutionary algorithm’ – perturbation from local maxima, crossing valleys and – in the case of Recombination – networking and parallel-processing the ‘search’, creating new variants, creating gene duplicates and being a direct source of ‘IC’ combinations that cannot be reached by serial point mutation. Let’s hear it for Recombination!

28. Allan Miller: Everyone needs to get it into their head that evolutionary mechanisms do not boil down to “RM + NS”! Drift and Recombination are the unsung heroes of the piece. Adaptation gets all the credit, but these latter processes make a mahoosive difference to the ‘evolutionary algorithm’ – perturbation from local maxima, crossing valleys and – in the case of Recombination – networking and parallel-processing the ‘search’, creating new variants, creating gene duplicates and being a direct source of ‘IC’ combinations that cannot be reached by serial point mutation. Let’s hear it for Recombination!

Except recombination is also supposed to be random- gene duplications are supposed to be random- as a matter of fact ALL genetic change is a mistake/ error/ accident.

That said, no one has any idea if any mutations beyond point mutations are random in any sense of the word. As far as you know, recombinations are a DESIGN mechanism.

29. damitall:
The question of just when and to what extent a “designer” acted/is acting is one that has been posed many times to the ID gurus. And JoeG.
I don’t recall there ever being an unequivocal answer – or even a clear “dunno”
I think it symptomatic of the intellectual disarray in the ID camp, which has been pointed out several times in this forum

LoL! SCIENCE- that is what science is for-> to help us answer those questions!

Heck your position doesn’t have any answers and it has almost all of the resources.

30. Elizabeth:
Well, I do think there is a great deal of heterogeneity in ID positions (after all, they include YEC!).

However, what I find odd (and frustrating) is that rather than trying to figure out what is consistent across those positions, and what is inconsistent, they close ranks against the common enemy, often referred to as “Darwinism” (even when it refers to cosmology, not biology!)

If ID is to be taken seriously as science (and it could be, potentially), then it must confront its own internal contradictions. I’m not so concerned about the YEC thing, but contradictions over the very definition of key concepts like “specified complexity” must be resolved. If Meyer is contradicting Dembski, Dembski needs to point that out (he may have done, but I’m not aware of it). If Meyer thinks Dembski’s definition is not viable (and I agree) he needs to make that argument. If Dembski concedes, as he seems to, that evolutionary search can be better than random search under certain circumstances, he needs to demonstrate why those circumstances do not pertain to biological evolution.

And if IDists acknowledge that evolutionary search works fine for “microevolution”, as most seem to, and if their case is that certain steps (the ribosome? The bacterial flagellum? “new body parts”?) are beyond the reach of evolutionary search (Behe’s case) then they need to get behind Behe, whose conclusions run counter to many positions taken by ID proponents (common descent, for instance), and stop making arguments based on the falsified principle that evolutionary processes cannot do a great deal.

And Behe needs to confront the falsified principle of his that “IC” features are unevolvable by Darwinian mechanisms, because even my little exercise demonstrates that they are.

What does the theory of evolution have in the way of mathematically rigorous definitions- that way we can compare?

And your demonstration has nothing to do with biology nor anything Behe claimed.

And AGAIN, as far as you know, an evolutionary search is a DESIGN mechanism. Please stop the equivocation.

31. Hi Liz-

IDists await your paper to appear in a peer-reviewed journal.

Good luck with that…

32. Well, I do think there is a great deal of heterogeneity in ID positions (after all, they include YEC!).

Oh yes. They’re certainly a broad church.

In addition to what you’ve said above, they (including Behe) need to understand probabilities and the Texas sharpshooter fallacy. I think the underlying problem is that it does seem genuinely improbable to them that nature could “blindly” hit the specific target, which is, of course, us.

33. And in a way, I can see where they’re coming from – we, and all our cells, are very complex indeed, and it is difficult, if one knows very little about chemistry and physics, to imagine how all that complexity came to be.

However, with even quite basic understanding (say, BSc level), particularly of those intramolecular forces, and the associated laws, as very eloquently described by Mike E, in this thread and elsewhere, the range of possibilities becomes much clearer. Unbiased reading of the relevant literature also clarifies, but only if the reader has that understanding.

And here Elizabeth has clarified the possibility of possibilities, as it were, in a very easily understood manner, with specific references to and quotes from Dembski’s writings and claims.

If nothing else, it will encourage IDists to look to their claims, and definitions of terms. I foresee quite a bit of semantic fun to come.

34. Joe:

Except recombination is also supposed to be random- gene duplications are supposed to be random- as a matter of fact ALL genetic change is a mistake/ error/ accident.

So what?

That said, no one has any idea if any mutations beyond point mutations are random in any sense of the word.

By what method do you determine that point mutations are random and all other genetic changes are not? All mutations and recombinations are random in some sense of the word.

If you want me to accept that a designer is on hand at every single meiosis to create SPECIFIC combinations for some unspecified purpose, then guide that egg down the fallopian tube to meet that very sperm (similarly guided) from that particular stranger she met in a bar a couple of evenings later, so that the offspring of that union could undergo a particular gametogenetic recombination 20 years later … I’m afraid it ain’t gonna happen. And don’t say “nice strawman” – this is where you are heading with this “is anything really random?” garbage.

35. I think Joe G may fall in the second camp. Anyone else?

Since humans are actively using ID to generate product otherwise not producible by non-ID forces, it’s fairly obvious ID is necessarily interventionist.

36. Liz,

I’m afraid that I don’t see how this can work. Are you still calculating chi as follows?
chi = -log2[10^160 * N / 2^500]
Even with N as low as 1, this falls short of the threshold of chi ≥ 1. The length of the sequence needs to be at least 533 to hit the threshold, assuming that N = 1. If N is larger, the sequence needs to be longer.

If my math is right, sorry for being a party pooper.
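[The arithmetic above is easy to check. A minimal sketch in Python, computing in log space so the 10^160 and 2^500 terms never have to be formed directly; the `chi` helper is just for illustration:

```python
import math

def chi(length, N, exp10=160):
    """chi = -log2(10^exp10 * N / 2^length), computed in log space."""
    return length - exp10 * math.log2(10) - math.log2(N)

print(round(chi(500, 1), 1))  # about -31.5: well short of the chi >= 1 threshold
print(round(chi(532, 1), 1))  # about 0.5: length 532 still falls short
print(round(chi(533, 1), 1))  # about 1.5: length 533 is the first to clear it
```

So with a 10^160 factor, a 500-toss sequence cannot reach chi ≥ 1 for any N ≥ 1, and 533 is indeed the minimum length at N = 1. — Ed.]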

37. William J. Murray: Since humans are actively using ID to generate product otherwise not producible by non-ID forces, it’s fairly obvious ID is necessarily interventionist.

I don’t think that was the question, William. I think the question is whether ID proponents propose that the Intelligent Designer responsible for humans designed a universe in which life would evolve spontaneously, or whether the ID intervened in his/her created universe at key points (maybe OOL, maybe the ribosome, maybe bacterial flagella) to make sure it occurred at all and/or took its intended form.

I don’t think that was the question, William. I think the question is whether ID proponents propose that the Intelligent Designer responsible for humans designed a universe in which life would evolve spontaneously, or whether the ID intervened in his/her created universe at key points (maybe OOL, maybe the ribosome, maybe bacterial flagella) to make sure it occurred at all and/or took its intended form.

I think all ID ultimately comes from the same source, so the same ID source that is proposed to have kicked things off is that which is currently operating through humans. So, it’s not really “ID intervention”, but ID as an ongoing causative agency.

39. William J. Murray: I think all ID ultimately comes from the same source, so the same ID source that is proposed to have kicked things off is that which is currently operating through humans. So, it’s not really “ID intervention”, but ID as an ongoing causative agency.

OK, I think I see what you may be getting at. Fair enough.

But that wouldn’t be a question that science could resolve, I don’t think – or do you think it could?

40. Ah, yes. I think you typed 10^160 earlier, which I assumed to be your guesstimate for 10^120 * φ_S(T). So if N/2^500 is P(T|H), aren’t you missing the term φ_S(T)? Not that this nullifies the point of your experiment — It seems that an estimate of φ_S(T) is bound to be quite arbitrary, and any reasonable value can be compensated for by using a longer sequence if necessary.

41. R0b:
Ah, yes. I think you typed 10^160 earlier, which I assumed to be your guesstimate for 10^120 * φ_S(T). So if N/2^500 is P(T|H), aren’t you missing the term φ_S(T)? Not that this nullifies the point of your experiment — It seems that an estimate of φ_S(T) is bound to be quite arbitrary, and any reasonable value can be compensated for by using a longer sequence if necessary.

Well, 1/2^500 is P(T|H), and φ_S(T)=N, where N is the number of patterns that satisfy the specification or better.

ETA: and apologies for earlier typo!
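[With the corrected 10^120 factor and the definitions just given — P(T|H) = 1/2^500 and φ_S(T) = N — Dembski’s chi = -log2[10^120 * φ_S(T) * P(T|H)] can be sketched the same way:

```python
import math

def chi(length, N):
    """Dembski's chi = -log2(10^120 * phi_S(T) * P(T|H)),
    with P(T|H) = 1/2^length and phi_S(T) = N, per the comment above."""
    return length - 120 * math.log2(10) - math.log2(N)

print(round(chi(500, 1), 1))       # about 101.4
print(round(chi(500, 2**100), 1))  # about 1.4: still above the threshold
```

So on these definitions a 500-toss sequence clears chi ≥ 1 comfortably, provided fewer than roughly 2^100 (about 10^30) patterns satisfy the specification. — Ed.]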

42. Elizabeth: OK, I think I see what you may be getting at. Fair enough.

But that wouldn’t be a question that science could resolve, I don’t think – or do you think it could?

Again, this depends on your frame of reference. If you are looking forward in time, you can see that the range of what CAN happen is both enormous, and difficult or impossible to predict. We know that something will evolve, but we can’t predict exactly what it is or what pathway might be followed to get there. So looking forward, we see a somewhat trackless sea of possibilities.

Conversely, looking backwards we see sequences entirely contingent on fabulously unlikely coincidences, piled atop one another as far back as we can see. The conclusion of deterministic Design is almost unavoidable. What MIGHT happen is pretty much wide open. What DID happen is so improbable that a Designer SEEMS required.

43. William J Murray: “I think all ID ultimately comes from the same source, so the same ID source that is proposed to have kicked things off is that which is currently operating through humans. So, it’s not really “ID intervention”, but ID as an ongoing causative agency.”

If it’s “ID” as an OOL, then what we see as “evolution” is “ID as an ongoing causative agency” which would appear to us exactly as the “evolution” we see.

How could you even detect ID then?

44. Flint: Again, this depends on your frame of reference. If you are looking forward in time, you can see that the range of what CAN happen is both enormous, and difficult or impossible to predict. We know that something will evolve, but we can’t predict exactly what it is or what pathway might be followed to get there. So looking forward, we see a somewhat trackless sea of possibilities. Conversely, looking backwards we see sequences entirely contingent on fabulously unlikely coincidences, piled atop one another as far back as we can see. The conclusion of deterministic Design is almost unavoidable. What MIGHT happen is pretty much wide open. What DID happen is so improbable that a Designer SEEMS required.

Are you trying to illustrate the sharpshooter fallacy?

45. Joe G: LoL! SCIENCE- that is what science is for-> to help us answer those questions!

Heck your position doesn’t have any answers and it has almost all of the resources.

There’s nothing stopping any of you IDists from doing all the science you want to do. Millions of dollars are spent by IDists on promoting ID and bashing evolution and the people who accept and study it, but no science is done with those millions. IDists have plenty of resources AND you have just as much access to the money that real scientists have access to, but to get any of that money you have to do science, not just gripe about science.

A lot is contributed to science by self-educated, nature loving people who have little to no college education and have no money handed to them for any of their research and discoveries. Millions of people around the world find, document, and photograph important information about nature every day and share it with professional scientists, and the vast majority of them do it without being paid in any way. Many even pay to accompany and help professional scientists who are doing their work in the wild.

What have you contributed to science lately, if ever? What novel research have you done lately, if ever? What grants have you applied for lately, if ever? What scientific expeditions have you been on lately, if ever? What volunteer work have you done for science lately, if ever? What papers have you submitted to scientific journals lately, if ever? What novel discoveries have you made lately, if ever? What science are you actually doing? And no, bashing science doesn’t count as doing science.

46. Allan Miller:
Joe: So what?

By what method do you determine that point mutations are random and all other genetic changes are not? All mutations and recombinations are random in some sense of the word.

If you want me to accept that a designer is on hand at every single meiosis to create SPECIFIC combinations for some unspecified purpose, then guide that egg down the fallopian tube to meet that very sperm (similarly guided) from that particular stranger she met in a bar a couple of evenings later, so that the offspring of that union could undergo a particular gametogenetic recombination 20 years later … I’m afraid it ain’t gonna happen. And don’t say “nice strawman” – this is where you are heading with this “is anything really random?” garbage.

How did you determine that all mutations and recombinations are random in some sense of the word? By what method?

Also the designer just had to design the correct genetic programming/genetic algorithm and the software does the rest- you don’t need a computer programmer sitting at your computer, programming as you type, do you?

47. Flint:

Conversely, looking backwards we see sequences entirely contingent on fabulously unlikely coincidences, piled atop one another as far back as we can see. The conclusion of deterministic Design is almost unavoidable. What MIGHT happen is pretty much wide open. What DID happen is so improbable that a Designer SEEMS required.

Who is this “we” of whom you speak? I don’t see anything ‘unlikely’. Taking as a starting point a system capable of replication (whose origin may or may not prove to be ‘unlikely’), then all it really has to do is replicate. If it can, it will, and if it can’t, it’s not a replicator. All it then needs is something, somewhere, in accessible genetic space, to improve its replicational capacity in some way. If it doesn’t find it, it can just carry on replicating, unless something that has found it arises and squashes it like a bug.

Life doesn’t have to leap around like some kind of ‘improbability drive’, piling up unlikelihoods. It just needs to replicate, and if in doing so it happens upon an ‘improvement’, so much the better. That improvement comes to dominate the population of replicators. From where they stand, all they have to do is locate another ‘improvement’ (or wander the metaphorical space until ‘captured’ by one). The cumulative probabilities may be astronomical, just as the odds against you or me sitting here, as opposed to two of the countless trillions of possible individuals human genetic combinations may have thrown up.

But here we are and, at the end of a 4 billion year chain of replication that did not fizzle out, something had to exist. It’s us, and fungi, and sharks, and paramecium, and streptococcus, and aardvarks and maples … but for a couple of contingencies (endosymbiosis and sex), it could have remained simple prokaryotes forever. Those are the main ‘improbabilities’. Everything else is just ‘local’ wandering. You can go a hell of a long way by exploring the neighbourhood of where you happen to be at the time.