# Creating CSI with NS

Imagine a coin-tossing game.  On each turn, players toss a fair coin 500 times.  As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads.  The person with the highest product wins.
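
For concreteness, the scoring rule can be sketched in a few lines (Python here, since the MatLab script itself is posted separately; `runs_product` is my name for it, not a function from the original script):

```python
from itertools import groupby

def runs_product(tosses):
    """Product of the lengths of all runs of heads ('H'); no heads scores 1."""
    product = 1
    for symbol, run in groupby(tosses):
        if symbol == "H":
            product *= sum(1 for _ in run)
    return product

# The example from the post: runs of heads of length 1, 3, 1, 4.
print(runs_product("HTTHHHTHTTHHHHTTT"))  # → 12
```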

In addition, there is a House jackpot.  Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible series of coin-tosses.  However, I’m not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. However, if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI.  My ballpark estimate says it has.

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.
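
The loop described above might look something like this minimal Python sketch (the original is in MatLab and is not reproduced at this point; the mutation rate, the seeding, and the survivors-plus-one-offspring scheme are my assumptions, not taken from the script):

```python
import random
from math import log10

random.seed(0)                      # reproducible run
POP, GENOME = 100, 500              # 100 series of 500 "coin tosses"

def runs_product(genome):
    """Product of the lengths of the runs of heads (1s) in a genome."""
    product, run = 1, 0
    for toss in genome + [0]:       # sentinel tail flushes the final run
        if toss == 1:
            run += 1
        elif run:
            product *= run
            run = 0
    return product

def mutate(genome, rate=0.01):
    """Flip each toss independently with probability `rate` (an assumption)."""
    return [1 - t if random.random() < rate else t for t in genome]

def generation(population):
    """Cull the 50 lowest products; each survivor leaves a mutated offspring."""
    survivors = sorted(population, key=runs_product, reverse=True)[:POP // 2]
    return survivors + [mutate(g) for g in survivors]

population = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(100):
    population = generation(population)

best = max(map(runs_product, population))
print(f"best product after 100 generations: about 10^{log10(best):.1f}")
```

Because the survivors are carried over intact, the best product never decreases from one generation to the next.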

I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MatLab, and will post the script below.  Sorry I don’t speak anything more geek-friendly than MatLab (well, a little Java, but MatLab is way easier for this).

## 529 thoughts on “Creating CSI with NS”

1. Umm first you have to understand CSI…

2. Joe G,

I have a problem understanding the “Specification” part of CSI.

Why is it done after the fact?

ID says if a part serves a function, that means it is specified, but that to me looks circular.

Looking forward towards a target I could accept as a specification, but looking back from a target and saying, “This must be what we were aiming for”, is a hard sell.

3. As I understand Dembski’s paper, he is trying to find a way of “specifying” a “special” subset post hoc, so that we can look at a pattern, without knowing what the “intended” target is, and yet recognise that the pattern is one of a small but special subset.

His proposal is to use “compressibility” – if a pattern can be described as simply as, or more simply than, the observed pattern, then it is part of the “special” subset. Thus we can estimate the size of the subset of which the observed pattern forms a part, and the proportion that subset forms of the whole. And if that proportion is so small that hitting a member of it by some process in which all patterns are equiprobable is unlikely given the number of events in the universe, then we can infer that it must have been designed.

I have in fact given a pre-specification to make life easier (not for me – for ID proponents!) in that I have said that my target is a member of that small subset of coin-toss series in which the product of the size of each run of heads exceeds a very large number.

I don’t, in advance, know what the jackpot series is, but I do know that I’m unlikely to hit it without natural selection. However, using natural selection, I’m confident I can find one well before I’ve used up the probabilistic resources of the universe!

4. Right now I’m up to 3.8319e+58.

Improvements are becoming rarer. I forgot to output the number of generations, so I don’t know how long I’ve been going. I’ll check on it in a while.

5. Elizabeth: Right now I’m up to 3.8319e+58.

Improvements are becoming rarer. I forgot to output the number of generations, so I don’t know how long I’ve been going. I’ll check on it in a while.

One way to get products greater than 10^60 is to have a run of 100 groups of THHHH; in which case you would have a score of 4^100 = 1.6 x 10^60.
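
The arithmetic here is easy to verify (a quick Python check, not part of the original script):

```python
from itertools import groupby

# 100 repeats of "THHHH" fill the 500 tosses exactly: 100 runs of four heads.
tosses = "THHHH" * 100
product = 1
for symbol, run in groupby(tosses):
    if symbol == "H":
        product *= sum(1 for _ in run)

print(product == 4 ** 100, product > 10 ** 60)  # True True: about 1.6 x 10^60
```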

6. Yes, that seems to be one that gets over the bar. I don’t know how many others there are though.

Current best is: 4.7898e+58

7. Perhaps by CSI above you mean reaching the CSI threshold that indicates a design inference is warranted.

Given that, I don’t see anything wrong in principle with your reasoning or your use of CSI in this example. I’m just not sure what claim of Dembski’s you think your example refutes, or what you think a process of culling the bottom performers is representative of. Given the title of the thread, it appears you are equating that culling process with NS, but it’s certainly not representative of natural selection; you’ve described a process deliberately designed to increase the CSI to a maximum potential.

Natural selection doesn’t cull the CSI-producing underachievers; its culling process has nothing to do with CSI per se.

8. Elizabeth: “As I understand Dembski’s paper, he is trying to find a way of “specifying” a “special” subset post hoc, so that we can look at a pattern, without knowing what the “intended” target is, and yet recognise that the pattern is one of a small but special subset.”

I would have no problem with the term “Described” as opposed to “Specified” but Dembski and other IDists can use “Specified” to hint that a “specific” target was intended.

A “Specification” in engineering describes an intended “pre-design” outcome while ID uses it to describe an outcome after the fact.

9. Joe G: that seems to be one that gets over the bar. I don’t know how many others there are though.

Walk us through an example with calculated values Joe, so we’re all on the same page.

10. Since this is a thread on GAs I will post a link to my word generator. Unlike the basic Weasel program it does not have a target. Its fitness function is based on whether letters appear in positions that occur in dictionary words. There is no preference given to matching complete words, but it does score increased fitness for combinations of letters that appear in words.

Because it does not score fitness for completed words, it is as likely to produce pronounceable neologisms as it is to produce dictionary words. I’m rather biased, but I think the ability to produce neologisms is evidence that GAs can invent.

It’s produced several drug trade names that are not in its scoring dictionary.

http://itatsi.com

11. Elizabeth: Yes, that seems to be one that gets over the bar. I don’t know how many others there are though.
Current best is: 4.7898e+58

So I guess with this scheme of 100 groups of a single tail and four heads we could have THHHH or HHHHT; in which case there are 2 out of 2^500 chances of getting 10^60.

So that is a probability of 2^(-499) = 6.1 x 10^(-151) of obtaining that score.
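
A quick check of that arithmetic (Python, for convenience, worked in logs since 2^500 is far too large for a direct float):

```python
from math import log10

# Two sequences out of 2**500 fit this particular scheme, so p = 2 / 2**500.
favourable = 2
log_p = log10(favourable) - 500 * log10(2)   # = -499 * log10(2)
print(f"p = 2^-499 = 10^{log_p:.2f}")        # about 10^-150.21, i.e. 6.1e-151
```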

12. William J. Murray:
Given the title of the thread, it appears you are equating that culling process with NS, but it’s certainly not representative of natural selection; you’ve described a process deliberately designed to increase the CSI to a maximum potential.

No. Liz has described a process deliberately designed to increase the product of the runs of H’s. This is a simulation that is exactly representative of natural selection, where rounds with the phenotype *high product of H-runs* have higher fitness than rounds with the phenotype *low product of H-runs*. The fitter phenotypes also have higher CSI, i.e. the two traits (*high product of H-runs* and *high CSI*) are mechanistically correlated, and thus selection acts jointly on both traits. Just like an individual with a functioning immune system may have higher fitness than an individual without a functioning immune system, where the trait that confers higher fitness (a functioning immune system) may also have higher CSI.

Natural selection doesn’t cull the CSI-producing underachievers; its culling process has nothing to do with CSI per se.

Natural selection culls the CSI-producing underachievers in all those situations where the traits that confer higher fitness also have higher CSI. That’s exactly the example Liz is using.

13. I have a problem understanding the “Specification” part of CSI.

Strange, if you could just support your position you wouldn’t even have to worry about any part of CSI

Why is it done after the fact?

How do you think archaeologists and forensic scientists do it? “Minority Report” with precogs is science-fiction.

Do you think we were there with the designer(s)?

ID says if a part serves a function, that means it is specified, but that to me looks circular.

Unfortunately that isn’t all ID says about that. For example if any ole DNA sequence could produce any ole protein, there wouldn’t be any specificity. However if there is a protein of say 200 amino-acids that can tolerate only one AA substitution, then that has a high specificity.

Looking forward towards a target I could accept as a specification, but looking back from a target and saying, “This must be what we were aiming for”, is a hard sell.

Well that is how science operates- we observe some result and try to explain it- what it is, what it does and how it came to be the way it is.

14. Toronto: Joe G,

I have a problem understanding the “Specification” part of CSI.

Why is it done after the fact?

ID says if a part serves a function, that means it is specified, but that to me looks circular.

Looking forward towards a target I could accept as a specification, but looking back from a target and saying, “This must be what we were aiming for”, is a hard sell.

I have been pointing this out to IDists like Joe G for years. What ID calls a ‘specification’ is actually only an after-the-fact description of a biological entity. To be a specification it must be a before-the-fact design document that the end product is constructed to meet.

Arguing this point was one of the things that got me banned at UD. I’ve yet to see any IDist honestly address the problem.

15. Natural selection culls the CSI-producing underachievers in all those situations where the traits that confer higher fitness also have higher CSI. That’s exactly the example Liz is using.

If that is what she means in her example, where does she represent situations where greater fitness = decreased CSI?

16. I said: “Natural selection culls the CSI-producing underachievers in all those situations where the traits that confer higher fitness also have higher CSI. That’s exactly the example Liz is using.”

William J. Murray: If that is what she means in her example, where does she represent situations where greater fitness = decreased CSI?

Who said that she does? Why would she need to represent that? She is trying to show that NS *can* produce CSI. Simulating a situation where higher fitness is correlated to higher CSI is completely sufficient to achieve that. Unless you think that this situation can never occur in living organisms?

17. Joe G: “Unfortinately that isn’t all what ID says about that. For example if any ole DNA sequence could produce any ole protein, there wouldn’t be any specificity. However if there is a protein of say 200 amino-acids that can tolerate only one AA substitution, then that has a high specificity.”

That would mean that “any” bit pattern is specific since if you change any bit, the “information” is not the same.

On the other hand, I could show you JPEGs of Mitt Romney at different compression ratios, but all of them are recognizable, yet they all have completely different bit patterns.

With your definition of CSI, a compressed picture of Mitt Romney is not “specifically” a picture of him since all bits in the “information” can change without changing the image of Romney from one JPEG to another.

18. Who said that she does? Why would she need to represent that? She is trying to show that NS *can* produce CSI.

In the first place, no new complexity or specification is being added by the culling process; that is being added by her point mutation process. IOW, if there are no point mutations, the culling process only culls the bottom performers until virtually nothing is left. But, if we allow Elizabeth to sneak her point mutations in as “part of the NS process”, then she can only accomplish her goal through an iterative culling process in conjunction with random* point mutations.

She’s trying to get to a goal “within the lifetime of the universe”. If we have infinite time and resources, then yes, eventually monkeys will type Shakespeare; but the reason the culling process is important is because that speeds the monkey-to-bard process up. Nobody is arguing that chance* ***cannot*** produce CSI; they argue that in principle it is not a sound explanation for CSI above a certain level, given limited time and resources.

This makes how the culling process is characterized very important, because if the culling process is also culling out CSI overachievers on occasion (where increased fitness matches decreased CSI), that lengthens the amount of time necessary for the process to acquire the target level of CSI that implies a design inference. In fact, it makes the amount of time required (iterative generations) indistinguishable from chance, unless the culling process is, for some reason, gamed (skewed) towards increased CSI.

19. William J. Murray:
Perhaps by CSI above you mean reaching the CSI threshold that indicates a design inference is warranted.

Well, Dembski’s chi metric actually incorporates his universal probability bound, and he suggests that if the value of chi exceeds 1, we can reject the non-Design null.
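
For reference, the metric being discussed can be sketched in code, assuming the formula from Dembski’s 2005 paper “Specification: The Pattern That Signifies Intelligence”, chi = -log2(10^120 * phi_S(T) * P(T|H)); the numbers plugged in below are purely illustrative assumptions, not values computed from the coin-toss game:

```python
from math import log2

def chi(phi, p):
    """Dembski's specified complexity: chi = -log2(10**120 * phi * p), where
    phi counts patterns at least as simple as the observed one and p = P(T|H)
    is the chance of hitting the target. chi > 1 is his design threshold."""
    return -(120 * log2(10) + log2(phi) + log2(p))

# Illustrative assumption: 10**6 of the 2**500 equiprobable series clear the
# jackpot, and the specification is simple enough that phi = 10**5.
print(chi(10 ** 5, 10 ** 6 / 2 ** 500) > 1)   # True, with roughly 65 bits to spare
```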

So, yes, I’m trying to get to that threshold. I think I’m there, actually, but the iterations haven’t quite hit my jackpot yet. I’m not sure whether they will, with the deterministic culling system I’ve got. Stochastic factors sometimes get a promising failure into play. But I’ll leave it running overnight.

Given that, I don’t see anything wrong in principle with your reasoning or your use of CSI in this example. I’m just not sure what claim of Dembski’s you think your example refutes,

That if we observe a pattern that exhibits CSI>1 we must reject non-Design.

or what you think a process of culling the bottom performers is representative of.

Natural selection i.e. heritable variance in reproductive success. The poor performers don’t get to reproduce. Consider a population with limited resources, in which only the best foragers survive to breed. The better everyone gets, the better the best have to be to survive.

Given the title of the thread, it appears you are equating that culling process with NS,

Yes, of course.

but it’s certainly not representative of natural selection;

It’s exactly representative of natural selection.

you’ve described a process deliberately designed to increase the CSI to a maximum potential.

No. I’ve designed a process that maximises the product of the sizes of the runs of heads. That’s the function I specified. And very few series will have a function value larger than my jackpot threshold. That means, automatically, that the CSI of any winning series will be very high. Just as, if we specified that a sequence of nucleotides had to result in a highly effective protein, there might be very few sequences that coded for that protein above a given threshold. Dembski suggests that when we observe a pattern, we compute the number of patterns that perform as well, or better, and compute the proportion they are of the total number of theoretically possible patterns.

That’s exactly what I’m doing.

Natural selection doesn’t cull the CSI-producing underachievers; its culling process has nothing to do with CSI per se.

Indirectly it does, because the specification that the “semiotic agent”, as Dembski calls it, chooses will be related to the function of interest. If it’s a protein, then it will be the function of that protein, so if we select for that protein, we will also, by definition, be selecting for CSI, where the S part is the function of that protein.

In my little exercise, I have a “genome” – the series of coin tosses, a “phenotype” – the string of head-run-totals, and a “function” – the product of that string. I’m selecting for that function. Only a very small subset of possible genomes can perform that function above the threshold I set. That means that if I manage to find one that does, I have a genome that exhibits CSI.

20. Rich: Walk us through an example with calculated values Joe, so we’re all on the same page.

That’s my comment you were quoting, Rich. I’m happy to walk you through it, though.

An example of where increased fitness might result in reduced CSI calculated on a DNA sequence could be where a protein is redundant in the current environment, and therefore uses resources that could be better used on useful proteins. And there might be a lot of mutations that would disable the protein, and very few that did not.

So if you calculated CSI on basis of the small number of sequences that produced a viable protein, as against the many sequences that did not, an organism with a corrupted sequence might be fitter than one with an intact sequence, yet the CSI, calculated on the protein, would be reduced.

Of course this raises a key conceptual issue with regard to CSI – if the specification is simply “a fitter organism” then of course anything that increases fitness will increase CSI. But if you calculate it on some sub-function, like protein synthesis, then you could have greater fitness in organisms with less CSI.

Here, I am computing Functional Complexity, where the better the function is performed, the fitter my “phenotypes”.

But that does not invalidate what I am doing by Dembski’s criteria, and in fact, because my specification is a function that serves fitness, I’m doing something more biologically relevant than, say, computing the Functional Complexity of a protein that is not necessarily contributing to the fitness of the phenotype.

William J. Murray: In the first place, no new complexity or specification is being added by the culling process; that is being added by her point mutation process. IOW, if there are no point mutations, the culling process only culls the bottom performers until virtually nothing is left. But, if we allow Elizabeth to sneak her point mutations in as “part of the NS process”, then she can only accomplish her goal through an iterative culling process in conjunction with random* point mutations.

Sorry, my statement was imprecise, my bad! What I meant was this: Liz is trying to show that random point mutations AND natural selection *can* produce CSI. And Liz isn’t *sneaking* anything in anywhere. She clearly states in the OP: “…starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection,…”

Nobody is arguing that chance* ***cannot*** produce CS; they argue that in principle it is not a sound explanation for CSI above a certan level, given limited time and resources.

I have no idea what you mean by this or how it pertains to Lizzie’s OP. It is quite obvious – and you acknowledged as much in your first comment on this thread – that by CSI she means reaching a probability threshold that Dembski claims indicates that a design inference is warranted.

This makes how the culling process is characterized very important, because if the culling process is also culling out CSI overachievers on occasion (where increased fitness matches decreased CSI), that lengthens the amount of time necessary for the process to acquire the target level of CSI that implies a design inference.

You completely misunderstand the example, and my explanations of it. In a situation where the trait that confers higher fitness also has higher CSI, the culling process of natural selection CANNOT cull CSI-overachievers. Not even occasionally. By the very definition and the relation of the terms *fitness* and *natural selection*.

23. So back to the topic- what does the OP have to do with natural selection, which pertains to biological populations?

24. All you’re doing, Elizabeth, is equivocating fitness with CSI. If you’re going to assume “increased fitness” = “increased CSI” (which is all your example does, because the only distinguishing quality of your target is a CSI value, and the only consideration of your culling process is to achieve that goal), then all you have created here is an example of how an intelligently designed process can acquire high levels of CSI even utilizing random mutations by gaming the system in favor of high CSI outcomes.

25. William J. Murray:
All you’re doing, Elizabeth, is equivocating fitness with CSI. If you’re going to assume “increased fitness” = “increased CSI” (which is all your example does, because the only distinguishing quality of your target is a CSI value, and the only consideration of your culling process is to achieve that goal), then all you have created here is an example of how an intelligently designed process can acquire high levels of CSI even utilizing random mutations by gaming the system in favor of high CSI outcomes.

William- it is a waste of time trying to explain this stuff to these people.

26. I didn’t misunderstand you. I understand that in the case of the higher CSI = better fitness, the higher CSI will be chosen. What her example ignores are the cases of lower CSI = better fitness and the cases where “no variation in CSI” = better fitness.

27. William J Murray: “What her example ignores are the cases of lower CSI = better fitness and the cases where “no variation in CSI” = better fitness.”

This is where you misunderstand ID’s requirement that CSI is linked to “specific” complexity.

Better fitness implies a higher degree of “specified” information since by definition any “random” bit pattern would not be as “specific”.

The “bit window”, has NOT changed and therefore the better the “fitness”, the more “specific” the “information”.

28. Here is a plot I made of sequences with high “products” where all the runs-of-heads are the same length. There will also be sequences that mix runs of different sizes, but this gives an idea of the odds:
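
The plot itself isn’t reproduced here, but the same numbers are easy to tabulate (a Python sketch; the packing scheme of runs of k heads separated by single tails follows the THHHH/HHHHT examples in the comments above):

```python
from math import log10

# For run length k, pack "k heads + one tail" units into the 500 tosses:
# n = 500 // (k + 1) runs, giving a product of k**n.
best_k, best_log = None, -1.0
for k in range(1, 10):
    n = 500 // (k + 1)
    log_product = n * log10(k)
    print(f"run length {k}: {n:3d} runs, product = 10^{log_product:.1f}")
    if log_product > best_log:
        best_k, best_log = k, log_product

print(best_k)   # 4: among uniform schemes, only HHHHT repeated clears 10^60
```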

29. William J. Murray:

I didn’t misunderstand you. I understand that in the case of the higher CSI = better fitness, the higher CSI will be chosen. What her example ignores are the cases of lower CSI = better fitness and the cases where “no variation in CSI” = better fitness.

William, give me a calculated example of a case where lower CSI=better fitness. I don’t necessarily need to see actual numbers, just tell me what would have to be true for this to be the case.

30. William J. Murray:

I didn’t misunderstand you. I understand that in the case of the higher CSI = better fitness, the higher CSI will be chosen. What her example ignores are the cases of lower CSI = better fitness and the cases where “no variation in CSI” = better fitness.

Again: No, it doesn’t ignore anything. The cases where lower CSI = better fitness are irrelevant to her goal, which is to show that random point mutation and natural selection *can* produce CSI: she argues that it does so in cases where higher CSI = better fitness. That there are other cases where random point mutations and natural selection will not produce CSI is not contested here by anyone and is completely irrelevant to her argument.

31. William J. Murray: In the first place, no new complexity or specification is being added by the culling process; that is being added by her point mutation process.

As I’ve tried to explain before, William, it makes no sense to separate these two processes. It’s like saying all the clapping is being done by one hand.

If I do not cull, and I can easily drop that part of my program, I will simply generate, rather ineffectually, a lot of sequences that are no more likely to have high products than any other sequence, although they are quite likely to have a product close to that of their parent. I can easily do this, and show you the result. But the chances of me producing a high-product sequence will be extremely small, and therefore my chances of producing a sequence that exhibits a chi >1 will also be extremely small. In fact that’s pretty well Dembski’s null hypothesis.

But by adding Natural Selection, i.e. by culling the poor performers, parts of sequences that have high products accumulate, and so I am much more likely to achieve chi>1.

If I demonstrate that this is so, i.e. that no culling doesn’t give me chi>1 and culling does, will you be convinced?

IOW, if there are no point mutations, the culling process only culls the bottom performers until virtually nothing is left.

Obviously we need the point mutations – that’s the other hand in the clap. Without new variation, no genome can be better than the best of the starting population, which was randomly generated! However, note that I cull the bottom 50, then “breed” from the top. If I do this with no mutation then eventually I will end up with a population that is identical to the best performer of the original randomly generated sequences. I can demonstrate all of these if you like: NS but no RM; RM but no NS; neither NS nor RM. But only RM+NS will generate a sequence with chi>1.

And the take home message, IMO, is that CSI is a potentially useful concept, but it’s not the “pattern that signifies intelligence”, it’s the pattern that signifies heritable variance in reproductive success, of which intentional intelligence, I suggest, is a subset.

But, if we allow Elizabeth to sneak her point mutations in as “part of the NS process”, then she can only accompllish her goal through an iterative culling process in conjunction with random* point mutations.

Of course. But I’m not “sneaking” in anything. I’m simply showing that RM alone can’t produce high CSI, but that RM+NS can. Obviously NS alone, can’t, because NS is heritable variation in reproductive success, and that variance has to come from somewhere! In reality you can’t have NS without some source of variance. In my exercise we start with a population of 100 variants, but I could, if you like, start with 100 identical genomes. Clearly, then, without RM, there will be no NS, because all the genomes will have exactly the same fitness. That’s why I say that separating them is so misleading.

She’s trying to get to a goal “within the lifetime of the universe”. If we have infinite time and reasources, then yes, eventually monkeys will type shakespeare; but the reason the culling process is important is because that speeds the monkey to bard process up.

Exactly. Which is why monkeys-with-typewriters is irrelevant to any criticism of Darwinian evolution!

Nobody is arguing that chance* ***cannot*** produce CS; they argue that in principle it is not a sound explanation for CSI above a certan level, given limited time and resources.

Well, if you are using “chance” to mean “no-design” (as per your earlier definition) then, yes, people – most ID proponents – are arguing precisely that. It’s the basis of the whole CSI argument. However, if you are using “chance” to mean “random mutation” then, yes, random mutation alone won’t produce high levels of CSI, if the variants never vary in reproductive success i.e. if there is no NS. However, if the variants do confer variation in reproductive success, then we have NS, and a substantial probability of high CSI.

This makes how the culling process is characterized very important, because if the culling process is also culling out CSI overachievers on occasion (where increased fitness matches decreased CSI), that lengthens the amount of time necessary for the process to acquire the target level of CSI that implies a design inference. In fact, it makes that amount of time required (iterative generations) indistinguishable form chance, unless the culling process is, for some reason, gamed (skewed) towards increased CSI.

The culling process will always be “skewed” towards CSI, if CSI is calculated on the function the sequence serves the phenotype. If it isn’t, then not necessarily. But the whole point of CSI as a concept is to explain the extraordinarily complex functions we see being carried out in organisms to the benefit of that organism. So why would you want to specify (for your S) something irrelevant to phenotypic function?

William, I fear you will think I am “obfuscating” above, but I am not. Please read my post very carefully. The points I am making go to the heart of the ID vs Evo debate.

And if you’d like me to demonstrate those alternative runs, I’m more than willing to do so.

32. Joe G: William- it is a waste of time trying to explain this stuff to these people.

So what are you doing here?

33. William J. Murray:
If you’re going to assume “increased fitness” = “increased CSI” (which is all your example does, because the only distinguishing quality of your target is a CSI value, and the only consideration of your culling process is to achieve that goal), then all you have created here is an example of how an intelligently designed process can acquire high levels of CSI even utilizing random mutations by gaming the system in favor of high CSI outcomes.

Let me say it again, because it didn’t seem to sink in the first time: The distinguishing quality of the target, i.e. the selected trait, is NOT CSI, it is: *high product of H-runs*. Liz has described a process deliberately designed to increase the product of H-runs. This is a simulation that is exactly representative of natural selection, where rounds with the phenotype *high product of H-runs* have higher fitness than rounds with the phenotype *low product of H-runs*. The fitter phenotypes also have higher CSI, i.e. the two traits (*high product of H-runs* and *high CSI*) are mechanistically correlated, and thus selection acts jointly on both traits.

34. So now we’re just going to assume that higher CSI = more fit, and I have to demonstrate the converse? Wheee! The burden has been shifted! We go from you making your case, to me having to prove otherwise?

Here we go: lower CSI = some low form of microbial life. Higher CSI = mammals, reptiles, avians. Environmental pressure = introduction of toxic gas that poisons & kills off all creatures that breathe via lungs. It just so happens that the fittest organisms in that environment are ones with much, much less CSI. Note: the lack of CSI isn’t per se what saved them (all sorts of non-lung CSI would have been fine), it was just the increased CSI that happened to develop lung breathing that killed off trillions of high CSI organisms because of a new environmental pressure.

35. Joe,

It’s a waste of time if you’re trying to get them to understand. It’s not a waste of time if it’s the most effective way of killing time during slow periods at work.

36. William J. Murray: So now we’re just going to assume that higher CSI = more fit, and I have to demonstrate the converse? Wheee! The burden has been shifted! We go from you making your case, to me having to prove otherwise?

Could we prove that CSI exists, first and foremost? I don’t know if we’re looking at the emperor’s pants or shirt.

37. William J. Murray:
Joe,

It’s a waste of time if you’re trying to get them to understand. It’s not a waste of time if it’s the most effective way of killing time during slow periods at work.

It’s a waste of time if all any of us are doing are “trying to get them to understand”. Let’s try equally hard to understand each other.

William J. Murray: Here we go: lower CSI = some low form of microbial life. Higher CSI = mammals, reptiles, avians. Environmental pressure = introduction of toxic gas that poisons & kills off all creatures that breathe via lungs. It just so happens that the fittest organisms in that environment are ones with much, much less CSI. Note: the lack of CSI isn’t per se what saved them (all sorts of non-lung CSI would have been fine), it was just the increased CSI that happened to develop lung breathing that killed off trillions of high CSI organisms because of a new environmental pressure.

WOW!

Are you now suggesting that microbes and animals with lungs are in the same gene pool?

And just what does your “counterexample” have to do with what Elizabeth is demonstrating?

Can any of you ID followers actually explain what CSI is?

What does CSI have to do with anything in science?

39. William J. Murray:
So now we’re just going to assume that higher CSI = more fit, and I have to demonstrate the converse? Wheee! The burden has been shifted! We go from you making your case, to me having to prove otherwise?

No. You don’t have to assume that at all. You are free to calculate the CSI on some specification other than phenotypic fitness (which is perfectly possible – you could do it on efficiency at generating some protein, or on the efficiency with which that protein performs some cellular function, even though that protein had no effect – or a deleterious effect – on phenotypic fitness).

But Dembski’s claim is that CSI of chi>1 can only be generated by Design. I am demonstrating that it can be generated by evolutionary processes, namely heritable variance in reproductive success.

Therefore, if I succeed, I have supplied a practicable and replicable falsification of Dembski’s claim.

Attempting to imply that I have cheated by using natural selection is clearly absurd because natural selection is precisely the mechanism Darwinists propose for generating CSI. Saying that Natural Selection can’t do it without mutations is also pointless because without mutations there can be no Natural Selection. Saying that it’s cheating to select for CSI is absurd because the whole point of the CSI concept is to claim that the kinds of beneficial functions in biology that are said to exhibit CSI cannot be produced by Darwinian mechanisms, i.e. by Natural Selection.

So far from me shifting the burden, I’m trying as hard as I can to keep it where Dembski himself put it! He said that only Design could result in chi>1. Evos say that Darwinian mechanisms can do this. I’m demonstrating that this is true.

Where is the shifted burden?

Here we go: lower CSI = some low form of microbial life. Higher CSI = mammals, reptiles, avians. Environmental pressure = introduction of toxic gas that poisons & kills off all creatures that breathe via lungs. It just so happens that the fittest organisms in that environment are ones with much, much less CSI. Note: the lack of CSI isn’t per se what saved them (all sorts of non-lung CSI would have been fine), it was just the increased CSI that happened to develop lung breathing that killed off trillions of high CSI organisms because of a new environmental pressure.

Fine. No-one, least of all me, is arguing that CSI is not necessarily associated with increased fitness. I’ve already given examples of where that could be the case.

But nor is anyone arguing that NS must increase CSI. All we are arguing is that Dembski is wrong to assume that chi>1 signifies intelligence. My exercise is not “intelligent” in that it has no foresight. It is purely Darwinian. Yet it’s busy generating CSI with chi>1.

40. William J Murray: “So now we’re just going to assume that higher CSI = more fit, and I have to demonstrate the converse? Wheee! The burden has been shifted! We go from you making your case, to me having to prove otherwise?”

Sounds like you disagree with Dembski then.

Dembski says if I have a 500 bit window, (complexity), it does not on its own infer a designer, but couple that with “specified” functionality, and I can infer a designer.

THEREFORE, if I have a less “fit” organism, it would indicate a lower value of CSI for any level of “complexity”, (number of bits), in my window of interest.

THEREFORE, CSI drops with a decrease in fitness.

41. William J. Murray: So now we’re just going to assume that higher CSI = more fit, and I have to demonstrate the converse? Wheee! The burden has been shifted! We go from you making your case, to me having to prove otherwise? Here we go: lower CSI = some low form of microbial life. Higher CSI = mammals, reptiles, avians. Environmental pressure = introduction of toxic gas that poisons & kills off all creatures that breathe via lungs. It just so happens that the fittest organisms in that environment are ones with much, much less CSI. Note: the lack of CSI isn’t per se what saved them (all sorts of non-lung CSI would have been fine), it was just the increased CSI that happened to develop lung breathing that killed off trillions of high CSI organisms because of a new environmental pressure.

The introduction of the toxic gas altered the environmental specification to “no lungs”, whereas before it had included them. NS particularly relates to the “S” in CSI. It arguably increases or tightens the specification in your example.

It is the “S” that Dembski particularly associates with intelligent design.

Ten consecutive heads in coin tosses is supposed to show specification, not complexity. The tails could have been poisoned!

42. Just to clarify (I’ll post my script shortly): I am evaluating the fitness of my critters on the basis of how large the product of the sizes of their runs-of-heads is.

That is also the specification of interest, just as the specification for a protein-coding sequence would normally be how well its protein performs its function in the organism that bears it. My example is exactly analogous to that.

And only a very small number of sequences, out of the total number possible, give rise to very high products. Above about 10^58 the proportion is so tiny that the sequences that good and better form such a small subset of the total possible number of sequences that randomly generating such a sequence, without natural selection, is unlikely in the history of the universe.

With natural selection it is easy, however – selection, that is, for sequences with high products of sizes of runs-of-heads.

Therefore, high CSI is the pattern that signifies selection, not, as Dembski says, Design. Although, interestingly, you could argue that by Dembski’s own definition of intelligence, natural selection is intelligent, given that “choose” and “select” are synonyms: “the power and facility to choose options”.

And I don’t think Dembski would quarrel with my exercise on the grounds Joe G and William have suggested. What he would say is that I have “smuggled” in information in the form of the fitness criterion.

My answer is: you don’t have to “smuggle” fitness criteria in real life. They are abundant in the form of the hazards and resources in our environment.
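
The selection scheme described in the OP – random 500-toss series, fitness equal to the product of the sizes of runs-of-heads, cull the weakest half, refill with point-mutated copies of the survivors – can be sketched in Python (editorial illustration; the original script is MatLab, and the mutation rate and generation count below are guesses, not taken from the post):

```python
import random

N = 500        # tosses per series
POP = 100      # population size, as in the OP
GENS = 200     # illustrative; the OP runs for many more generations
MUT = 0.005    # per-toss mutation probability (an assumption, not from the OP)

def fitness(series):
    """Product of the sizes of the runs of heads (empty product = 1)."""
    product, run = 1, 0
    for toss in series:
        if toss == 'H':
            run += 1
        elif run:
            product *= run
            run = 0
    return product * (run or 1)

def mutate(series):
    """Flip each toss independently with probability MUT."""
    return ''.join(('T' if t == 'H' else 'H') if random.random() < MUT else t
                   for t in series)

random.seed(1)
pop = [''.join(random.choice('HT') for _ in range(N)) for _ in range(POP)]
start_best = max(fitness(s) for s in pop)

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                    # cull the 50 lowest products
    pop = survivors + [mutate(s) for s in survivors]

end_best = max(fitness(s) for s in pop)
# end_best ends up vastly larger than start_best, climbing toward the 10^58 range
```

Because the survivors are kept unchanged each generation, the best product never decreases, and accumulated point mutations drive it steadily upward, just as described in the OP.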

43. Elizabeth: Just to clarify (I’ll post my script shortly): I am evaluating the fitness of my critters on the basis of how large the product of the sizes of their runs-of-heads is.

And just a little clarification about the function you are looking at.

We already know that the Shannon entropy is maximized when all probabilities are equal. So all we are doing is partitioning sets of H’s using the T’s as partitions. Then the probabilities within each “box” will be determined by how many H’s there are in that box. If all the probabilities are equal, the negative average of the logarithms of the probabilities is maximized, and that means the product of the numbers of heads is maximized.

So we know that all the probabilities must be equal, which means that the numbers of H’s within the partitions are all equal. That makes the length of a partition I+1, where I is the number of H’s and the plus one counts the partition boundary for each set of H’s.

All we have to do is maximize I^(N/(I+1)), which is the same as maximizing its logarithm,

(N/(I+1)) ln(I). We do this for integers, but we can see where the maximum occurs by treating I as a continuous variable x, setting the derivative equal to zero, solving for x, and then finding the nearest integer.

In setting the derivative equal to zero, we must find the solution to

x(1 – ln x) + 1 = 0.

This turns out to be 3.59112… and the nearest integer is 4.

So N = 500, I = 4, and we end up with 4^100 as the maximum.
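
Mike’s derivation is easy to check numerically; here is a Python sketch (an editorial illustration, not part of the original comment) that compares the integer candidates and bisects for the root of x(1 - ln x) + 1 = 0:

```python
import math

N = 500

# compare the log-fitness (N/(I+1)) * ln(I) over small integer run lengths
best_I = max(range(1, 20), key=lambda I: (N / (I + 1)) * math.log(I))
# best_I comes out as 4, giving the maximum product 4**(500/5) = 4**100

# bisect for the continuous optimum: the root of x*(1 - ln x) + 1 = 0 in [3, 4]
f = lambda x: x * (1 - math.log(x)) + 1
lo, hi = 3.0, 4.0          # f(3) > 0 and f(4) < 0, with f decreasing here
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
# lo converges to 3.59112..., whose nearest integer is 4
```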

44. Toronto: Sounds like you disagree with Dembski then.

Dembski says if I have a 500 bit window, (complexity), it does not on its own infer a designer, but couple that with “specified” functionality, and I can infer a designer.

THEREFORE, if I have a less “fit” organism, it would indicate a lower value of CSI for any level of “complexity”, (number of bits), in my window of interest.

THEREFORE, CSI drops with a decrease in fitness.

Well, Dembski doesn’t have a whole lot to say about biology, of course. But the Hazen paper, the one that Joe G likes (and so do I), has an equivalent formulation in which the “specification” is function. But that needn’t be coupled to fitness. For example, a sequence could be very efficient at producing a toxic protein. If we evaluated “function” on the efficiency of protein production, we’d have high functional complexity associated with lower fitness. Arguably this is true, for example, of the Huntington’s protein. Or would be, if Huntington’s disease decreased reproductive success, which it may or may not do.

45. CJYman once posted the following:

If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.

Since I don’t know what CSI is or how to calculate it, I will just have to guess what it might mean in this case.

Since we end up with 4^100 as our “maximum fitness” in this case, then perhaps we could take CSI to be the logarithm to base 2 of 4^100 = 2^200. That gives us 200 bits of CSI.

But, as I say, I have no idea what CSI means.
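
Mike’s bits figure checks out in one line (Python, editorial illustration):

```python
import math

# log2 of the "maximum fitness" 4**100: since 4**100 == 2**200,
# this comes out at exactly 200.0 bits
bits = math.log2(4 ** 100)
```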

47. Well, my best product has now reached 8.0828e+58. So it’s creeping up, but ever more slowly! I’ll see where it’s got to in the morning.

But it clearly works. That number is almost certainly high enough for the proportion of sequences that high or higher to be small enough to give a chi >1

So Dembski is falsified, unless someone can tell me how my calculation of chi is wrong. I calculated it as -log2[10^120*N/2^500], where N was my guesstimate of the number of sequences with products higher than my threshold. Actually N can be far bigger than my estimate, and it still makes it by miles.
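
That arithmetic is easy to check in log space (a Python sketch, editorial; the 10^120 factor is the probabilistic-resources bound from Dembski’s Specification paper, and the values of N below are purely illustrative guesses):

```python
import math

def chi(N, length=500):
    """-log2(10**120 * N / 2**length), computed in logarithms."""
    return length - 120 * math.log2(10) - math.log2(N)

# chi stays above Dembski's threshold of 1 for any N up to roughly 2**100,
# so even a very generous guesstimate for N leaves plenty of headroom
generous = chi(10 ** 10)   # about 68
```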

48. CSI- Complex Specified Information.

Information- see Shannon, Claude

(When Shannon developed his information theory he was not concerned about “specific effects”:

The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning.- Warren Weaver, one of Shannon’s collaborators

And that is what separates mere complexity (Shannon) from specified complexity.)

Specified Information is Shannon Information with meaning/ function

Complex Specified Information is 500 bits or more of specified information

MathGrrl wants a mathematically rigorous definition of CSI and I say that is like asking for a mathematically rigorous definition of a computer program (which contains CSI).

The mathematical rigor went into calculating the probabilities that got us to 500 bits of SI = CSI

Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL

In the preceding and following paragraphs William Dembski makes it clear that biological specification is CSI- complex specified information.

In the paper “The origin of biological information and the higher taxonomic categories”, Stephen C. Meyer wrote:

Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information–that is, specified complexity from mere complexity. This review will use this term as well.

Biological functionality is specified information.

So what do we have to do to see if it contains CSI? Count the bits and figure out the variation tolerance because if any sequence can produce the same result then specified information disappears.

And again, CSI is all about origins…

49. Joe G:
CJYman once posted the following:

If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information.

Well, that’s not Dembski’s definition. His is the opposite. He uses compressibility as a measure of specification. From Dembski’s paper:

To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.

All I’m demonstrating is that Dembski is incorrect.
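
Dembski’s point in the quoted passage – that compressible sequences form a small subset of all sequences – can be illustrated with a crude Python check (editorial; zlib is a stand-in for algorithmic compressibility): the periodic 4^100 winner compresses to far fewer bytes than a typical random series.

```python
import random
import zlib

random.seed(0)
periodic = ('HHHHT' * 100).encode()    # the highly ordered, maximum-product series
typical = ''.join(random.choice('HT') for _ in range(500)).encode()

ordered_size = len(zlib.compress(periodic, 9))
typical_size = len(zlib.compress(typical, 9))
# ordered_size is far smaller than typical_size
```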

50. Might I point out that you can’t calculate the improbability of a sequence unless you know at least something about the history of the sequence.

What Dembski seems to be doing is making a Bayesian inference and simultaneously denying having to know anything about the likely actors that could have produced the string. I believe Dembski denies the applicability of Bayes unless it is convenient, as in calculating the probability that an election has been rigged.

It has always seemed odd to me that when deciding which of two agents is responsible for the history of a gene sequence, that the agent that has no physical presence and has never been observed is preferred to an agent that is observable and which has been the subject of a century and a half of active research. I suppose that’s why Dembski avoids discussing Bayes.

The other thing being done here is the implied comparison of a gene sequence with the lock combination, the combination that has to be perfect in all 400 digits in order to have any function at all.

This is demonstrably not the way genetic sequences behave.

51. I admit I don’t understand why microbes are less specified than birds. Or why they are less evolved. I’m not going to be convinced that they are even less complex without some operational definition of complexity.

Seriously, for all I know the Designer specified a biosphere composed entirely of bacteria (and viruses preying on them), in which case He got it ALMOST right, but not quite, and these bigger critters are errors, noise in the system, outcomes that fail to meet the specification.

So maybe it would reflect my confusion most accurately to say that I am NOT seeing any useful, quantified, operational definitions of complexity, specification, OR information. And if NONE of the terms in CSI are defined in some operational way that they can be measured (to everyone’s agreement) and compared, we’re either discussing how many blurks it takes to gronch, or else we’re using purely post hoc definitions based on gut hunches. And THESE are quite clearly based on theological preconceptions.

52. And btw, here is my “semiotic agent’s” description of the specified subset:

Sequences of 500 coin-tosses in which the product of the sizes of runs-of-heads exceeds [Jackpot].

53. Elizabeth: “Well, Dembski doesn’t have a whole lot to say about biology, of course. ”

That’s the problem!

I am looking at this from the “functional” point of view of an n bit window of “information”, as Dembski does, not “survivability” of the entire object.

As an example, the odds of me growing instead of fingernails, razor blades complete with the brand name “Wilkinson Sword” written on them, would exhibit loads of CSI according to Dembski, but might prove fatal to me!

54. Well, that’s not Dembski’s definition.His is the opposite.He uses compressibility as a measure of specification.From Dembski’s paper:

All I’m demonstrating is that Dembski is incorrect.

Again apples and oranges- you really need to read “No Free Lunch” – really

55. Elizabeth:
Well, my best product has now reached 8.0828e+58. So it’s creeping up, but ever more slowly! I’ll see where it’s got to in the morning.

But it clearly works. That number is almost certainly high enough for the proportion of sequences that high or higher to be small enough to give a chi >1

So Dembski is falsified, unless someone can tell me how my calculation of chi is wrong. I calculated it as -log2[10^120*N/2^500], where N was my guesstimate of the number of sequences with products higher than my threshold. Actually N can be far bigger than my estimate, and it still makes it by miles.

Umm you cannot falsify Dembski until you address his argument, which you haven’t.

56. Flint: “So maybe it would reflect my confusion most accurately to say that I am NOT seeing any useful, quantified, operational definitions of complexity, specification, OR information. ”

I agree.

You would think that the ID side would have a whole list of worked out examples, clearly showing their work, but they don’t.

I see CSI as ID’s way of tying evolution to the toss of dice.

57. Joe G: Well, that’s not Dembski’s definition. His is the opposite. He uses compressibility as a measure of specification. From Dembski’s paper:

All I’m demonstrating is that Dembski is incorrect.

Again apples and oranges- you really need to read “No Free Lunch” – really

I assume that Dembski intended that paper to be internally coherent. Moreover, he actually says it supersedes his previous treatments of CSI. And how can “algorithmically incompressible” not mean the opposite of “algorithmically compressible”?!

And I’ve read his NFL papers. They don’t help his case. I’ve just demonstrated that for this landscape evolutionary search is way more efficient than random search.

I’ll say it’s “apples and oranges”!

58. Joe G: Umm you cannot falsify Dembski until you address his argument, which you haven’t.

I’ve addressed it head on. I’ve used his exact formula to calculate an exactly equivalent scenario to his own examples (coin-tosses), and demonstrated that I can get a sequence with chi>1 from evolutionary processes only (random point mutations and natural selection).

It’s a direct falsification.

59. Mike, I’m recording the lineages of the genomes, so provided the thing doesn’t crash overnight, I’ll post the lineage of the winner tomorrow.

60. Joe G: Again apples and oranges- you really need to read “No Free Lunch” – really

I would seriously suggest that you have not read it yourself. You don’t appear to have the ability.

So if you are going to continue to snark at and taunt people, you should at least demonstrate that you can understand some math.

Elizabeth is giving a pretty clear demonstration here, and you don’t even know what is happening.

61. Elizabeth: Mike, I’m recording the lineages of the genomes, so provided the thing doesn’t crash overnight, I’ll post the lineage of the winner tomorrow.

Cool!

62. Elizabeth,

I suggest you are being disingenuous, but maybe that’s just my ignorant impression. What you pretend not to deal with is Dembski’s underlying argument, which is (of course) that goddidit. If using his methods you can show that Dembski’s God is not necessary, you are doing it wrong. Even if you can successfully show that Dembski’s god can be factored out of any possible coherent interpretation of his formulation, then you are REALLY doing it wrong. Surely you’re aware that Dembski’s entire purpose, from day 1, has been to “find” his God lurking behind biological processes which are exasperatingly easy to understand without Him.

I think at some point in the past you attempted, and subsequently gave up trying, to get any ID proponent to produce a single actual CSI calculation for any biological entity. So as I read it, you are attempting to find ever-simpler implementations of Dembski’s ideas, all of which make no use of Dembski’s God. And no matter how simple you make it, if your example works “you haven’t addressed his arguments”.

And you haven’t! At least, you are superficially addressing what he SAYS, but not what he MEANT. He MEANT that goddidit. You know that, the ID proponents know that, we all know that. Dembski’s rationalizations and circumlocutions and mathematical language can be directly refuted, as you keep doing. But his Faith, his rejection of natural processes producing natural results, can’t be refuted by the misguided expedient of addressing his words. And that’s why you’re racing around playing whack-a-mole with Joe G and William Murray. You are unwilling to address a spiritual issue on spiritual terms.

63. Elizabeth: I assume that Dembski intended that paper to be internally coherent. Moreover, he actually says it supersedes his previous treatments of CSI. And how can “algorithmically incompressible” not mean the opposite of “algorithmically compressible”?!

And I’ve read his NFL papers. They don’t help his case. I’ve just demonstrated that for this landscape evolutionary search is way more efficient than random search.

I’ll say it’s “apples and oranges”!

Liz, a computer program is CSI. Furniture assembly instructions are CSI. Encyclopedia articles are CSI. Not one of those can be algorithmically compressed.

Dembski says:

To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.

A collection of non-random sequences.

And guess what? If you don’t get it right the first time, it doesn’t get to reproduce with variation.

But yeah if any ole nucleotide sequence could just start reproducing with variation- that means functional variation- then Dembski would be falsified.

Ya see that random coin toss would equal the sequence you need to match to get replication with variation. If you don’t get it you have to start over.

64. Joe G, I do suggest you read Dembski’s paper (the one linked to in the other thread) carefully. I am solely dealing with the point Dembski makes in that paper which is that patterns that have a chi>1 signify Design. He shows us how to calculate it, and even uses coin-tosses as an example.

What I am saying is that a system of self-replicators that reproduce with heritable variance in reproductive success can also produce patterns with chi>1.

Yes, you have first to have the system of self-replicators, reproducing with heritable variance in reproductive success, and if Dembski’s argument was that only a Designer could produce a self-replicator that replicates with heritable variation in reproductive success, that would be quite different. But Dembski, specifically, says that Darwinian processes cannot create chi>1.

I have just demonstrated that they can. They can increase it from levels well below threshold to well above.

65. Flint:
Elizabeth,

I suggest you are being disingenuous, but maybe that’s just my ignorant impression. What you pretend not to deal with is Dembski’s underlying argument, which is (of course) that goddidit. If using his methods you can show that Dembski’s God is not necessary, you are doing it wrong. Even if you can successfully show that Dembski’s god can be factored out of any possible coherent interpretation of his formulation, then you are REALLY doing it wrong. Surely you’re aware that Dembski’s entire purpose, from day 1, has been to “find” his God lurking behind biological processes which are exasperatingly easy to understand without Him.

I think at some point in the past you attempted, and subsequently gave up trying, to get any ID proponent to produce a single actual CSI calculation for any biological entity. So as I read it, you are attempting to find ever-simpler implementations of Dembski’s ideas, all of which make no use of Dembski’s God. And no matter how simple you make it, if your example works “you haven’t addressed his arguments”.

And you haven’t! At least, you are superficially addressing what he SAYS, but not what he MEANT. He MEANT that goddidit. You know that, the ID proponents know that, we all know that. Dembski’s rationalizations and circumlocutions and mathematical language can be directly refuted, as you keep doing. But his Faith, his rejection of natural processes producing natural results, can’t be refuted by the misguided expedient of addressing his words. And that’s why you’re racing around playing whack-a-mole with Joe G and William Murray. You are unwilling to address a spiritual issue on spiritual terms.

I’d be more than willing to address the spiritual issue on spiritual terms, and indeed, have occasionally done so on UD – I have grave theological issues with ID as well.

But the argument that Dembski presents in that paper is a simple mathematical one, and it is simply and demonstrably false.

As my computer is busy demonstrating (not that it has not been done before, many times, but this script has the bonus of taking a scenario directly related to the examples in Dembski’s paper, namely specified subsets of sequences of coin-tosses), we can calculate the CSI quite precisely – well, someone might have to help in getting an exact value for that N, but I’ve ballparked it safely enough.

66. Joe G: Dembski says:
To sum up, the collection of algorithmically compressible (and therefore nonrandom) sequences has small probability among the totality of sequences, so that observing such a sequence is reason to look for explanations other than chance.
A collection of non-random sequences.
And guess what? If you don’t get it right the first time, it doesn’t get to reproduce with variation.

What about crystals? Are they CSI?

67. I must say, I’d find Joe G’s and William’s objections more persuasive if either of them would provide a ballpark calculation for the CSI of the various examples that they’ve referred to.

And Joe, you need to read at least the section of Dembski’s paper on compressibility. You often accuse me of not understanding ID, but I seem to understand that paper considerably better than you do!

68. Elizabeth:
I must say, I’d find Joe G’s and William’s objections more persuasive if either of them would provide a ballpark calculation for the CSI of the various examples that they’ve referred to.

And Joe, you need to read at least the section of Dembski’s paper on compressibility. You often accuse me of not understanding ID, but I seem to understand that paper considerably better than you do!

The paper refers to specification only.

66. Mike Elzinga: What about crystals? Are they CSI?

No- they are not complex- this is in “No Free Lunch” and Orgel says it too- Meyer goes over it in several of his writings also.

70. Elizabeth:
Joe G, I do suggest you read Dembski’s paper (the one linked to in the other thread) carefully. I am solely dealing with the point Dembski makes in that paper which is that patterns that have a chi>1 signify Design. He shows us how to calculate it, and even uses coin-tosses as an example.

What I am saying is that a system of self-replicators that reproduce with heritable variance in reproductive success can also produce patterns with chi>1.

Yes, you have first to have the system of self-replicators, reproducing with heritable variance in reproductive success, and if Dembski’s argument was that only a Designer could produce a self-replicator that replicates with heritable variation in reproductive success, that would be quite different. But Dembski, specifically, says that Darwinian processes cannot create chi>1.

I have just demonstrated that they can. They can increase it from levels well below threshold to well above.

Umm Darwinian processes have not produced any self-replicators with variation.

As I said:

But yeah if any ole nucleotide sequence could just start reproducing with variation- that means functional variation- then Dembski would be falsified.

You can’t even get started with Darwinian processes.

71. Elizabeth: Well, my best product has now reached 8.0828e+58. So it’s creeping up, but ever more slowly! I’ll see where it’s got to in the morning.

If you plot the curve of x^(1/(x+1)), you will see a very broad peak. Your program may find lots of stuff in the range of 10^58, but there is no strong “pull” toward the absolute peak.
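
Mike’s point about the broad peak can be seen by tabulating the per-toss payoff x^(1/(x+1)) for a few run lengths (Python, editorial illustration):

```python
# per-toss payoff x**(1/(x+1)): a run length of x heads repeated across the
# series contributes this factor per toss, so overall fitness ~ value**500
per_toss = {x: x ** (1 / (x + 1)) for x in (2, 3, 4, 5, 6)}
# 2 -> 1.2599, 3 -> 1.3161, 4 -> 1.3195, 5 -> 1.3077, 6 -> 1.2917:
# the peak at 4 is under 1% above its neighbour at 3, hence the weak "pull"
```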

72. So in a world in which coin toss results could be reproduced with random variation, some specification may eventually be produced.

I doubt Dembski will be impressed…

73. Joe G: The paper refers to specification only.

No, it does not. He’s changed his terminology somewhat since CSI, and in this paper (which he clearly states supercedes earlier treatments) he says that “specification” refers to patterns that exhibit both “pattern simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance)”.

If you read the paper you will see that “pattern simplicity” is also referred to as “compressibility” and “event-complexity” is his old “complexity”: essentially Shannon Entropy * string length.

That combination is what he calls “specification” and also “specified complexity”, which he quantifies as “chi”. When the value of chi exceeds 1, he claims that we must reject non-Design.

My Darwinian algorithm, however, produces patterns in which chi exceeds 1. Ergo, Darwinian processes, as well as Design processes, can produce “specified complexity” >1.

74. Joe G:
So in a world in which coin toss results could be reproduced with random variation, some specification may eventually be produced.

I doubt Dembski will be impressed…

Well, he should be. The “some specification” my system produces exceeds his very stringent threshold of 1.

75. Mike Elzinga: If you plot the curve of x^(1/(x+1)), you will see a very broad peak. Your program may find lots of stuff in the range of 10^58, but there is no strong “pull” toward the absolute peak.

Yes, I thought that might be the case. Certainly the rate of increase is now very slow. But still going! I’m at 9.4720e+58 as I go to bed.

76. Joe G: Umm Darwinian processes have not produced any self-replicators with variation.

As I said:

But yeah if any ole nucleotide sequence could just start reproducing with variation- that means functional variation- then Dembski would be falsified.

You can’t even get started with Darwinian processes.

No, you can’t get started with Darwinian processes. Darwin himself famously said that.

But that’s not what Dembski is saying. You may be, but he isn’t.

And we don’t yet know what the simplest possible Darwinian-capable self-replicator is so we can’t compute its CSI. But if it’s <1, then it can happen by chance.

ETA: and it doesn’t even have to. It could also happen by chemistry.

77. But the argument that Dembski presents in that paper is a simple mathematical one, and it is simply and demonstrably false.

Yes, it was simple, And your demonstration is simple. And the argument is simply false. Great. And NOW, why do you suppose Joe G and William Murray can’t see that? Do you propose they are stupid? Your same arguments have been presented (repeatedly) to Dembski. He ignores them. Do you suppose he is also stupid? You have presented analogous (and equally well-supported) arguments on UD, and met with universal rejection. Are they ALL stupid? What could possibly explain such a thumpingly consistent unwillingness to accept what is so simple and obvious?

Sigh. The argument Dembski is making is NOT mathematical, despite careful construction to create that appearance. It is a spiritual argument dressed up in mathematical terms. You can easily refute the math, missing the point in the process. Joe G and William Murray haven’t (and couldn’t) do any of the math; they wouldn’t know a distributive property from eggnog. But they KNOW you are wrong. How do you suppose they know this?

78. The size of the target space can be estimated as follows.

As has been already pointed out, the best solutions are the two periodic sequences in which four Hs are followed by one T: (HHHHT)(HHHHT)…(HHHHT) and (THHHH)(THHHH)…(THHHH). Both yield a fitness equal to 4^100 = 1.61×10^60. These sequences can be viewed as Ts forming a crystal: the spacing between adjacent Ts is always 4 Hs.

Other solutions from the target space can be obtained by slightly changing these “ground states.”

One type of perturbation shifts one of the Ts by one position, so we get THHHTHHHHHT with runs of 3 and 5 Hs. The fitness goes down by a factor (3×5)/(4×4) to 1.51×10^60, so we are well within the target space.

Let’s count these sequences. In Liz’s notation, we convert one of the 4s in {4,4…4} to a 3 and another to a 5. These do not have to be adjacent. There are 400×399=159600 possible ways to choose which 4s to convert. Multiply that by 2 (to account for 2 initial, perfectly periodic states) to get 319200 states in the target space.

(contd)
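olegt’s factors can be checked directly. A small Python sketch (mine, not part of the thread; `fitness` is my own helper returning the product of the lengths of runs of heads):

```python
import re
from functools import reduce

def fitness(seq):
    # fitness = product of the lengths of the runs of heads
    runs = [len(r) for r in re.findall(r"H+", seq)]
    return reduce(lambda a, b: a * b, runs, 1)

ground = "HHHHT" * 100                        # periodic ground state, 500 tosses
perturbed = "HHHT" + "HHHHHT" + "HHHHT" * 98  # one T shifted: runs 3, 5, 4, ..., 4

print(fitness(ground) == 4 ** 100)                       # True
print(fitness(perturbed) * 16 == fitness(ground) * 15)   # True: factor (3*5)/(4*4)
```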

79. Elizabeth: No, you can’t get started with Darwinian processes. Darwin himself famously said that.

But that’s not what Dembski is saying. You may be, but he isn’t.

And we don’t yet know what the simplest possible Darwinian-capable self-replicator is so we can’t compute its CSI. But if it’s <1, then it can happen by chance.

ETA: and it doesn’t even have to. It could also happen by chemistry.

Liz, you don’t have any idea what Dembski is saying- and getting started is what Dembski is talking about.

80. some points-

1- self-replicators with variation are not living organisms

2- you need self-replicators with complexity-increasing-variation, and not just any complexity will do

3- a self-replicator may not have to have CSI- again if any sequence will do then there isn’t any specification

4- A self-replicator with SI that can evolve via Darwinian mechanisms into a replicator with CSI would put a huge hole into CSI = design

5- Self-replicators are imaginary as even RNA replication takes TWO- one RNA for a template and one RNA for the catalyst

6- The evidence now says that the RNA world couldn’t exist without proteins, meaning there wasn’t an RNA world

7- A ribonucleoprotein world is the new RNA world

81. Elizabeth: Well, he should be. The “some specification” my system produces exceeds his very stringent threshold of 1.

Your system has nothing to do with his claims.

82. In the case of a maximally uncertain binary population, the set would need to be simply describable, such as “every fourth coin is tails”, or “heads and tails alternating”, or “the first 100,000 are heads and the rest are tails”, etc. I would say any description under about 1000 characters would be considered simply describable. Unless I’m misunderstanding, taking repeats as sets and adding them up is not simply describable.

83. Yes, I do have an idea of what he is saying, Joe, because I have read his paper very carefully, something it appears you have not done, as you have made several blatant errors, like getting the compressibility thing diametrically wrong, and failing to note that his definition of “specification” incorporates both complexity and compressibility. And is also referred to as Specified Complexity. And is Information. And that he regards it as his most up-to-date treatment of CSI.

I’m off to bed now, so I won’t be able to release any more of your comments from the holding tank for a few hours. If you can go for a week without inducing me to send any to guano, I might release you into the wild.

Sleep well

84. Elizabeth:
Yes, I do have an idea of what he is saying, Joe, because I have read his paper very carefully, something it appears you have not done, as you have made several blatant errors, like getting the compressibility thing diametrically wrong, and failing to note that his definition of “specification” incorporates both complexity and compressibility. And is also referred to as Specified Complexity. And is Information. And that he regards it as his most up-to-date treatment of CSI.

I’m off to bed now, so I won’t be able to release any more of your comments from the holding tank for a few hours. If you can go for a week without inducing me to send any to guano, I might release you into the wild.

Sleep well

Liz- I am starting to not care- if you think you can read that one paper- in isolation- and know what Dembski is saying, without running it by him, I say you are just whacked.

85. (contd)

OK, the <sup>n</sup> trick for superscripts did not work. Too bad.

We can convert more 4s into 3s and 5s. With four conversions, the fitness is still an acceptable 1.41×10^60. There are 2×(400×399×398×397)/(2×2) = 1.26×10^10 such configurations. That’s way more than the number of ground states.

We can convert up to fourteen 4s into 3s and 5s with the fitness staying above 10^60. The number of these configurations is 2×400!/(386!×7!×7!) = 1.67×10^29. I think these sequences represent the bulk of the target space.

86. There is, as people have noted here, no particular reason why natural selection should generally tend to increase complexity of the organism. That means that NS is not a good explanation for increased complexity if that complexity does not increase fitness. When there is increased “complexity” which increases fitness, then that is what we need to explain, and fortunately NS is then relevant to explaining it.

Dembski just says, in effect, to use some relevant scale. I try to use a scale related directly to fitness, to avoid getting tangled in these issues. We are, after all, basically talking about how adaptation can be explained. Even though at certain points Dembski talks of using compressibility of the description of the phenotype as the criterion, I think that this runs into the problem that it is unrelated to adaptation. A perfect sphere is then more Complex Specified than is an actual organism.

Using fitness itself is better, and lets us move on to the more real issue, of whether Dembski’s conservation laws work and are relevant.

87. (contd)

There are other ways to introduce defects into the “ground states” consisting of all 4s, but they probably contribute less to the target space than the above-described sequences.

For example, we can flip one T somewhere in the middle, converting two adjacent runs of 4 Hs into a single run of 9. This reduces the fitness by a factor 9/(4×4), thus making it less than 10^60. It looks like this won’t help.

However, we can move one of the Ts between the 9 Hs and an adjacent run of 4 Hs so that {…9,4…} becomes {…8,5…} or, even better, {…7,6…}. Then the fitness goes down by a factor (7×6)/(4×4×4), which keeps the new sequence in the target space, barely so. The number of these sequences is only 641600, which does not add much to the previously identified ones.
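These factors can be confirmed with exact integer arithmetic (a quick check of my own, in Python rather than the thread’s MatLab):

```python
ground = 4 ** 100                          # fitness of the periodic ground state

# Flipping one T to H merges two runs of 4 into a run of 9: factor 9/(4*4).
merged = ground * 9 // (4 * 4)

# Moving a T so {...,9,4,...} becomes {...,7,6,...}: factor (7*6)/(4*4*4).
resplit = ground * (7 * 6) // (4 * 4 * 4)

print(merged < 10 ** 60)    # True: out of the target space
print(resplit > 10 ** 60)   # True: back in, barely
```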

88. junkdnaforlife:
In the case of a maximally uncertain binary population, the set would need to be simply describable, such as “every fourth coin is tails”, or “heads and tails alternating”, or “the first 100,000 are heads and the rest are tails”, etc. I would say any description under about 1000 characters would be considered simply describable. Unless I’m misunderstanding, taking repeats as sets and adding them up is not simply describable.

It’s a lot more describable than a DNA sequence coding for a protein

Yes, it’s simply describable, and in any case, Dembski’s condition is that the subset should consist of patterns that are as simply describable as, or more simply describable than, the observed pattern.

My patterns are describable as: sequences of 500 coin tosses where the product of the lengths of runs-of-heads is greater than [threshold].

I guess you have a point though, in that I should also include even easier-to-describe patterns in my N. I think that’s a problem for Dembski, though, because he doesn’t operationalise his “compressibility” thing at all well, and I can’t think of a way of operationalising it that would make a DNA sequence for a protein one of a really rather large subset of sequences that could be “compressed” by a “semiotic agent” with sufficient ingenuity!

The fact remains though that NS (as exemplified in my little script) can find a member of a very rare subset of patterns really quite rapidly, where without NS it would require more than the “probabilistic resources” of the universe (which wouldn’t of course mean that it couldn’t happen – improbable things do).

Dembski’s counter-argument would be, I think, that I have “smuggled” the specification into my fitness function. Well, it’s not smuggled – it’s there in plain sight. My counter-rebuttal is that the natural world presents any population of self-replicators with abundant fitness criteria, and those criteria will fairly reliably result in patterns that match the criteria, and enable the populations to thrive.

So it makes no sense (IMO) to then say: hey, look! How improbable that this population should “by chance” have a genome that fits its phenotypes so well for this environment! Because what has happened, of course, is that the genome fits its phenotype for its environment precisely because the specification (fitness for environment) is right there in the environment!
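For anyone who doesn’t read MatLab, the scheme described in the OP can be sketched in a few lines of Python (my own rough translation, not Lizzie’s actual script): a population of 100 random 500-toss series, cull the 50 lowest products each generation, and give each survivor one offspring carrying a single random point mutation. Because the survivors are retained unchanged, the best product never decreases.

```python
import random

random.seed(1)  # reproducible run; any seed shows the same qualitative climb

def fitness(genome):
    # product of the lengths of runs of heads
    prod, run = 1, 0
    for c in genome:
        if c == 'H':
            run += 1
        elif run:
            prod *= run
            run = 0
    return prod * run if run else prod

# random starting population of 100 series of 500 tosses
pop = [[random.choice('HT') for _ in range(500)] for _ in range(100)]
start_best = max(fitness(g) for g in pop)

for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:50]                   # cull the 50 lowest products
    offspring = []
    for g in survivors:
        child = g[:]
        i = random.randrange(500)          # one random point mutation
        child[i] = 'H' if child[i] == 'T' else 'T'
        offspring.append(child)
    pop = survivors + offspring

end_best = max(fitness(g) for g in pop)
print(start_best, end_best)
```

Three hundred generations is nowhere near enough to reach the 10^60 jackpot, but the monotone climb of the best product is already obvious.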

89. Joe Felsenstein:
I think that this runs into the problem that it is unrelated to adaptation. A perfect sphere is then more Complex Specified than is an actual organism.

Yes, indeed, which is why I keep bringing up Chesil Beach! Which is a dead simple ranked sorting of pebble sizes over 18 miles. Very compressible, very Complex, and achieved by means of a simple natural sorting algorithm.

I keep meaning to try to compute its CSI. I got started once.

90. Joe G: Liz- I am starting to not care- if you think you can read that one paper- in isolation- and know what Dembski is saying, without running it by him, I say you are just whacked.

I think I can understand what he is saying in that paper. And if that paper only makes sense if read in combination with some other paper, then it’s incompetently written.

But in fact it isn’t, and he specifically says that it supersedes his earlier treatments.

If I was a reviewer of that paper (and it’s got up like a peer-reviewed paper) I would point out that his case is flawed and he needs to tackle my objections before publication.

Many people have done this, in fact, but he hasn’t tackled them.

91. Joe Felsenstein: There is, as people have noted here, no particular reason why natural selection should generally tend to increase complexity of the organism. That means that NS is not a good explanation for increased complexity if that complexity does not increase fitness. When there is increased “complexity” which increases fitness, then that is what we need to explain, and fortunately NS is then relevant to explaining it.

The biologists have generally had the better terminology because their definitions reflect what is based in observation and the laws of chemistry and physics.

There is nothing wrong with trying to find good mathematical ways of pulling out the patterns in the evolution of a system; but these methods had better be anchored back to observational evidence and physical laws.

Fitness makes more sense than some kind of “information” or “compressibility.” Some of the simplest organisms are quite robust while the more complex ones have more things that can go wrong.

A perfect sphere is then more Complex Specified than is an actual organism.

Indeed. Consider a spherical cow.

92. Thanks, olegt :)

That seems to leave me some leeway.

(odd that the tags didn’t work for you. They do for me. hmmm.)

ETA I mean the [sup] tags (substitute <>) lol.

93. Elizabeth: Thanks, olegt :) That seems to leave me some leeway. (odd that the tags didn’t work for you. They do for me. hmmm.)

Yes; I have also been noticing that some of the tags I am used to don’t work.

94. olegt:
(contd)

OK, the <sup>n</sup> trick for superscripts did not work. Too bad.

We can convert more 4s into 3s and 5s. With four conversions, the fitness is still an acceptable 1.41×10^60. There are 2×(400×399×398×397)/(2×2) = 1.26×10^10 such configurations. That’s way more than the number of ground states.

We can convert up to fourteen 4s into 3s and 5s with the fitness staying above 10^60. The number of these configurations is 2×400!/(386!×7!×7!) = 1.67×10^29. I think these sequences represent the bulk of the target space.

Correction: 2×400!/(386!×7!×7!) = 6.48×10^31.
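Since comments 85 and 94 give two different figures for the same expression, it can be settled by evaluating the coefficient as written, with exact integer arithmetic (a check of my own, in Python):

```python
from math import factorial

# Evaluate 2 * 400! / (386! * 7! * 7!) exactly, then show it in scientific form.
val = 2 * factorial(400) // (factorial(386) * factorial(7) ** 2)
print(f"{val:.3e}")
```

Whichever figure is right, it does not change the conclusion that these configurations dominate the target space.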

95. You say:
“sequences of 500 coin tosses where the product of the lengths of runs-of-heads is greater than [threshold].”

Your statement is not simply describable, this is:
Whereas the pattern is (ahem) described simply.

You say:
“It’s a lot more describable than a DNA sequence coding for a protein”

DNA sequences have a near optimal functioning sequence, k/n, where k are arrangements that code for specific function, and n are all possible configurations. This is specification. If we are trying to demonstrate this by flipping coins, then the string must be simply describable. This is how specificity can be mapped with binary populations. Whereas k is the number of simply describable patterns of {H,T}, and n is all possible patterns of {H,T}.

Joe F’s example of a sphere is dead on. I read that in his paper and thought it was a great example of CSI, such that an equiprobable population of {0,1,2,…,9} outputting {3.141592653589793238462643383279502884,…,n}, where n approaches infinity, that can then be compressed into a single character, or Pi, is the most elegant example of CSI.

96. One needn’t compare a functional sequence to all possible functional sequences. It’s only necessary to know if there’s a nearby functional sequence.

DNA sequences have a near optimal functioning sequence, k/n, where k are arrangements that code for specific function, and n are all possible configurations. This is specification. If we are trying to demonstrate this by flipping coins, then the string must be simply describable. This is how specificity can be mapped with binary populations. Whereas k is the number of simply describable patterns of {H,T}, and n is all possible patterns of {H,T}.

I don’t understand. The DNA sequence is not required to be simply describable in order to be called *specified*, but the sequence of coin flips must be simply describable in order to be called *specific*?

Joe F’s example of a sphere is dead on. I read that in his paper and thought it was a great example of CSI, such that an equiprobable population of {0,1,2,…,9} outputting {3.141592653589793238462643383279502884,…,n}, where n approaches infinity, that can then be compressed into a single character, or Pi, is the most elegant example of CSI.

Again, I don’t understand. In what sense is the character Pi a *compression*? It is simply a symbol that we use to signify a particular infinite sequence. Under that logic, it seems to me that I can assign a symbol to signify any particular sequence I wish and then call the sequence *compressible*.

98. Sorry, first sentence is supposed to read: The DNA sequence is not required to be simply describable in order to be called *specified*, but the sequence of coin flips must be simply describable in order to be called *specified*?