Intelligent Design advocates are still talking about CSI and determining the value of it.
CSI measures whether an event X is best explained by a chance hypothesis C or by some specification S.
So I’d like this thread to be a list of biological entities and the value of CSI that has been determined for each.
If no entries are made, then I believe that would demonstrate that CSI might in principle measure X, Y, or Z, but that it never actually has done so.
Out of interest, what is the CSI of a bacterial flagellum?
But to what, exactly, is he trying to change the subject? I’m trying to be charitable to Eric and assume he has a reason for what he posts, and that he understands the concerns about multiple definitions of CSI and the need to pick one, or to justify explicitly why he does not need to.
The same charitableness applies to my bit on the XOR stuff. I assume Eric knows the difference between an XOR and a permutation, so why would he go to an XOR? I agree it does not make sense when considering the efficiency of either the implementation or the specification of f. Maybe there is some theoretical reason to pick a sequence of simple XORs that Eric had in mind? I admit that seems to be extreme charitableness.
Right, I knew about that paper, but neither “design” nor “stochastic” appears, according to my text search of that paper. Besides providing that bound on ASC, the paper (e.g. in the conclusion) claims that ASC measures “how well a probability explains a given event”.
In the Game of Life paper, that phrase changes to
[start of quote]
“Objects with high ASC defy explanation by the stochastic process model. Thus, we expect objects with large ASC are designed rather than arising spontaneously. Note, however, we are only approximating the complexity of patterns and the result is only probabilistic.” (p. 586)
[end of quote]
But going from “wrong probability model” to “it must be designed” seems to omit some steps. Like, what other probability distribution is possible given what science tells us about the world?
That issue reminds me of the issue with Dembski’s CSI version that depends first on determining the probability that the event/system can be explained by evolution.
I understand you to have noted that issue for the G of L paper by criticizing the chosen probability distribution as ignoring the physics of the game.
Looking more closely at the G of L paper, I found this at the end of the introduction section:
[start of quote]
“Use of KCS complexity has been used elsewhere to measure meaning in other ways. Kolmogorov sufficient statistics [2], [23] can be used in a two-part procedure. First, the degree to which an object deviates from random is ascertained. What remains is algorithmically random. The algorithmically nonrandom portion of the object is then said to capture the meaning of the object [24]. The term meaning here is solely determined by the internal structure of the object under consideration and does not directly consider the context available to the observer as is done in ASC”
[end of quote]
I think that “algorithmic randomness” is related to the maximum entropy principle that Eric name-dropped in this thread and in an OP. Because he did not explain why that mattered to ASC/CSI, I think “name-dropping” is the right description of his usage of these terms.
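If I’m recalling the ASC papers correctly (this is my paraphrase, not a quote), the definition is

ASC(x, C, P) = I(x) - K(x | C) = -log2 P(x) - K(x | C)

where K(x | C) is the conditional Kolmogorov complexity of x given the context C. That conditional term is where the “context available to the observer”, mentioned at the end of the quote, enters.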
I’m not sure if that claimed advantage of adding context to the standard approach, mentioned at the end of the quote, is really an advantage. I need to go back and review what you and Tom said about that in the other thread.
Random thought:
Suppose I generate a few thousand bits of data via radioactive decay.
Is there any computational or logical process for ensuring that this sequence does not occur in the expansion of pi?
petrushka,
No. And though it hasn’t been proven, mathematicians suspect that every possible finite sequence of digits will occur somewhere in the decimal expansion of pi.
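A computer search can only ever confirm that the sequence appears in some finite prefix of pi; it can never establish that it doesn’t appear at all. A minimal sketch of such a search, assuming Python with the mpmath library available (the function name and the 10,000-digit default are just illustrative; you’d also have to write your bit string as decimal digits first, e.g. by converting the number to base 10):

import mpmath
from mpmath import mp

def find_in_pi(pattern: str, num_digits: int = 10_000) -> int:
    """Return the 0-based position of a decimal-digit string within the
    first num_digits digits of pi (counting the leading '3' as position 0),
    or -1 if it is not found in that prefix.  A result of -1 says nothing
    about whether the pattern occurs later in the infinite expansion."""
    mp.dps = num_digits + 5                          # working precision, in decimal places
    digits = str(mp.pi).replace('.', '')[:num_digits]
    return digits.find(pattern)

# Example: the digits '1415926' start right after the leading 3.
# find_in_pi('1415926')  ->  1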
OMagain,
I cannot assess your claim until you support it.
From known empirical evidence you cannot, so I have to assume it is false at this point unless you bring new information to the table.
keiths:
Bruce:
To Montañez’s version of CSI, rather than Dembski’s. That’s why he keeps talking about “normalization by kardis”.
Just sloppiness, as far as I can tell. He remembered that Joe was scrambling the image, but he didn’t remember that it was being done by permutation — and he didn’t bother to check:
Bruce,
Yes. In Dembski’s 2005 version of CSI, he’s explicit that P(T|H) must account for “Darwinian and other material mechanisms”. Somehow that lesson was forgotten when it came to ASC.
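Going from memory of the 2005 paper rather than quoting it, the formula there is roughly

χ = -log2[ 10^120 · φ_S(T) · P(T|H) ]

where φ_S(T) counts the specificational resources (the number of patterns at least as simple as T that the semiotic agent S could have specified) and H is the chance hypothesis, which Dembski says must take Darwinian and other material mechanisms into account.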
Right. They make this confession in the paper:
I commented:
LOL! Good old Bill. Demands the exact thing from others he refuses to provide for his own ID nonsense.
Looks like Bill’s ID claims have failed miserably again. Too bad. 🙂
Not saying it was Eric’s intent, but the “XOR with a one-time pad” function has the ‘benefit’ that its information content scales with the size of the image (as described by Eric), whereas a permutation function could be rather simple and still obliterate the CSI of arbitrarily large images, AIUI.
Either way, citing an XOR obscures the fact that the function need not scale with the data acted upon.
Jock,
Joe’s permutation function was also tied to the size of the image. Here’s his June 1st description:
keiths,
Good point. I had forgotten that Joe had stipulated that level of ‘security’ for his function.
Not sure what “security” is in this context. I didn’t say (until later) how the permutation was chosen (it was with pseudorandom numbers). Lots of other permutations would have done the job of destroying CSI while also being able to “create” it when going the other way. Applying a random dot pattern with an XOR function would do much the same thing.
I was alluding to cryptographic ‘security’. I forgot that you stipulated that the size of the permutation block be equal to the size of the message.
We agree that a smaller block would be sufficient to destroy any CSI, whether f is an XOR or a permutation.
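To make the scaling point concrete, here’s a rough sketch (assuming numpy and a greyscale uint8 image; the 8x8 key size is arbitrary) of an XOR whose description stays the same size no matter how large the image gets:

import numpy as np

def xor_scramble(img: np.ndarray, key: np.ndarray) -> np.ndarray:
    """XOR a uint8 image with a small key tiled across it.  The key (and
    hence the description of f) has a fixed size, independent of the image
    dimensions, and XORing again with the same key undoes the scrambling."""
    h, w = img.shape
    tiled = key[np.arange(h) % key.shape[0]][:, np.arange(w) % key.shape[1]]
    return img ^ tiled

rng = np.random.default_rng(0)
key = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for an image
assert np.array_equal(xor_scramble(xor_scramble(img, key), key), img)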
DNA_Jock,
One doesn’t need “security”. I could have used a permutation that kept odd-numbered columns unchanged and replaced the even-numbered columns with those same columns in reverse order.
I am pretty sure that would destroy like-a-flower-ness. Plus its inverse is easy: it’s the same function applied again.
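Something like this, if I were to write it out (a sketch assuming numpy; I’m numbering columns from 1, so the “even-numbered” columns are the 2nd, 4th, and so on):

import numpy as np

def scramble(img: np.ndarray) -> np.ndarray:
    """Leave the odd-numbered columns alone and reverse the order of the
    even-numbered columns among themselves.  Applying the function twice
    returns the original image, so it is its own inverse."""
    out = img.copy()
    out[:, 1::2] = img[:, 1::2][:, ::-1]   # 0-based indices 1, 3, 5, ... are the even-numbered columns
    return out

img = np.arange(48).reshape(6, 8)          # stand-in for an image
assert np.array_equal(scramble(scramble(img)), img)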
Yes, I noticed those brief references, but his subsequent posts switched to ASC rather than trying to show how Montañez’s CSI solved the issues Joe raised.
Has there ever been any discussion of that Montañez paper in TSZ?
I’ll leave it to Joe to demonstrate the issues with ASC when it comes to science, but I’m still curious about Eric’s name dropping post, particularly the stuff starting with the “things like ID” bulleted list.
Being charitable again, I assume Eric has something in mind when he claims ID is just re-using those ideas.
But how do all those concepts in that OP fit together with ID in general, and with ASC or the Montañez paper in particular, at least in Eric’s thinking?
Exactly. This is why we will never see a real-life worked out ID example of CSI, SI, SCI, ASC or whatever flavour of alphabet soup is current this month. It simply cannot be done because the probabilities involved cannot be estimated to any reasonable degree.
This emperor has no clothes and all they do is try to befuddle us with hocus-pocus math. You actually don’t have to be a math guru to see right through it all.
Eric, next time you visit, please provide a worked-through biological example of this metric that you claim demonstrates design. If you can’t do it yourself, then go and ask one of your ID buddies. If they can’t do it either, then please explain to us what the fecking use is of all of this.
In the absence of a positive response, why would you expect anyone to take it seriously?
faded_Glory:
I predict he’ll fall back on this excuse:
…which would be a bit more believable if he were consistent about it.
fG:
Eric doesn’t like talking about that, either. The response I quoted above came after I asked him about Dembski’s failed attempt at showing that the flagellum was designed.
Bruce,
It’s been mentioned more than once, but I’m not aware of any extended discussions of it.
It’s a defense strategy. When people attack ID, Eric likes to pretend that they are attacking mainstream ideas. That’s the basis for his Fields Medal babbling, for instance.
What he doesn’t realize is that his strategy is self-defeating. If there were nothing new in ID, then Dembski, Marks and Ewert would be plagiarists presenting old ideas as if they were original. On the other hand, if there is something new to ID, then critics can criticize ID without thereby criticizing the mainstream ideas from which ID borrows.
I’ll take a shot at chewing through the leather straps. Perhaps you’ll see an OP in the next day or two. (Apologies to Joe for ignoring another matter.)
Bruce,
In light of recent discussions, it’s worth pointing out that Montañez’s paper uses the CSI described in Dembski (2005), for which Dembski never claimed conservation, and not the version in Dembski (2002), for which he did.
Eric has said that Dembski’s 2002 claim is correct, which is why Joe and I don’t let him get away with changing the subject to Montañez.
Actually, it doesn’t, though Montañez claims that it does. That’s the main thing I hope to explain in an OP. Note that Dembski (2005) assigns specified complexity to the set T containing all of the possible outcomes that match pattern T. (Dembski denotes the pattern and the set of matching outcomes identically.) Montañez assigns specified complexity to individual outcomes.
Suppose that object is in the vocabulary of the semiotic agent, and that all possible outcomes are objects. Suppose also that the actual outcome is very low in probability. Dembski (2005) will not say that a particular low-probability object is high in specified complexity, because it is the probability of an object (= 1) that he considers. Montañez will say that a low-probability object is high in semiotic specified complexity, because he mistakenly puts the (very low) probability of the particular outcome in place of the probability (= 1) of the set of objects [outcomes] matching the pattern object.
For the curious, Tom is talking about the following equation from Montañez’s paper. Note that Montañez is using p(x) in place of Dembski’s P(T|H).
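Going from memory of the paper (so treat this as my paraphrase rather than a quotation), the kardis and the specified complexity built from it look roughly like

κ(x) = r · p(x) / ν(x)    and    SC(x) = -log2 κ(x) = -log2( r · p(x) / ν(x) )

where ν(x) is the value the specification function assigns to the outcome x and r is a scaling constant; the “normalization by kardis” mentioned upthread refers to this construction.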
There is something new in ID, but what critics criticize are the mainstream ideas from which ID borrows.
EricMH:
This thread indicates otherwise. Those 20+ errors are original to Ewert, Dembski and Marks.
I’ve come a bit late to the party. Is there a list of things with calculated CSI?
It’s a null set as far as biology goes.
The one ‘new’ criticism there is by RichardHughes, and it is the same criticism as Dr. Felsenstein and Dr. English, where he uses a different probability distribution than the original chance hypothesis, and claims victory. All this criticism does is substantiate the ASC claim that it can falsify the chance hypothesis, so it is not really a criticism but a corroboration.
Additionally, RH’s beef is really with randomness deficiency, which ASC is based on. Another piece of mainstream mathematics that you can earn a Fields Medal by refuting. Like I say, you guys are aiming way too low wasting your time with ID writers when you could be taking on the entire mathematical establishment.
You’re a terrible bluffer, Eric.
That thread is full of unanswered, detailed criticisms, not of mainstream mathematics, but of the claims that Ewert, Dembski, and Marks make in their Game of Life paper.
Where are your rebuttals?
Hey Eric, why don’t you submit your mathematical proof evolution is impossible to any mainstream science journal? A Fields medal is nothing compared to the Nobel Prize you’re sure to earn.
Or maybe deep down you realize you’re just an egotistical empty drum making lots of meaningless noise and don’t want to embarrass yourself any further.
Depends what you mean exactly, but I’d be happy to.
If by ‘evolution is impossible’ you mean evolution cannot generate ASC, then that is already proven in ‘Improbability of ASC’. Evolution is a stochastic process, so generates a probability distribution over possible events. We set the chance hypothesis to that probability distribution, and by the ‘improbability of ASC’ theorem evolution cannot generate X bits of ASC with probability better than 2^-X.
There you go!
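For reference, the theorem Eric is invoking says (as I understand the ‘improbability of ASC’ result) that for any chance hypothesis given by a distribution P,

Pr[ ASC(X, C, P) ≥ α ] ≤ 2^(-α)

that is, outcomes carrying α or more bits of ASC computed against P itself arise under P with probability at most 2^(-α). Note that the bound is always relative to whichever P is plugged in, which is where the question below about the per-step distributions comes in.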
EricMH,
This sounds like a refutation of the ‘tornado in a junkyard’ scenario, not a refutation of evolution through stepwise changes and cumulative selection, where the added complexity between successive steps can actually be quite minor. Surely the probability distribution of each single step is vastly different from the probability distribution of the final end product when that is regarded as a one-step occurrence? Moreover, the probability distributions of each step are by no means necessarily identical, so how do you go about specifying each one of these?
When will you be publishing this evolution killing evidence and going for your Nobel Prize?
With your huge ego and minuscule knowledge of actual evolutionary biology you’re what the British refer to as “too clever by half”.
The successive steps scenario still sets up a probability distribution over outcomes, so the main argument still stands.