A robot is presented with a collection of 2000 randomly configured fair coins. The robot orients them all to heads. How much CSI is evidenced by the 2000 coins after the robot is done with them? I said 2000 bits. Winston said 0 bits. Other IDists said something in between.
If IDists can’t agree on such a simple example, then why use the CSI convention to analyze designs at all? I have lobbied instead for using deviations from expectation as an indicator of how credible it is to reject the chance hypothesis for an artifact.
Whether ID is true or not is a separate issue from the issue of metrics which IDists can agree on. KairosFocus lobbies heavily for his FSCO/I.
Why even calculate bit values for CSI? The primary issue with the Explanatory Filter is whether the chance hypothesis can be rejected. By chance, I mean a process that maximizes uncertainty. An uncertainty maximizing process can easily be rejected as an explanation for the 2000 coins being heads through basic probability.
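The "basic probability" rejection can be sketched in a few lines. This is my own illustration, not a formula from any of the participants: under the maximum-uncertainty chance hypothesis, the 2000 coins are independent and fair, so the all-heads outcome has probability 2^-2000.

```python
# Chance hypothesis: 2000 independent fair coins (maximum uncertainty).
# The probability of the all-heads configuration is 2**-2000, far below
# any conventional rejection threshold (e.g. Dembski's universal
# probability bound of 10**-150).
n = 2000
log2_p = -n  # log2 of P(all heads) = log2(2**-n)
surprisal_bits = -log2_p

print(f"P(all heads) = 2^{log2_p}")
print(f"Surprisal under the chance hypothesis: {surprisal_bits} bits")
```

No information-theoretic machinery is needed beyond this one exponent; that is the point of the complaint above.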
What if we just found the 2000 coins long after the robot is gone and all the observer had to go on was the set of 2000 coins? Would the CSI number still be 0 bits as Winston asserted?
At this point it doesn’t matter so much who is right. The fact that there is no agreement on what should be a trivial example does not inspire enthusiasm in me. At that point I said it was becoming prohibitive to use CSI as a means of implementing the Explanatory Filter: it is too cumbersome and adds too many sources of confusion.
Here is the differing view of Winston Ewert and our exchange, plus some of my protests:
Mark Frank’s view:
This is not as irrelevant to Darwinism as you think. The answer will depend on:

a) what you define the target as, e.g. all heads, or all the same, or at least 1999 the same, and so on.

b) what assumptions you are making about how the coins got that way – was one coin tossed and then some natural mechanism duplicated it 1999 times, or were 500 tossed and then some neutral mechanism duplicated them three times, or was each individual coin tossed?
The point being that it is nonsense to talk about the CSI in an outcome. It depends on the target and the chance hypothesis you are assuming which underlies it. Dembski’s own formula makes that clear. Your example makes the point rather nicely.
Why make the 2000 fair-coin example needlessly difficult? Does invoking unnecessary fancy math and information theory add force to the argument, or does it just add confusion? A maximized-uncertainty process is not expected to produce 100% heads by many sigmas, therefore we can reject the chance hypothesis. Simple!
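To make the "many sigmas" remark concrete, here is a minimal sketch (my own arithmetic, not from the original exchange), modeling the heads count as a binomial with n = 2000 and p = 1/2:

```python
from math import sqrt

# Binomial model for 2000 fair coins: the heads count has
# mean n*p = 1000 and standard deviation sqrt(n*p*(1-p)) ~= 22.36.
n = 2000
mean = n / 2
sigma = sqrt(n * 0.5 * 0.5)

# All heads is (2000 - 1000) / 22.36 ~= 44.7 standard deviations
# above expectation.
deviation_sigmas = (n - mean) / sigma
print(f"All heads is {deviation_sigmas:.1f} sigmas from expectation")
```

A 44-sigma deviation is decisive under any reasonable significance threshold, which is why no CSI bookkeeping is needed to reject chance here.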
If CSI methods can’t reject the chance hypothesis in such a simple manner, then IDists ought to reconsider using CSI arguments in the first place.