Intelligent Design advocates are still talking about CSI and determining the value of it.

**CSI measures whether an event X is best explained by a chance hypothesis C, or some specification S.**

So I’d like this thread to be a list of biological entities and the value of CSI that has been determined for each.

If no entries are made, then I believe that would demonstrate that CSI *might* measure X, Y, or Z, but that it never actually has done so.

Out of interest, what is the CSI of a bacterial flagellum?


Bill,

I brought up the “tornado in a junkyard” argument because you were employing a version of it in your discussion of

the origin of the flagellum. The flagellum was the topic of discussion, not OOL. Your response, amusingly, was to ask what was wrong with the tornado/junkyard argument.

How many times has it already been explained to you? How many times must it be?

Be gentle with Bill, he has a terribly debilitating problem with his memory. He can’t remember scientific explanations he was given for even a day, and often forgets in the span of a few posts. Like now.

In the meantime, still not a single example of the measured or calculated CSI of a biological entity.

We all know that none will be presented. What would be interesting is a discussion about why that is so. My take is that it is defined in a way that makes it impossible to calculate it in the first place.

Seconded, that may be somewhat more productive.

To me, it seems possible in principle to calculate SI for a biological entity, but it doesn’t perform the task that ID proponents want it to do. In the way that ID proponents use it, it is just an attempt (one of several) to formalize the intuition “this looks so complex, it must be designed”. This attempt is fueled by the naive assumption that one can somehow eliminate all potential natural causes, and then be justified in concluding that all the patterns and order that are left are the handiwork of the Designer. But one can’t eliminate the “chance hypothesis” for ALL natural causes, and the obvious denial of the impact of known natural causes doesn’t inspire confidence that IDers will ever be capable of doing this in an objective way. And even so: concluding the meddling of an intelligent agent by subtraction requires quite the leap of faith.

Interesting that the discovery of the mechanism for ratcheting information preceded the claim of common descent.

Mutations occur. Any doubters? Anyone claim that every living thing has copy errors in its DNA? Bueller?

Anyone deny that if those errors are non-fatal and non-debilitating, they will be passed on to the next generation?

That’s the ratchet. If you want to base a claim on statistics, calculate the odds that a hundred mutations will be exactly reversed. I believe Dollo wrote about this.
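Dollo’s point can be put into rough numbers. A back-of-envelope Python sketch, where all figures (genome size, substitution model) are my own illustrative assumptions rather than anything from the thread:

```python
import math

# Illustrative assumption: a human-sized genome, with each mutation
# treated as an independent random single-base substitution.
GENOME_SIZE = 3_000_000_000

# An exact reversal must hit the same site (1/GENOME_SIZE) and restore
# the original base (1 of the 3 possible substitutions at that site).
p_reverse_one = (1 / GENOME_SIZE) / 3

# Work in log space: p_reverse_one ** 100 underflows ordinary floats.
log10_p_reverse_all = 100 * math.log10(p_reverse_one)

print(f"one exact reversal: {p_reverse_one:.1e}")      # ~1.1e-10
print(f"100 exact reversals: ~1e{log10_p_reverse_all:.0f}")
```

Under these assumptions a hundred exact reversals come in at roughly 10^-995, which is the ratchet doing its work.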

Again, SI can be measured, in principle. CSI is simply present or absent. It is a statement of whether or not 500 bits of SI is present. Similarly, the weight of an object can be measured, but “is heavier than 1000kg” cannot be measured because it is not a measurement.

But that is quibbling. I agree with Corneel and petrushka that eliminating all possible natural causes is not possible. So the post-2005 definition of CSI by Dembski is unworkable. And, as I think keiths was first to point out here some years ago, unnecessary — if you can eliminate all natural causes, then the declaration that CSI is present simply summarizes that and adds nothing of interest.

There is also the pre-2005 Dembski definition of SI, which is essentially the same as Functional Information, where one uses a uniform distribution of genotypes, say all 1000-base DNA sequences. Jack Szostak and co-authors have actually published empirical measurements of FI. We can be pretty sure CSI is present if a degree of adaptation requires more than 500 bases of a particular sequence — a universe consisting only of monkeys with 4-key typewriters would not by now have had enough time to find that particular sequence.
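For concreteness, Szostak-style FI is easy to sketch numerically: it is just minus the log of the fraction of sequences that meet the activity threshold, so the 500-bit threshold corresponds to a functional fraction of about 1 in 2^500. A toy Python illustration (the 1-in-10^11 figure is invented for the example, not taken from Szostak’s measurements):

```python
import math

def functional_information(n_functional: int, n_total: int) -> float:
    """Szostak-style FI: -log2 of the fraction of sequences that meet
    the activity threshold."""
    return -math.log2(n_functional / n_total)

# Made-up illustration: if roughly 1 in 10^11 random RNA sequences
# binds a given target, the corresponding FI is about 36.5 bits --
# far below the 500-bit threshold discussed above.
fi = functional_information(1, 10**11)
print(f"{fi:.1f} bits")
```

The same function makes the threshold explicit: `functional_information(1, 2**500)` is exactly 500 bits.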

And, as Corneel noted above, the real problem with that pre-2005 CSI is that there is no proof that it cannot be achieved by natural selection.

If you could prove that incremental change is impossible — selection, drift, or both together — then you are left with Poof. No calculations required.

So why the squirrel of SI?

They need to sound sciency to bamboozle their stupid audience.

Can you say more about the role of the maximum entropy distribution?

As I understand the related principle of maximum entropy, it says that we should choose our epistemic probabilities based on the probability distribution with maximal entropy under all known constraints. So we need to be cognizant of the relevant constraints to ensure we are using all available evidence in our reasoning.

For example, when the principle is applied in Statistical Mechanics, the constraints include those implied by knowledge of the energy and number of particles in the system.

By analogy, in applying the principle to evolution, it seems we should include the constraints imposed by physics on the nature of the environment and its dynamics as well as on the biochemistry of life (including eg mutations in genomes).

Is that your understanding?
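One concrete anchor for the principle is Jaynes’ loaded-die example: with only normalization as a constraint the maximum-entropy distribution is uniform, and adding a mean constraint tilts it exponentially. A hedged Python sketch (the bisection solver and the 4.5 target are illustrative choices of mine, not anything from the discussion):

```python
import math

def maxent_die(target_mean: float, faces=tuple(range(1, 7))):
    """Max-entropy distribution over die faces subject only to a mean
    constraint (Jaynes' dice example): p_i proportional to exp(lam * i),
    with the multiplier lam found by bisection on the monotone mean(lam)."""
    def mean_for(lam):
        w = [math.exp(lam * f) for f in faces]
        return sum(f * wi for f, wi in zip(faces, w)) / sum(w)

    lo, hi = -20.0, 20.0
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mean_for(mid) < target_mean else (lo, mid)
    lam = (lo + hi) / 2
    w = [math.exp(lam * f) for f in faces]
    return [wi / sum(w) for wi in w]

print([round(p, 3) for p in maxent_die(3.5)])  # mean 3.5: uniform, all 0.167
print([round(p, 3) for p in maxent_die(4.5)])  # mean 4.5: tilted to high faces
```

The analogy to your question: the physics and biochemistry constraints play the role of the mean constraint here, and leaving one out silently changes which distribution counts as “maximally noncommittal”.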

I am unclear at this point as to what aspects of ID you are defending (I read Tom as expressing a similar concern). Is it one of the versions of CSI expressed by Dembski in some paper or book, is it that expressed in the other thread where you included the CSI of Joe’s function f, or is it that described in the Montanez paper (which may be Dembski’s NFL version as I read footnote 6 on page 12).

It might be helpful if you were explicit and then consistent on this point, or if you explain why you think you have been in posts in this thread and the other one you have recently posted in.

CSI as defined by Dembski (1996, 2002) as well as Functional Information (FI) do not make use of ASC or any of the Kolmogorov/Chaitin/Solomonoff machinery. Dembski has invoked them (later) but his examples for analyzing biological evolution are all traits correlated with fitness, not calculations involving simplicity of description.

Worse than that, I think. One has to have some scale of “bidirectional rotary motor-driven propellerness” and then calculate the probabilities of all possible structures that are as far out on that scale as the observed flagellum, or farther.

Perhaps we are confusing Complex Specified Information with Specified Complexity.

Joe Felsenstein,

The “I cannot measure it” canard is not going to fly for long. If I have my pet grown elephant with me, and the limit for animals getting on a plane is 200 lbs, I can safely decline to board him based on past experience and other data.

The ID guys made a mistake earlier, which you and Tom exploited: trying to prove evolution was impossible. All they need to do now is pivot and claim it is a poor explanation given facts such as nucleic acids and amino acids being arranged in a combinatorial space. Meyer is now doing this. Behe has always taken this conservative approach.

Your solution is to show that you can build mobility in a bacterium with fewer than 500 bits. To me this is a waste of time, as it is like trying to get an elephant past a 200 lb traveling requirement. Better to admit we don’t have a known natural mechanism.

keiths,

The tornado discussion is based on applications where you need to get from step A to step B without guidance. The flagellum is probably applicable here because even if there is an interim stage like a type 3 secretion system, you still have to explain the origin of that system. As you would have to explain a single jet engine appearing in the junkyard.

Where is natural selection in that analogy?

Rumraket,

At the type 3 secretion system, and when motility is established. If the assumption is true that the type 3 came first and its unmodified form works as a flagellar component.

Hi Bill, could you give your take on why, as it were, not a single elephant has been weighed yet. Is it hard? Is it useful? Is it possible at all?

In Bill’s “POOF the Designer made it appear thru magic!” world natural selection doesn’t exist. Usually he can’t even bring himself to type the words. 🙂

https://youtu.be/Vg5z_zeZP60?t=570

This seems to refute Eric’s take on Bell’s theorem. Am I too far off track here?

Corneel,

I think measurement is not required if my elephant is around average size. We can easily see that the edge of the range, at 2.5 tons, is 20x the limit.

Bill doesn’t need any of that stinkin’ scientific data crap for his ID assertions. God showed Bill DA TRUTH!! All he has to do is look at an object and tell it’s chock full of CSI!!

Heh, sorry, I should have expressed myself more clearly. The weighing of the elephant is of course a metaphor (which you introduced, incidentally) for determining the presence of CSI in biological entities. Could you contribute to the discussion of why the list is still zero items long?

Darn, I thought ID folks could instantly spot a metaphor 😉

It looks designed to me, no need to even attempt to measure it.

This discussion just gets more like surreal comedy every day.

Note:

‘canard’ = ‘duck’ in French

BK: What is the airspeed velocity of an elephant-laden duck?

KA: What do you mean? An African or Asian elephant?

BK: Huh? I… I don’t know that. Aaaaaargh!

I’ve done that. It was quite easy in fact. I did it with 250 bits.

I’d be happy to submit my procedure for your examination, once you explain to me how you will determine how many “bits” are present.

Shave and a haircut = 2 bits.

BIG LOL! Now you’re channeling my late Dad! 😀

Eric says “no kind of stochastic process that can generate ASC”. Bell says: “no local reality can exist given QM.” (I’m leaving out details for simplicity) [ETA: fixed to “local”]

So I think Eric is arguing that

looking for a non-stochastic aspect of reality to generate ASC

is just as valid a pursuit as

looking for non-local interpretations of QM.

But this analogy fails, because Bell’s inequality relied on QM, which had abundant experimental evidence when he created his proof. Furthermore, subsequent experiments have shown that the Bell inequality is violated in our world, just as QM predicts, providing more evidence for QM.

But ASC is at best a mathematical statement; there is no evidence or reason to believe that it makes any claim about or restriction on what is possible once we take into account the physics and biochemistry of our world.

ETA: There is no gaps argument from ASC because there is no gap, at least none that has been scientifically demonstrated.

BruceS,

That seems right to me. But my objection was to Eric’s claim about hidden variables. If I got Maudlin right, that’s not what Bell was on about with his proof.

Of course I realise I could be wrong. This is well above my paygrade.

Now reading Bell’s theorem’s wiki entry and it looks like I was wrong and Eric’s depiction of it was accurate:

ETA: Bell proved that local hidden variables couldn’t be added to “fix” QM, hence confirming non-locality as a fundamental aspect of the theory… unless I’m missing something yet again.

OMagain,

So now your best argument for evolution is to say that unless we can establish a precise calculation of how many bits are available, we assume evolution is true. Do you see this argument is a non-starter even for children?

We can easily estimate that it is much larger than 500 bits given current empirical data. Why are you even bothering to argue this point?

Just admit you don’t know and I will accept your position.

colewd, to OMagain:

No, you can’t. To calculate the CSI, you first need to determine the probability that the structure in question was produced by “Darwinian and other material mechanisms”. That means no tornado-in-junkyard bullshit.

Good luck.

Allan,

Sorry, but it’s only CSI if it exceeds 500 bits. 🙁

keiths,

We are estimating CSI, and we can easily do this given the empirical data available. If it were close you would have a point, but it is not.

The burden of proof is on those claiming evolution is true, and so you need to show that this calculation is less than 150 bits in all cases. Until you do this, the design guys have falsified the theory, and the reality is you cannot, so let’s just get on with life. Could it be a natural mechanism? Sure, but until you have one that stands up to criticism you don’t have a valid theory.

Bill,

You are the one claiming that the CSI of the flagellum is greater than 500 bits. The burden is on you to justify your claim. To do that, you need to evaluate the probability that the flagellum was produced “by Darwinian or other material mechanisms”. That is Dembski’s own stipulation; he finally learned that lesson in 2005, some 14 years ago. You are a decade and a half behind.

Tornado in a junkyard won’t cut it.

No, it’s that other evidence shows evolution is true independently of how well we know the total sequence space. The problem is YOU are saying this fatuous CSI gibberish proves evolution wrong because it magically translates into knowledge of sequence space even though it doesn’t.

We point out how and why it doesn’t, and that in order for it to accomplish what you want it to, you need to show how CSI entails that you know the topology of sequence space.

You then complain we are assuming evolution is true even though we don’t know the topology of sequence space. But we aren’t assuming it, we have other evidence independent of total knowledge of sequence space. So we’re back to square one: CSI does not accomplish what it was advertised to do, it does not show that evolution of X can’t happen.

In that case I estimate the CSI to be less than 400 bits, so evolution is possible. Demonstrate my estimate is wrong.

Yes, the locality result was Bell’s contribution. Maudlin goes to great pains in the video you posted (and his paper, which I’d seen) to separate that contribution from Einstein’s EPR results. Einstein, Podolsky, and Rosen showed that if you insisted on locality (as Einstein did), you needed to add something to QM, i.e. hidden variables.

Maudlin’s video is also good for debunking the common view that Einstein rejected the non-determinism of QM.

Eric’s statement of Bell is the common one, i.e. that he proved “no local hidden variables”. But he built on Einstein, as Maudlin shows.

Rumraket,

I agree with your comment, except that it isn’t knowledge of the sequence space that is required: it’s knowledge of the relevant fitness landscapes.

Also, this isn’t quite right:

CSI (2005 version) actually does show that X couldn’t have evolved, but only because you have to know that X couldn’t have evolved before you assign CSI to it. That makes CSI useless as a tool for disproving evolution. This has been obvious for years. Poor colewd is behind the times, as usual.

If I got it right, Maudlin explains that Einstein’s position was that it was either determinism or locality, but not both, and that he was mainly concerned about non-locality (what he called “spooky action at a distance”).

So is it correct to say that Bell proved mathematically QM’s non-locality and that it’s deterministic? Or just the former?

This is all so confusing. I thought that the very nature of the wave function implies that QM is inherently indeterministic, but apparently that’s not the case.

It’s a lot to try to absorb from one video presentation of a paper that was published for specialists in the QM Foundations! Here is my understanding:

1. Bell proved under certain assumptions that QM interpretations had to be non-local.

2. Einstein inferred (not assumed) determinism from how local hidden variables would have to behave to reproduce the perfect correlations that are possible when measuring two entangled systems.

3. The main interpretations have these characteristics:

Bohm (Pilot Wave): non-local, deterministic (trajectories for the hidden particles). We see probabilities only because we lack knowledge of initial conditions in general.

Objective Collapse (e.g. GRW): non-local, non-deterministic.

Many Worlds: deterministic at the level of the universe, and I think within a branch too, but this depends on how MW recovers probabilities, so I am not sure. Considered to be local (I think) within a branch; Bell’s result does not apply because he assumes only one outcome to an experiment, but all outcomes occur in MW (going by memory on that). I’ve seen (but not attempted to understand) attempts to reformulate Bell for MW.

(ETA: fixed deterministic within branch for mw)

If you mean the Schrodinger equation, which provides the time evolution of the wave function, it is a deterministic diffy q.

It’s how you account for the measurement “collapse” that brings in probabilities. But that collapse is not part of the formalism, only of the interpretation (and also the shut up and calculate approach used in the teaching shortcut to physics problem solving).

We should probably go back to sandbox to continue….

Things I am unclear on from Eric’s posts:

– which version of CSI is being defended

– taking Joe’s f as an example of a transformation through which CSI is supposedly conserved: how does including f itself in the specification or the CSI affect the Dembski CSI definition and claimed results?

– why Eric spent time defending CSI only to say it was flawed in this thread

– what and how the principle of maximum entropy is being used (is there an ASC paper on that to your knowledge?)

– what ASC argument against stochastic evolution is Eric referring to

– what Eric’s separate argument using KMI and Levin’s results adds to that ASC argument

BTW, as an intellectual exercise, on the XOR stuff for Joe’s permutation f in the other thread: I think it might be possible to specify any permutation as a sequence of bit swaps using the XOR swap algorithm (3 XORs per swap). If the permutation is 1->2, 2->3, 3->1, 4->4 for a four-bit string, then just do the swaps reading across. The masks have to include the original bit string, however, as well as 0s for positions not being swapped. But it’s getting near my bedtime, so tell me where I screwed up on that.
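The swap idea can be sanity-checked in code. A quick sketch of my own (a hypothetical illustration, not anything posted in the thread): positions are 0-based, the permutation is a src -> dst map, and list elements stand in for the bits:

```python
def apply_permutation_xor(bits, perm):
    """Apply a bit permutation {src: dst} by decomposing it into cycles
    and realizing each cycle as a chain of XOR swaps (3 XORs per swap)."""
    bits = bits[:]
    seen = set()
    for start in perm:
        if start in seen:
            continue
        cycle, nxt = [start], perm[start]   # walk the cycle from `start`
        seen.add(start)
        while nxt != start:
            cycle.append(nxt)
            seen.add(nxt)
            nxt = perm[nxt]
        # realize a k-cycle with k-1 XOR swaps, working from the cycle's end
        for i in range(len(cycle) - 1, 0, -1):
            a, b = cycle[i - 1], cycle[i]
            bits[a] ^= bits[b]   # classic XOR swap: safe because a != b
            bits[b] ^= bits[a]
            bits[a] ^= bits[b]
    return bits

# The example above, 0-based: bit 0 -> 1, 1 -> 2, 2 -> 0, 3 stays put.
print(apply_permutation_xor([1, 1, 0, 0], {0: 1, 1: 2, 2: 0, 3: 3}))
# -> [0, 1, 1, 0]
```

So the decomposition into swaps does work; whether it is the cheapest way to do it is another question.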

Bruce,

The CSI being discussed is Dembski’s 2002 version, though Eric keeps trying to change the subject. The 2002 version of CSI is the one which Dembski claimed was conserved — a claim that Eric has been unable to defend, as we’ve seen.

The original image X exhibits CSI with respect to the specification “looks like a flower”. The permuted image f(X) does not exhibit CSI with respect to that specification.

If you include f in the specification, you just invert the problem: X no longer exhibits CSI, but f(X) does.

Conservation fails either way.

I think he regrets trying to defend Dembski, but he doesn’t want to admit that, since he’s on record saying that Dembski was right. So he tries to defend ASC instead while pretending that he’s defending Dembski’s original CSI.

The approach, in other words, is “defend what you think you can defend, but pretend it’s what you wish you could defend.”

I’m unaware of any ASC paper on that topic.

He’s referring to this paper:

Bruce:

You’ll have to ask him. My impression is that he’s just trying to bluff his way out of a tight spot.

It seems wasteful to implement the permutation using bit swaps when the information is really only flowing one way around each cycle.

Every permutation can be decomposed into cycles. In your example, the cycles are 1->2->3 and 4. The latter doesn’t require any action, and the former can be handled as a rotation using fewer operations than a bit-swapping approach:

1. save the value of bit 3

2. move bit 2 to bit 3

3. move bit 1 to bit 2

4. move the saved value to bit 1

The larger the cycles, the greater the advantage of rotating versus bit-swapping. The only extra cost is a single register in which to store the saved bit value, and even that could be avoided depending on the instruction set.
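The four rotation steps above map directly to code. A sketch under the same hypothetical setup (0-based positions, a list copy standing in for the saved register):

```python
def apply_cycle_rotation(bits, cycle):
    """Realize one permutation cycle c0 -> c1 -> ... -> c_last -> c0
    (the bit at c0 moves to c1, etc.) with a single saved value:
    save the cycle's last bit, shift the rest along, restore at the start."""
    bits = bits[:]
    saved = bits[cycle[-1]]                   # 1. save the cycle's last bit
    for i in range(len(cycle) - 1, 0, -1):
        bits[cycle[i]] = bits[cycle[i - 1]]   # 2-3. move each bit one step
    bits[cycle[0]] = saved                    # 4. drop the saved bit at c0
    return bits

# Same 3-cycle as the earlier example: bit 0 -> 1 -> 2 -> 0.
print(apply_cycle_rotation([1, 1, 0, 0], [0, 1, 2]))  # -> [0, 1, 1, 0]
```

A k-cycle then costs k moves plus one save, versus 3(k-1) XORs for the swap chain, which matches the point that larger cycles favor rotation.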

Given that post-2005 version, maybe you only need 1 bit of SI to be something that can’t be explained by normal evolutionary processes. Not all of 150.

Well, that’s what I meant, which is why I wrote “topology of sequence space”. By that I mean the shape of the fitness landscape in that space.

🙂

Given that you have to prove the conclusion before you start.

I’m not sure how you turned what I said into that?

Here’s what I said:

I never said anything about assuming evolution was true. What I said was in response to this:

And that is what I am saying. I have built mobility in a bacterium with far fewer than 500 bits.

So, will you be assessing my claim? If so, how? What is it I need to provide to you?