A TSZ member recently made this claim:
Sanford’s recent paper with Cordova was rejected by multiple venues for bogus reasons. Everyone agreed the science is solid, but made up reasons why the paper should not be accepted.
And was then asked:
Name the venues and give their reasons for the rejection.
I’ve actually been asking this for literally years over at UD, although not lately. The claim that papers are rejected not because of the science but for some other reason is often made. But I’ve never seen any actual evidence of this. Has anyone?
In fact, it’s also the stated reason some at UD give for why they don’t even attempt to formally publish their work: they know it would simply be rejected for ideological reasons.
Yet despite many years of asking I have never once seen any evidence for this claim. So in this thread I’d like to see the paper that was submitted, the journal or other venue it was submitted to, and the rejection itself.
If nobody can supply any such evidence then this OP can be used in rebuttal in the future when such claims are again made, as we all know they will be.
Joe Felsenstein,
His argument requires formal mathematical proofs. I am really not very familiar with the math but hopefully Eric can comment.
This may be why the target sequence is used, and possibly needed, in the Weasel search. Natural selection is not really information about a target sequence. I think your idea of single-mutation adaptive improvements is viable for certain adaptations, but maybe not for constructing a new complex sequence where several changes to a single allele or non-coding DNA sequence are required.
Evolution doesn’t search for any specific target sequence. You’ll ride your sharpshooter fallacy hobbyhorse until its legs fall off.
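For reference, here is a minimal sketch of the Weasel scheme under discussion (the mutation rate, offspring count, and retention of the parent are my own illustrative choices, not Dawkins’ exact settings). The point at issue is visible right in the code: the target string appears explicitly in the scoring function, which is precisely the feature natural selection has no analogue of.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    # Fitness is defined by the target itself: count of matching positions.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character is independently replaced with a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(1)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while score(parent) < len(TARGET):
    generation += 1
    # Cumulative selection: breed 100 mutated copies, keep the best.
    # (The parent is kept as a candidate so progress is never lost.)
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=score)
print(f"reached the target in {generation} generations")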
The various theorems that Dembski et al. invoke are supposed to apply to any scheme of selection, including the very simple one I used in my 2012 example. How come they don’t stop the selection in that example from putting more and more Specified Information into the genome? They blatantly don’t stop that.
No one, not Dembski, not Marks, not Ewert, not Holloway, not Montañez — not even you — has explained why all these powerful and supposedly-relevant theorems can’t do that job. That’s what needs explaining.
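For anyone who wants to see the phenomenon concretely, here is a minimal sketch in the spirit of that 2012 example (not the original code; the population size, mutation rate, and additive fitness are illustrative choices). It tracks specified information as minus log2 of the fraction of all genotypes whose fitness meets the best yet attained, under a uniform chance hypothesis; under simple fitness-proportional selection that quantity just keeps growing.

```python
import math, random

L, N, GENS, MU = 100, 200, 200, 0.005  # loci, pop size, generations, per-locus mutation rate

def fitness(genotype):
    # Simple additive fitness: the number of 1-alleles.
    return sum(genotype)

def specified_information(k):
    # -log2 of the fraction of all 2^L genotypes with fitness >= k:
    # the specified information of "fitness >= k" under a uniform
    # chance hypothesis.
    tail = sum(math.comb(L, j) for j in range(k, L + 1))
    return L - math.log2(tail)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for gen in range(GENS + 1):
    best = max(fitness(g) for g in pop)
    if gen % 50 == 0:
        print(f"gen {gen:3d}: best fitness {best:3d}, "
              f"SI {specified_information(best):5.1f} bits")
    # Fitness-proportional selection of parents, then per-locus mutation.
    parents = random.choices(pop, weights=[fitness(g) for g in pop], k=N)
    pop = [[a ^ (random.random() < MU) for a in p] for p in parents]
```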
colewd,
You’re quoting from an article that addresses active information, which is very different from complex specified information.
In “LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information” (2010; preprint 2008), Dembski and Marks silently abandoned the botched Law of Conservation of Information that Dembski stated in No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). They reattached the term “Law of Conservation of Information,” but not Dembski’s essential notion of conservation, to some mathematical expressions involving active information. They made no reference at all to Dembski’s prior use of the term. I have emphasized this in the opening paragraphs of at least three OPs on this blog. Unfortunately, it seems that not even the adversaries of ID have gotten the message. So I won’t say that you’re oblivious because you’re a proponent of ID.
EricMH is presently pulling a similar switcheroo. Last year, he was authoritatively asserting, here, there, and everywhere, that algorithmic specified complexity (ASC) is conserved in the sense that algorithmic mutual information is conserved. Last November, I proved the contrary in “Evo-Info 4: Non-Conservation of Algorithmic Specified Complexity.” You can see in the present thread that EricMH continues to say that ASC is conserved. What you cannot see is that Eric has abandoned his prior claim, and is now assigning a very different meaning to the term conservation than he previously did. The new sense of conservation comes from the BIO-Complexity article by George Montañez that was published last December, i.e., several weeks after I posted my proof that ASC is not conserved in the sense that algorithmic mutual information is conserved. If you search the new BIO-Complexity article by David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity,” you’ll find a reference to Montañez’s article, but no reference at all to algorithmic mutual information.
ETA: I don’t mean to say that you’re quoting from “Life’s Conservation Law.”
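For readers trying to keep score, the quantity in dispute is algorithmic specified complexity. As Ewert, Dembski, and Marks define it, for an event x, context C, and chance hypothesis P:

```latex
\mathrm{ASC}(x; C, P) \;=\; -\log_2 P(x) \;-\; K(x \mid C)
% K(x | C) is the conditional Kolmogorov complexity of x given C.
```

The question that matters here is in what sense, if any, that quantity is “conserved” when x is passed through a deterministic or stochastic process.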
Joe Felsenstein,
Tom English,
It’s not our job to show they can’t. It’s your job to show that improved fitness is the result of new information, not just altered information.
The other issue is that, in your argument, you separated specification and information. This appears to be a change to Dembski’s argument.
I think the key is to nail down whether there is a difference between CSI and other versions of information that really invalidates the math. If Tom was using a straw-man version for his proof, then his counterargument is not valid.
LOL! Now evolution doesn’t provide new information, just altered information.
There’s no Creationist meme so stupid that Bill Cole won’t latch onto it. 🙂
which Joe did
Yes, it is. If you support a claim you are supposed to be able to back it up. More important for you personally: you are first supposed to understand what the claim actually is, and then support it. It’s really embarrassing if everybody can spot that you have no clue (hint hint).
I had written:
Let me qualify my statement a little. Dembski and Marks, in their Active Information argument, do allow gene frequencies to change and populations to move upwards on a fitness surface. So they would say that the Active Information argument is consistent with my numerical results. They then argue that the different fitnesses of the genotypes, in a pattern that allows this improvement of fitness, need Design to set them up (I disagree; ordinary physics can do that).
So their Active Information argument is not refuted by my numerical example, but also their Active Information argument does not prevent fitness from improving in my example.
Nonsense. The CSI arguments were supposed to show that changes of genotypes that increase fitness run into some limit defined by CSI calculations. My numerical example finds that there is no limit there in this simple and straightforward case.
But you do not see that. You want me to prove “newness”? No thanks, not interested.
Joe Felsenstein,
The limit is shown by gpuccio in his critique using the multi-safe example. What needs to be deeply understood is what 500 bits really means.
If you do not understand that, how is it you can claim more than a change in existing information?
Gpuccio’s example is one that assumes that there is no path to the specified sequences that has increasing fitness along it. So it is irrelevant to the general case.
Yes, I agree that “what needs to be deeply understood is what 500 bits really means”. Thanks for the arrogant lecture.
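For the record, here is the entirety of what 500 bits “really means” arithmetically:

```latex
-\log_2 P \;>\; 500
\quad\Longleftrightarrow\quad
P \;<\; 2^{-500} \approx 3 \times 10^{-151}
% i.e. roughly Dembski's universal probability bound of 1 in 10^150:
% an event this improbable under the chance hypothesis is declared
% beyond the reach of chance.
```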
Joe Felsenstein,
I truly apologize.
Dembski’s notion of CSI in NFL is not refined enough. It is technically flawed: the specification can form a harmonic series, which diverges, and he doesn’t formally prove his LCI, in particular the stochastic portion.
ASC is the variant that does have a formal conservation theorem, but it is only proven for a particular instance, and not under stochastic processing.
I prove a complexity conservation bound for stochastic processing in the Expected ASC paper I coauthored with David Nemati. So, I provisionally agree there is not the strict conservation bound that algorithmic information has, although it seems the bound I proved is too loose. Whatever the truth may be, there is clearly a conservation bound of some degree. That is why I keep referring to ASC: the formalisms are proven in that regard. Even better is active information, because Dembski does have a formal proof of conservation under stochastic processing, but I’m less clear on the exact mathematical relation between CSI and active information, so I don’t use it at this time.
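For context, the sense of conservation English proved ASC lacks is the standard nongrowth property of algorithmic mutual information: computable processing cannot appreciably increase it. Roughly, up to additive constants (my paraphrase of the standard statement, not a quotation of anyone’s theorem):

```latex
% For any partial computable f and any strings x, y:
I(f(x) : y) \;\le\; I(x : y) + K(f) + O(1)
% where I(x : y) = K(x) + K(y) - K(x, y). A similar bound is said to
% hold on average when f is replaced by randomized processing.
```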
But Felsenstein’s scrambled image example does not address the above concerns. Like I keep saying, he is picking a specification that is closely tied to the specific event, which is the unscrambling of the flower image. And rereading his argument, I also see that Shallit’s criticism, that the specification “when permuted back by G, it too is more like a flower” is somehow dependent on the event, is confused. The chance hypothesis is a uniform distribution over pixels, and F and G are not part of that chance hypothesis, so there is no dependence between the event and the specification. Shallit seems confused because he knows that in actuality the scrambled image was produced with F, and then conflates his knowledge with the chance hypothesis. Too much confusion going on here. Perhaps on a day when I have free time I can write an OP explaining the various ways the argument is confused.
I recognize that Felsenstein spent a lot of time on the article, and I really appreciate his patience in his interactions with me. But in all honesty I do not believe this is a good counterexample to Dembski’s argument. If I did think it was good, I would completely accept it and reject Dembski’s argument. Like I keep saying, ID for me is not a religious thing that I hold to because I’m grasping at straws to support my religious beliefs. As I mentioned to Gregory, the causality goes in the other direction for me.
For his fitness argument, Felsenstein’s fitness level example seems to be ill-posed. If the evolutionary process is guaranteed to increase fitness, then the probability of the organisms evolving past whatever fitness level, given enough time, is 1, so the CSI with this chance hypothesis is always going to be 0. It seems Felsenstein is not sticking to Dembski’s definition of CSI, but substituting his own understanding. For example, this paragraph:
“This is an increase of information: the fourfold uncertainty about the allele has been replaced by near-certainty. It is also specified information — the population has more and more individuals of high fitness, so that the distribution of alleles in the population moves further and further into the upper tail of the original distribution of fitnesses.”
This is clearly confused. If an event is near certain, then its complexity is near zero. Complexity is the upper bound on specified complexity, so specified complexity is reduced as evolution happens.
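To make explicit where the two calculations diverge, using the four-allele numbers from the quoted paragraph (this gloss is mine, not anything either side has written out):

```latex
% Felsenstein's calculation, under the original chance hypothesis H_0
% of four equally frequent alleles:
H(\text{allele} \mid H_0) = \log_2 4 = 2 \text{ bits},
\qquad
H(\text{allele} \mid \text{near-fixation}) \approx 0,
% a gain of about 2 bits. The calculation above, under a chance
% hypothesis H_1 that already includes the selection dynamics:
-\log_2 \Pr(\text{near-fixation} \mid H_1) \approx -\log_2 1 = 0
```

The numbers differ because the chance hypotheses differ, and that is the substance of the disagreement.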
Additionally, there is the same problem as with the scrambled flower example: his specification is clearly dependent on the chance hypothesis, because the chance hypothesis is guaranteed to move individuals to higher and higher specificity.
As for the rest of his article responding to Dembski’s NFL argument, there are more confusions, but they’ve been well addressed by others over the years. I am not personally finding these to be compelling counterarguments. Plus, in the NFL arena, Dembski has formally proven all his claims, so really there is nothing to argue at this point.
The one point I do grant is that Dembski’s work has not been applied to biology in a really formally rigorous way. But, Dembski’s NFL is mathematical, so it can be applied, at least in theory, although the practicality of doing this may be quite tricky to manage.
Again, sorry for the frustration, but I can only say what I have determined to be true to the best of my abilities. And at this point I cannot say either Tom English or Felsenstein has a coherent counterexample to Dembski’s conservation law. Of course I may be extra dense, or totally misunderstanding things, but I’ve put a pretty decent amount of effort and time into doing my best to understand their counterexamples.
EricMH,
I disagree with the arguments you have presented. Let me briefly reply to three of them: (1) Dembski’s Law of Conservation of Complex Specified Information, (2) Dembski’s 2005 Specified Complexity argument, and (3) Dembski’s use of the No Free Lunch theorem in his 2002 book. I will put these into three different comments, so as to allow the diligent reader coffee breaks.
Dembski’s LCCSI
No, I am not arguing that Dembski’s LCCSI is unproven or unprovable. It may quite possibly be provable. But even if the LCCSI were totally valid, my central assertion about it in my 2007 article was that it does not show that natural selection and other ordinary evolutionary processes cannot get the population into regions of the space of genotypes that have high fitness or high adaptation. That is because the LCCSI does not maintain one specification and then show that you needed to have that specification in the previous generation in order to have it in this generation. It does not show that, because Dembski’s sketched proof changes the specification in each generation.
And the minute we require that the specification (a region of high fitness, let’s say) stays the same from generation to generation, it is dead easy to show that one can get by ordinary evolutionary processes from outside the high-fitness set of genotypes to inside it. That is what my gene frequency example in my 2007 paper did: it showed fitness increasing from simple gene frequency change under selection. That is what my 100-locus numerical example at TSZ in 2012 showed even more dramatically.
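Here is how dead easy it is, as a minimal sketch (the selection coefficient, starting frequency, and threshold here are illustrative choices, not the actual 2007 numbers). The specification is held fixed throughout, and the ordinary deterministic selection recursion carries the population from outside the specified set to inside it.

```python
wA, wa = 1.01, 1.00   # fitnesses of alleles A and a
p = 0.001             # initial frequency of A: the population starts
                      # outside the specified (high-fitness) set
generations = 0
while p < 0.9:        # the FIXED specification: allele A at frequency >= 0.9
    # Standard deterministic selection recursion for a haploid locus.
    p = wA * p / (wA * p + wa * (1.0 - p))
    generations += 1
print(f"entered the specified set after {generations} generations (p = {p:.3f})")
```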
I will just note that Hazen, Griffin, Carothers, and Szostak’s “Functional Information” (2007; Szostak 2003) does too. It is very much like CSI, and ID advocates often point that out as a validation of their argument. But it differs in one big, critical way from Dembski’s 2002 argument, so Dembski’s LCCSI cannot apply to it: it does not allow you to change the function. That is, if you want to know how much FI there is in successive generations, you must compare using the same definition of function.
Conclusion: Dembski’s LCCSI may be provable as a theorem, but that theorem cannot be used to show that high fitness cannot be achieved unless you start with fitness that high. It may be a provable theorem, but it doesn’t do the job. And my argument in RNCSE in 2007 raised the crucial point about the specification changing, when it needs to stay the same to make any argument against the effectiveness of natural selection.
coffee break #1
EricMH,
Dembski’s 2005 Specified Complexity argument
In Dembski’s 2005 argument, he does not allow us to declare Complex Specified Information to be present unless we can first show that ordinary evolutionary processes cannot get us into that set of genotypes. And he does not say how we are to show that. This is a different procedure for arguing against the ability of natural selection et al. to do the job. It is like one of those jokes about a simple way to kill cockroaches that starts “First, catch the cockroach”.
Yes, in the deterministic gene frequency examples in 2007 and 2012, I used gene frequency changes that had probability 1 of increasing the fitness. If extended in obvious ways they would have probability 1 of getting to high fitness. Thus they can “catch the cockroach”. They are good counterexamples to the 2002 argument that the LCCSI prevents high fitnesses from being achieved.
And of course they do not invalidate the 2005 concept, because the 2005 concept is defined to exclude any case where natural selection et al. can be effective. It is also defined so that Specified Complexity is a totally useless concept: one must first prove the ineffectiveness of ordinary evolutionary processes in order to show that Specified Complexity is present, which one then uses to show that ordinary evolutionary processes are ineffective. (I think commenter keiths here at TSZ was the first to point this out, declaring that the use of SC to prove Design was “circular”.)
coffee break #2
Eric: Suppose CSI or ASC were applied to biology; in particular, to the mechanism of natural selection operating on a population where relative fitness is assessed with respect to the environment. By applied, I mean that the relevant states and dynamics were all mapped to the formalism of CSI or ASC in a way that allowed suitably trained people to make predictions from the formalism and compare them to what happens in real-world biology. That mapping would only involve stochastic functions in the sense you have specified in other threads.
ETA: I am assuming that both biologists and IDists agree that the formalism and its predictions have been correctly mapped to biology. By this I mean to exclude cases where there is no such agreement.
Now suppose further that the predictions from CSI or ASC were not in line with that reality, e.g., that some quantity that was predicted to be conserved or bounded was not found to be so in reality.
What would you conclude?
1. The model and hence ASC/CSI are not scientifically useful for understanding how population genetics change over time.
2. Science can only model stochastic processes but the real world cannot always be modeled that way because of a guiding intelligence which cannot be captured by stochastic models (or for other reasons). The results demonstrate that limitation.
3. Something else.
EricMH,
Dembski’s No Free Lunch argument
In his book No Free Lunch, Dembski makes use of Wolpert and Macready’s No Free Lunch Theorem, which is about the ineffectiveness of search when averaged across all possible ways one can associate function values with points in the search space. The W&M NFL theorem is valid. (And in fact I am told that a more general version of it was proven by Tom English a bit earlier).
Is the NFL theorem usable to show that in a typical biological case evolutionary processes, modeled by “greedy” uphill search for higher fitnesses in a space of genotypes, cannot raise fitness by much? The NFL theorem applies to such cases, but what it shows is average-case behavior over all ways of associating fitnesses with genotypes. One keeps the set of fitness values the same, but tries all ways of mapping them onto a fixed space of genotypes.
As about seven people argued immediately after Dembski’s NFL argument appeared, fitnesses of genotypes in models of evolving populations are not at all typical of random associations of fitnesses with genotypes. In my 2007 article I spent a lot of time on this, and I hope I was very clear in summarizing what these authors saw. In associating fitnesses with genotypes randomly, one makes a situation where a single mutation in a sequence carries us to a randomly chosen one of the fitnesses. If one instead mutates every base in the genome simultaneously, where does one get? The same place: to a random one of the fitnesses.
Thus for typical fitness surfaces in Dembski’s argument, a single mutation is just as serious as changing all the bases in the genome simultaneously. Now in real life, while mutations can be bad for you, often they are not nearly that bad. So biologically realistic fitness surfaces are a lot smoother than infinitely jagged “white noise” fitness surfaces. And those unrealistic jagged surfaces make the overwhelming contribution to the poor average behavior of search in the NFL argument. So the NFL argument describes average-case behavior, but biological cases are not typical of the ones in that argument.
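A toy comparison makes the point (both landscapes here are my own illustrative constructions, not anything from Dembski’s book): greedy single-mutation search on a white-noise fitness assignment stalls at the first local optimum, while the same search on even a crudely smooth landscape walks straight to the global optimum.

```python
import random

L = 20  # genotypes are L-bit strings

def white_noise_fitness(g):
    # An independent random value for every genotype: a one-bit mutant's
    # fitness is as unrelated to its parent's as a random genotype's is.
    return random.Random(g).random()

def smooth_fitness(g):
    # A correlated landscape: fitness is the fraction of 1-alleles, so
    # one-bit mutants have nearly their parent's fitness.
    return bin(g).count("1") / L

def steepest_ascent(fit, seed=0):
    g = random.Random(seed).getrandbits(L)
    while True:
        # Examine all one-bit mutants; move to the best one if it improves.
        best = max((g ^ (1 << i) for i in range(L)), key=fit)
        if fit(best) <= fit(g):
            return g  # local optimum: no single mutation improves fitness
        g = best

for name, fit in [("white-noise", white_noise_fitness),
                  ("smooth", smooth_fitness)]:
    print(f"{name:12s} landscape: final fitness {fit(steepest_ascent(fit)):.4f}")
```

On the smooth landscape the search reaches the global optimum (fitness 1.0). On the white-noise landscape it halts early at a local optimum, typically with thousands of fitter genotypes left unreachable by any single mutation.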
So yes, the NFL theorem is correct, but when applied by Dembski it describes average-case behavior, where overwhelmingly those cases are biologically bizarrely unrealistic.
The Active Information argument
I should just note that the later Active Information argument by Dembski and Marks (and Ewert) acknowledges that one might indeed have smooth enough fitness surfaces to allow ordinary evolutionary processes to reach high fitnesses. However it then argues that intervention by a Designer (at the outset) is necessary to set up these smooth fitness surfaces. I have disagreed with that part of their argument, and pointed out that processes affected by different genes are often separated in the organism by time and space. Then ordinary physics and chemistry predicts some smoothness of the fitness space without need for Design. The fact that single mutations are typically far less disastrous than mutating all sites in the genome is also predicted by those ordinary physical and chemical processes. (EricMH did not bring up this Active Information argument for Design in this thread, but it is relevant because it shows Dembski, Marks, and Ewert acknowledging that typical evolutionary surfaces might be smooth enough to allow substantial gain in fitness. So I thought I would mention this).
End of arguments, Eric’s comment successfully refuted, no more coffee needed. Actually I don’t like coffee and don’t drink it, but you might have needed it to get through all of this.
Joe Felsenstein,
It appears that there are two different arguments going on: one about fitness, and one about complex specified information and whether it can be generated algorithmically without information about the sequence being built into the algorithm.
colewd,
No, you’re wrong. Eric and I are discussing primarily William Dembski’s arguments about CSI, for which the specification is fitness or some component of fitness. Even the function in the Functional Information arguments makes sense only when more of it is better, fitness-wise.
The ASC arguments are different, do not necessarily involve fitness, and, as I hope to show soon in a post, do not make sense as arguments about evolutionary biology.
But right now we’re all talking about fitness and closely related quantities.
Joe Felsenstein,
Fitness and information are very different metrics: you can improve fitness while losing information, a subject to which Behe dedicated a book. Eric is arguing for information non-growth, not fitness non-growth. That is my opinion; I will let him weigh in.
colewd,
Whatever Eric is “arguing for”, the important question is whether specified information, with the specification being closely related to fitness, cannot get into the genome through ordinary evolutionary processes.
As for holding off and letting Eric “weigh in”, that would be very wise of you.