A nontechnical recap

David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2. Editor: William Basener. Editor-in-Chief: Robert J. Marks II.

In Section 4 of their article, Nemati and Holloway claim to have identified an error in a post of mine. They do not cite the post, but instead name me, and link to the homepage of The Skeptical Zone. Thus there can be no question as to whether the authors regard technical material that I post here as worthy of a response in Bio-Complexity. (A year earlier, George Montañez modified a Bio-Complexity article, adding information that I supplied in precisely the post that Nemati and Holloway address.) Interacting with me at TSZ, a month ago, Eric Holloway acknowledged error in an equation that I had told him was wrong, expressed interest in seeing the next part of my review, and said, “If there is a fundamental flaw in the second half, as you claim, then I’ll retract it if it is unfixable.” I subsequently put a great deal of work into “The Old Switcheroo,” trying to anticipate all of the ways in which Holloway might wiggle out of acknowledging his errors. Evidently I left him no avenue of escape, given that he now refuses to engage at all, and insists that I submit my criticisms to Bio-Complexity.

The notion that I must submit to Bio-Complexity is ludicrous, considering my past interactions with the editor-in-chief, Robert Marks, and a member of the editorial board, Winston Ewert. In 2011, I discovered that more than half of the introduction to Ewert’s thesis was copied from two articles by Dembski and Marks. There was no sign of quotation, and the articles were cited nowhere in the thesis. Stunningly, Marks was Ewert’s thesis advisor. That is, Marks approved a document that obviously plagiarized his own publications. And I reported the academic misconduct to EthicsPoint. Baylor University subsequently required Ewert to submit a revised thesis, including a preamble in which he admits to plagiarism in the original version. In all likelihood, Marks was censured privately. (I reported on the sordid affair here, here, and here.) The truly amazing aspect of Bio-Complexity is that more than half of the articles it has published were authored by Ewert. I have no more interest in legitimizing the journal than I have reason to expect fair handling of a submission exposing negligence in the review and editing of the article by Nemati and Holloway.

According to Nemati and Holloway (and their former advisor, Robert Marks), each and every measure of algorithmic specified complexity — there are infinitely many of them — is a quantification of the meaningful information in data. The first “fundamental flaw” in Section 4 is that the way in which the algorithmic specified complexity of the data is measured depends on how we refer to the data. Suppose that the expression y refers to the data, and that y = f(x). You need not know how f and x are defined to recognize that f(x) is another way of referring to the data. The algorithmic specified complexity of the data should not depend on whether we refer to the data as y or as f(x). This is something that most high schoolers grasp, and I am amazed to find myself explaining it here. According to the authors, the algorithmic specified complexity of the data must be measured in a way that depends upon the function f when the data is referred to as f(x). If you measure algorithmic specified complexity as they prescribe, then there are cases in which the result is infinite when the data is referred to as y, and the result is a negative number when the data is referred to as f(x).
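To make the dependence concrete, here is a minimal hypothetical of my own (not an example from the article). Suppose that p(x) = 1 for the particular input x, and that p(y) = 0 for the data y = f(x), where \mathcal{X} is a random variable distributed according to p. Referring to the data as y, the prescribed measure gives

\[ -\!\log_2 p(y) - K(y|C) = -\!\log_2 0 - K(y|C) = \infty, \]

whereas referring to the same data as f(x), the prescribed measure gives

\[ -\!\log_2 \Pr[f(\mathcal{X}) = f(x)] - K(f(x)|C) = -\!\log_2 1 - K(y|C) = -K(y|C), \]

which is negative whenever K(y|C) > 0.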

The second “fundamental flaw” is that an ostensible characterization of “conservation of complexity for ASC” turns out, when deobfuscated, to be a comparison of differently measured quantities of algorithmic specified complexity. For concreteness, let us say that f(x) is the data output by a process when x is the data input to the process. In any claim that a quantity of algorithmic specified complexity is conserved in the process, the measure must be the same for the output as for the input. However, Nemati and Holloway use the function f, which represents the process, to make the measure of algorithmic specified complexity different for the output of the process, expressed as f(x), than for the input to the process, expressed as x. It is absurd to suggest that a quantity of algorithmic specified complexity is conserved in the process when ASC is measured differently for the output than for the input. In fact, the ASC measure applied to the output is customized to the process.

Eric Holloway will be sorely tempted to seize upon parts of this vague recap, and twist them to suit his purposes. Please keep in mind that I supplied the mathematical details in “The Old Switcheroo,” and that Eric wants nothing to do with them. In the following appendix, I repeat three simple questions that he has yet to answer. The questions make it impossible for him not to see the two fatal flaws described in the preceding paragraphs. In the thread where I first posed them, he thus far has responded only with diversionary tactics. Everyone should understand that if Eric Holloway is operating in good faith, then he will answer the questions before launching into rhetoric.

Appendix: Questions for Nemati and Holloway

Question 1. Is your definition of algorithmic specified complexity precisely equivalent to the definition given by Ewert, Dembski, and Marks in “Algorithmic Specified Complexity,”

(A)   \[ASC(x, C, p) = -\!\log_2 p(x) - K(x|C), \]

even though you write I(x) in place of -\!\log_2 p(x)?

Question 2. The identity I(x) = ASC(x, C, p) + K(x|C) follows from your definition of algorithmic specified complexity. Is the following extension of your inequality (43) correct?

    \begin{align*} f\!ASC(x, C, p, f) & < I(x) = ASC(x, C, p) + K(x|C) \end{align*}

Question 3. You refer to the upper bound on fASC as “conservation of complexity for ASC,” so you evidently regard fASC as algorithmic specified complexity (ASC). I observe that

    \begin{align*}
    & f\!ASC(x, C, p, f) & & \\
    & \quad = I(f(x)) - K(f(x)|C) & & \text{[definition]}\\
    & \quad = -\!\log_2 \Pr[f(\mathcal{X}) = f(x)] - K(f(x)|C) & & \text{[by definition (39)]}\\
    & \quad = -\!\log_2 p_f(f(x)) - K(f(x)|C) & & [\text{notation: }f(\mathcal{X}) \sim p_f] \\
    & \quad = ASC(f(x), C, p_f), & & \text{[by definition (A)]}
    \end{align*}

where p_f denotes the probability distribution of the random variable f(\mathcal{X}). Have I correctly expressed fASC as algorithmic specified complexity?
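As a numerical sanity check of that identity, here is a toy computation of my own. Since K is uncomputable, I substitute arbitrary nonnegative placeholder values for K(·|C); the identity does not depend on them.

```python
import math

# Toy setup, purely illustrative: X ~ p over three strings; f is non-injective.
p = {'00': 0.25, '01': 0.25, '10': 0.5}
f = {'00': 'a', '01': 'a', '10': 'b'}
K = {'a': 3.0, 'b': 5.0}   # arbitrary placeholders for the uncomputable K(.|C)

def p_f(y):
    # Distribution of the random variable f(X), with X ~ p.
    return sum(p[z] for z in p if f[z] == y)

def fASC(x):
    # fASC(x, C, p, f) = I(f(x)) - K(f(x)|C), with I as in definition (39).
    return -math.log2(p_f(f[x])) - K[f[x]]

def ASC(y):
    # ASC(y, C, p_f) = -log2 p_f(y) - K(y|C), i.e., definition (A) with p_f.
    return -math.log2(p_f(y)) - K[y]

# fASC(x, C, p, f) equals ASC(f(x), C, p_f) for every x.
for x in p:
    assert abs(fASC(x) - ASC(f[x])) < 1e-12
```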

It follows from the foregoing that your “conservation of complexity” is equivalent to

    \begin{align*} ASC(f(x), C, p_f) & < ASC(x, C, p) + K(x|C). \end{align*}

As explained in “The Old Switcheroo,” it is absurd to change from one ASC measure to another, and speak of conservation.


71 thoughts on “A nontechnical recap”

  1. That’s interesting background info on your interactions with Bio-Complexity. It helps me to understand the tone of your posts on Eric’s work.

  2. “In Section 4 of their article, Nemati and Holloway claim to have identified an error in a post of mine. They do not cite the post, but instead name me, and link to the homepage of The Skeptical Zone. Thus there can be no question as to whether the authors regard technical material that I post here as worthy of a response in Bio-Complexity.

    True, but it is not the same thing as submitting a paper for review…

    “The notion that I must submit to Bio-Complexity is ludicrous, considering my past interactions with the editor-in-chief, Robert Marks, and a member of the editorial board, Winston Ewert. In 2011, I discovered that more than half of the introduction to Ewert’s thesis was copied from two articles by Dembski and Marks.

    Not so fast!

    The reason why they are reluctant to link to the proper sections of your publications, or reveal the details of them, is plain and simple:

    They do not want to continue to play the cat and mouse game with you anymore.
    Get it???

    Eric Holloway has already admitted he had made an error, but he doesn’t think it discredits the whole paper…
    He also thinks you have a tendency to make noise, rather than real and substantiated claims…If you think you have something of value to publish, do it in the right way!

    What else do you want?
    Take credit for Milosavljevic’s work?
    Or for making Dembski retire from DI? He is back with DI, you know?

    Do you realize how ludicrous some of your claims are?

  3. EricMH: I mostly comment here because I like the ego boost of ardent skeptics being unable to refute my ideas, especially in the case of Tom English.

    Sounds too much like Salvador for my liking. I think I’m going to have to suspend my donation to the Discovery Institute this year, perhaps donate it to re-elect Donald Trump.

    Thanks for your efforts Tom.

  4. “Eric wants nothing to do with them.”

    Yes, this seems to be a character trait of many in the IDM. Avoidance when challenged. EricMH continues to violate his own academic integrity in public, and is too irresponsible to be held to account, sheltered instead by the IDM & DI.

    Remove donations & make 2020 the year when the “Intelligent Design” Movement experiences massive shrinkage and diffusion of members to better platforms, & ideas.

  5. Mung: Sounds too much like Salvador for my liking. I think I’m going to have to suspend my donation to the Discovery Institute this year, perhaps donate it to re-elect Donald Trump.

    Thanks for your efforts Tom.

    Maybe the American Cancer Society, to help save the American hero Rush Limbaugh.

  6. Tom English: I’m glad to know that you noticed.

    There are facts, and there is rhetoric. You’re a facts guy and pretty much keep it that way. ID needs people like you and ignoring you because of the venue in which you choose to publish is not a good look.

    Hoping you are well.

  7. Mung: You’re a facts guy

    Mung: perhaps donate it to re-elect Donald Trump.

    Glad you still have your sense of humor Mung.

  8. BruceS: That’s interesting background info on your interactions with Bio-Complexity. It helps me to understand the tone of your posts on Eric’s work.

    I recognized Eric Holloway as an incredibly confused crank about eight years ago. I deemed him unworthy of mention until the Discovery Institute and Uncommon Descent began promoting him as an expert. The first part of my review, “Stark Incompetence,” establishes that, although the “Charles Darwin of intelligent design” managed to get him credentialed by Baylor University, Eric is very, very bad at math. It also establishes that the review process at Bio-Complexity was very, very bad.

    As for my tone, it is quite annoying that most of Eric’s comments are supported by nothing but his professions of his own expertise. In reality, his knowledge of ID theory is quite poor. (What amazes me most is that he conflates specified complexity with active information, though the two are, loosely speaking, opposite of one another.) Whenever someone manages to corner him with specifics, he vanishes. Eventually he returns to TSZ, but he never returns to threads where he is shown to be wrong. Nonetheless, he continues to tout his own expertise, and to claim that no one has ever mounted a significant challenge to his claims.

    The ID movement dubbed William Dembski and Robert Marks, respectively, the “Isaac Newton of information theory” and the “Charles Darwin of intelligent design.” It was not just for his plagiarism, but also for his fabulously dishonest response to a Panda’s Thumb article by Joe Felsenstein and me, that I dubbed Winston Ewert the “Charles Ingram of active information.” (I intentionally selected a third Brit. It was by accident that I came up with a second Charles.) Sticking with the naming convention, it’s a no-brainer that Eric Holloway is the Brave Sir Robin of meaningful information. (It is not by accident that I came up with a second English knnnnigget. The Chrome browser does not flag knnnnigget as a misspelling, and I presume that it is not by accident that the word is in the browser’s dictionary.)

  9. Tom English:
    You said elsewhere that you were waiting to make technical comments. Feel free to post them here.

    Putting aside the equivocation in Eric’s claim of conservation of ASC that you point out, have you checked the math in his derivation of the inequality in equation 40?

    I think that he does not prove it; instead, he only provides a special case where it holds. I think there are counter-examples, but if you are happy the inequality holds as a matter of math only, then I must be missing something obvious.

  10. Tom English: Whenever someone manages to corner him with specifics, he vanishes. Eventually he returns to TSZ, but he never returns to threads where he is shown to be wrong. Nonetheless, he continues to tout his own expertise, and to claim that no one has ever mounted a significant challenge to his claims.

    That is also my impression of the exchanges of his with you and Joe.

  11. Mung: You’re a facts guy and pretty much keep it that way.

    I should keep it more that way, because people who don’t grasp the essential facts latch onto my inessential remarks.

    phoodoo:

    Mung: You’re a facts guy

    Mung: perhaps donate it to re-elect Donald Trump.

    Glad you still have your sense of humor Mung.

    You like to argue with people who accept the aspects of evolutionary theory that are settled science. But I’ve never seen any indication that you understand the technical claims made by Dembski and his successors. You’ve fed only on pabulum, as best I can tell.

    I worked pretty damned hard at providing an elementary explanation of algorithmic specified complexity, in the second part of my review. To my knowledge, no one in the ID movement has ever done anything comparable. What the Wizards of ID do is to tell you what you ought to make of their mathematical analysis. They never attempt to explain the math to you.

    I’m sure that my efforts at explaining the math are mostly futile. People who are capable of understanding an elementary explanation do not care to work at understanding. But it is quite easy to see that I am dealing in mathematical particulars, and that few others are doing the same.

  12. BruceS: Putting aside the equivocation in Eric’s claim of conservation of ASC that you point out, have you checked the math in his derivation of the inequality in equation 40?

    I think that he does not prove it; instead, he only provides a special case where it holds. I think there are counter-examples, but if you are happy the inequality holds as a matter of math only, then I must be missing something obvious.

    Nemati and Holloway do not prove the inequality in

    (40)   \[I(f(x)) \leq -\!\log_2 p(x) = I(x). \]

    In the immediately preceding equation, they define I(f(x)) as shorthand notation for -\!\log_2 \Pr[f(\mathcal{X}) = f(x)], but do not define \mathcal{X}. The inequality in (40) holds if you assume that \mathcal{X} is a random variable following probability distribution p.
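    Spelling out why the assumption suffices, with toy numbers of my own (not from the article): the event f(\mathcal{X}) = f(x) includes the event \mathcal{X} = x, so \Pr[f(\mathcal{X}) = f(x)] \geq p(x), and hence I(f(x)) \leq I(x).

```python
import math

# Toy distribution and a non-injective f (my own illustrative values).
p = {'00': 0.1, '01': 0.2, '10': 0.3, '11': 0.4}
f = {'00': 'a', '01': 'a', '10': 'b', '11': 'b'}

def I(x):
    # Ordinary surprisal of x under p.
    return -math.log2(p[x])

def I_f(x):
    # "Surprisal of function application," definition (39):
    # -log2 Pr[f(X) = f(x)] with X ~ p.
    return -math.log2(sum(p[z] for z in p if f[z] == f[x]))

# Inequality (40): I(f(x)) <= I(x), because {X = x} is a subset of
# {f(X) = f(x)}, so the probability can only grow under f.
for x in p:
    assert I_f(x) <= I(x) + 1e-12
```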

    The three questions I pose at the end of the OP are carefully constructed to avoid debate about (39) and (40). Coming up with three simple questions that go to the heart of the matter was not simple.

  13. Tom English: The three questions I pose at the end of the OP are carefully constructed to avoid debate about (39) and (40)

    Right, but it’s 40 and another one I am missing something on. I know that is not the type of issue you are asking Eric to respond to, which is one reason why I waited to see if he would respond.

    The other reason is I am pretty sure I am just missing the point and will hence just feel foolish for asking. But I never let that stop me before, so here are my concerns:

    For 40, let x0, x1 be two bit strings with p(x0)=0.1 and p(x1)=0.9 and take f(x0)=x1 and f(x1)=x0. That would seem to break the inequality for one of x0, x1.

  14. BruceS: Right, but it’s 40 and another one I am missing something on.

    I was trying to tell you that something is missing from the article. I wouldn’t say that you’re missing something. Only by assuming something that Nemati and Holloway do not state, and should have stated, can we derive the inequality in (40).

    BruceS: I know that is not the type of issue you are asking Eric to respond to, which is one reason why I waited to see if he would respond.

    No, it’s absolutely fine to bring it up here. I was trying to tell you that you have identified something that truly is a mess. I want three straight answers from Eric before getting mired in the mess with him. But I don’t mind going there with you.

    BruceS: For 40, let x0, x1 be two bit strings with p(x0)=0.1 and p(x1)=0.9 and take f(x0)=x1 and f(x1)=x0. That would seem to break the inequality for one of x0, x1.

    Sorry, but I need for you to show your calculations. I suspect, but cannot be sure, that you are taking I(f(x1)) to be equal to -log p(f(x1)) = -log p(x0). What Nemati and Holloway do in (39), where they define what they call the “surprisal of function application” is to override the rules for evaluation of expressions. Ordinarily, if f(x1) is equal to x0, then I(f(x1)) is equal to I(x0). In the OP, I indicate that Nemati and Holloway say that the algorithmic specified complexity of the data should depend on the expression that refers to the data. But they actually begin by making the surprisal of the data depend on the expression that refers to the data. You’ve homed in on the part of the article where the surprisal is different for f(x1) than for x0, even though the two expressions refer to the same data (a string of bits), i.e., f(x1) = x0.
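    Your numbers make the contrast computable. Under the ordinary rules of evaluation, f(x1) = x0, so I(f(x1)) = I(x0), and the inequality in (40) fails. Under definition (39), your f is a bijection, so the event f(\mathcal{X}) = f(x) is exactly the event \mathcal{X} = x, and the “surprisal of function application” collapses to I(x). A sketch, transcribing your values:

```python
import math

# BruceS's example: p(x0) = 0.1, p(x1) = 0.9, and f swaps the two strings.
p = {'x0': 0.1, 'x1': 0.9}
f = {'x0': 'x1', 'x1': 'x0'}

def I(x):
    # Ordinary surprisal of x under p.
    return -math.log2(p[x])

# Naive reading: evaluate f(x1) = x0 first, then take the surprisal of x0.
naive = I(f['x1'])          # = I('x0'), about 3.32 bits
assert naive > I('x1')      # breaks the inequality in (40)

def I_f(x):
    # Definition (39): -log2 Pr[f(X) = f(x)], with X ~ p.
    return -math.log2(sum(p[z] for z in p if f[z] == f[x]))

# f is a bijection, so Pr[f(X) = f(x)] = p(x), and (40) holds with equality.
assert abs(I_f('x1') - I('x1')) < 1e-12
assert abs(I_f('x0') - I('x0')) < 1e-12
```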

  15. Tom English: I suspect, but cannot be sure, that you are taking I(f(x1)) to be equal to -log p(f(x1)) = -log p(x0)

    Yes, that is what I assumed. I don’t understand what Eric is doing if it is not that. But whatever it is, it is not how I am understanding equation 37.

    I suspect at this point that trying to understand Eric’s “math” is just going to confuse me more. So I will stop.

    Thanks for your help.

  16. Tom English:
    I do hope that Eric hasn’t been laid low by an intelligently designed coronavirus or an unintelligently guided Mack truck.

    Me too…
    Without him, nobody from DI is going to pay any attention to TSZ… unless my online math courses take me to your level… 😉

  17. Tom English:
    I do hope that Eric hasn’t been laid low by an intelligently designed coronavirus or an unintelligently guided Mack truck.

    Or J-Mack truck…😉

  18. Tom English: I should keep it more that way, because people who don’t grasp the essential facts latch onto my inessential remarks.

    This will always be the case. It takes a willingness to see when “inessential remarks” are what they are. Better to focus on the inessential if it can deflect from the matter at hand.

  19. Tom English: But it is quite easy to see that I am dealing in mathematical particulars, and that few others are doing the same.

    I think it’s fair to assume that most people who have an interest in ID theory are not all that interested in theoretical math, for which there is no great compelling reason to believe it has any relation to reality.

    As animals and beings don’t exist as x’s in parenthesis, how would we ever know if those x’s ever actually mean anything? If mathematical concepts don’t correspond to real entities, is there really such a thing as being right or wrong?

    If 1 doesn’t correspond to a singular object, does 1+1=2 mean anything? Can it be right or wrong? I think not.

    For example, if “1” refers to a mound of sand, and I add two mounds of sand together, then 1+1=1. So is the math right or wrong?

  20. phoodoo: For example, if “1” refers to a mound of sand, and I add two mounds of sand together, then 1+1=1. So is the math right or wrong?

    If you approximate the count of sand grains in the mounds to infinity, wouldn’t that be correct?

  21. Alan Fox: If you approximate the count of sand grains in the mounds to infinity, wouldn’t that be correct?

    It doesn’t matter, Alan. If two buckets of water add up to one bucket of water, you can’t claim the math is wrong, only that math without the right correlation to reality is meaningless.

  22. phoodoo,

    I’d agree that counting identical objects works. Counting things that you can place in a coherent category seems useful too.

  23. Simplification to make a useful model of reality seems a practical strategy in scientific research, too.

  24. Alan Fox:
    phoodoo,

    I’d agree that counting identical objects works. Counting things that you can place in a coherent category seems useful too.

    A lot of things you write just seem to be meaningless scrambles of words, which do nothing other than to obfuscate and deflect from the essential points. Again, I think you just like practicing typing. Or perhaps you are just intentionally meaningless. The Niche. Alabaster. Phlogiston. Systematics!

    The original point being that math is not right or wrong if it cannot be tied to something which we can identify as real, to the best of our abilities. Theoretical math is just like playing a board game with no rules. Trying to say that there is right or wrong is just making up a new game.

  25. phoodoo: The original point being that math is not right or wrong if it cannot be tied to something which we can identify as real, to the best of our abilities. Theoretical math is just like playing a board game with no rules. Trying to say that there is right or wrong is just making up a new game.

    I think mathematics is invented rather than discovered. It’s a very useful tool for modelling reality but in a contest between math and reality, we should go with reality.

  26. phoodoo: I think it’s fair to assume that most people who have an interest in ID theory are not all that interested in theoretical math, for which there is no great compelling reason to believe it has any relation to reality.

    You’ve really nailed it, phoodoo!
    I have tried to understand, without any bias, if this issue actually has any real application to reality. Since I used to be good at math (not anymore 😏), I had to dig deep. I have found that generally math can be cooked. Einstein did it. Or I should say, others cooked it for him.

    Everyone here has seen Joe cooking his math by adding natural selection to the equations, as if it were a holy grail, or omnipotent…

    In the end, both sides, Eric and Tom, admitted they don’t know if their math applies to biology… I think it doesn’t, because in DNA the nucleotides represented by the letters A, G, C, T are not just letters. They are vibrating chemical components: adenine, guanine, cytosine, and thymine, connected by hydrogen bonds, which are governed by quantum mechanics.

    https://www.google.com/url?sa=t&source=web&rct=j&url=http://iopscience.iop.org/1742-6596/597/1/012033/pdf/1742-6596_597_1_012033.pdf&ved=2ahUKEwiR3aiBmNbnAhUnh-AKHRCUATcQFjAEegQIAhAB&usg=AOvVaw05EU_TPCJjFxp45y8hc6Yq

    Recently, it has been confirmed what I, and many others, suspected: mutations are controlled by quantum mechanics, quantum jitters.

    http://theskepticalzone.com/wp/quantum-jitters-behind-dna-mutations/

    According to the law of conservation of quantum information, anytime a mutation happens, there can’t be an increase of information, even in case of base insertions.
    Why? Here it is from the paper:

    “The strength of the single base von Neumann entropy depends on the neighbouring sites, thus questioning the notion of treating single bases as logically independent units.”

    There can be only one conclusion… Mutations lead to rearrangement of quantum states in base pairs and loss of functional information in a gene(s) as a whole…
    Behe’s Darwin Devolves on quantum level…

  27. Alan Fox: It’s a very useful tool for modelling reality but in a contest between math and reality, we should go with reality.

    There’s never a contest between math and reality. As suggested by your own statement, it’s the mathematical model, not mathematics itself, that conflicts with reality. “All models are wrong, but some are useful.”

  28. Alan Fox: Simplification to make a useful model of reality seems a practical strategy in scientific research, too.

    Useful. There you’ve got it right. Models don’t have to be expressed in mathematical terms, but there are big advantages when they are.

  29. phoodoo: I think it’s fair to assume that most people who have an interest in ID theory are not all that interested in theoretical math, for which there is no great compelling reason to believe it has any relation to reality.

    There’s no “great compelling reason to believe” that specified complexity “has any relation to reality”? You’re telling me that “most people who have an interest in the ID theory are not all that interested” in specified complexity? Really‽

    What do you think that Dembski’s Law of Conservation of Information is, other than mathematics that supposedly constrains physical reality?

    Dembski’s approach to design detection was to “sweep the field of chance hypotheses”? What do you think a chance hypothesis does, other than to provide a probabilistic (mathematical) model of a process resulting in an observed outcome?

    Fans of ID theory absolutely love to say that it’s mathematically proven. They love it just as much as they love saying that evolutionary theory isn’t mathematically proven (laboring under the pathetic misconception that the “laws” of physics are proven mathematically to be true).

    Tell me what in ID theory is proven, apart from “conservation of information” theorems. If you prune the Dembskian branch of ID, then what is the theory in ID theory?

  30. Tom,

    I, for one, took phoodoo and J-Mac to be mounting a sweeping indictment of Dembski’s, and especially Holloway’s, contributions to ID.
    I was wondering if anyone else took it that way.
    With that discarded, what is left? Irreducible Complexity?

  31. DNA_Jock: I, for one, took phoodoo and J-Mac to be mounting a sweeping indictment of Dembski’s, and especially Holloway’s, contributions to ID.
    I was wondering if anyone else took it that way.

    That’s how I saw it. But I responded by posing “Is that really what you want to say?” questions.

  32. Tom English,

    Tom English: Tell me what in ID theory is proven

    That you can take a bunch of bacteria, put them in a jar, watch them replicate into a few trillion, and not much happens to them. They just stay being bacteria, over and over and over. And if this is how all of life around us was created, there won’t be enough time in ten universes. So, if randomly sitting around waiting for something to occur doesn’t work, the only logical alternative is ID.

    So tell me, what in evolution theory, if there is such a thing, has been proven? That finches sometimes have long beaks, and sometimes they have short ones? Or that moths come in different colors?

  33. phoodoo,

    The OP is the last part of a review of theoretical work in intelligent design — specifically, an article on algorithmic specified complexity. If you want to continue bashing ID theory, as you haplessly did above, be my guest. But you’re not going to dodge my questions by changing the topic. I’m not going there with you.

  34. Tom English: That’s how I saw it. But I responded by posing “Is that really what you want to say?” questions.

    There is usually a flip side to each story…In this case I’d ask:

    What is Darwinism left with if Joe and you can’t play the cat and mouse game of CSI anymore? The long-term evolution experiment that wasn’t really intelligently designed to prove evolution?😉

  35. DNA_Jock: With that discarded, what is left?

    In true science, one theory is usually replaced by a better one…
    Newtonian physics were replaced by Einstein’s relativity…
    The case of CSI is no different… Not only that, the new theory can be applied to biology and tested, which can only mean one thing…😉

  36. J-Mac: In true science, one theory is usually replaced by a better one…
    Newtonian physics were replaced by Einstein’s relativity…

    Creationism was replaced by evolution.

    The case of CSI is no different… Not only that, the new theory can be applied to biology and tested, which can only mean one thing…

    He said, trailing off at the precise point better communicators would attempt to make the point explicit …

  37. Tom English:
    phoodoo,

    The OP is the last part of a review of theoretical work in intelligent design — specifically, an article on algorithmic specified complexity. If you want to continue bashing ID theory, as you haplessly did above, be my guest. But you’re not going to dodge my questions by changing the topic. I’m not going there with you.

    You asked what is proven in ID, but you then refuse to say what is proven in your vague “evolution” theory. You are not going to dodge my question, I do declare!

    If there is no math to support evolution, just say so!

  38. phoodoo: You asked what is proven in ID, but you then refuse to say what is proven in your vague “evolution” theory

    Your inability to note what is proven in ID is, well, noted.

    phoodoo: If there is no math to support evolution, just say so!

    What difference does it make what support something has that you believe is wrong anyway?

    What impression do you think you give when you deflect reasonable questions about ID into questions about evolution? It’s almost as if ID is nothing more than criticizing evolution without any actual substance of its own.

  39. phoodoo: You asked what is proven in ID, but you then refuse to say what is proven in your vague “evolution” theory

    You nailed it again, phoodoo!
    CSI is a perfect example, so is LTEE, of how Darwinian hopes have never materialized in real science.
    If CSI CANNOT be applied to biology, what’s left of the theoretical argument other than math can be cooked? Vague speculative assumptions?

    In the LTEE, bacteria are still bacteria, and by Lenski’s own admission, they will remain bacteria for another 31 years and 65,000-plus generations…

    Why?

    Excuses are well known… “..Yeah…it was intelligently designed but not to prove evolution…”

    So why call it a long-term evolution experiment? Why not call it the long-term devolution experiment? Because that’s exactly what it has proven, and ID proponents, especially Behe, love it…

    Why even bother to do an evolution experiment, if your goal is to prove what’s already well known: genes continue to break and degrade organisms with time…

    phoodoo: If there is no math to support evolution, just say so!

    The funny thing about math is that it can be cooked…
    The more I research relativity, the more I’m convinced that it was cooked; well, at least some aspects of it… There is more and more experimental evidence that particles travel faster than the speed of light: photons, neutrinos, and now muons…
    The idea of space is fading not only because of QM….😊

  40. Allan Miller: Creationism was replaced by evolution.

    Darwin saw variations within genus, or kinds… We see it today too… I can go to a fox farm and prove it by breeding a red fox with a silver one within one generation.
    But no matter how long I keep breeding the foxes, they will remain variations of foxes often with degraded genes for fur thickness etc.

    Allan Miller: He said, trailing off at the precise point better communicators would attempt to make the point explicit …

    Your lack of comprehension is not my problem…
    Read the thread! If that doesn’t help, move to Panda’s thumb…

  41. J-Mac:
    But no matter how long I keep breeding the foxes, they will remain variations of foxes, often with degraded genes for fur thickness etc.

    No matter how long, huh? You’ve tried this experiment for a sufficient period to rule it out, have you?

    Your lack of comprehension is not my problem…

    No, but your lack of clarity is. See, you did it again with that very paragraph. Say something vague, trail off into ellipsis, hope someone thinks there’s a thought at the other end of it. There isn’t. Dot dot dot.

  42. Allan Miller: No matter how long huh? You tried this experiment for a sufficient period to rule it out, have you?

    The Russians have been doing it for over 100 years…
    They had to stop breeding some foxes (I can’t remember now which mix) because the quality of the fur deteriorated and some breeds developed diseases… Same problems as with dog breeding…

    Allan Miller: No, but your lack of clarity is. See, you did it again with that very paragraph. Say something vague, trail off into ellipsis, hope someone thinks there’s a thought at the other end of it. There isn’t. Dot dot dot.

    Who forces you to comment? Not me….

  43. J-Mac: But no matter how long I keep breeding the foxes, they will remain variations of foxes often with degraded genes for fur thickness etc.

    J-Mac: The Russians have been doing it for over 100 years…

    Well, since 1959. Math not your strong suit, I guess…
    But they make great pets, I hear…

