Evo-Info: Publication delayed, supporting materials online

Introduction to Evolutionary Informatics, by Robert J. Marks II, the “Charles Darwin of Intelligent Design”; William A. Dembski, the “Isaac Newton of Information Theory”; and Winston Ewert, the “Charles Ingram of Active Information.” World Scientific, 350 pages. May 1, 2017 (originally announced for Jan 31).
Classification: Engineering mathematics. Engineering analysis. (TA347)
Subjects: Evolutionary computation. Information technology–Mathematics.

I cannot tell you exactly what will be in the forthcoming book by Marks, Dembski, and Ewert. I made it clear in Evo-Info 1 and Evo-Info 2 that I was responding primarily to technical papers on which the book is based. With publication delayed once again, I worry that the authors will revise the manuscript to deflect my criticisms. Thus I’m going to focus for a while on the recent contributions to the “evolutionary informatics” strain of creationism by George D. Montañez, a former advisee of Marks who is presently a doctoral candidate in machine learning at Carnegie Mellon University (advisor: Cosma Shalizi). My advice for George is that if he wants not to be taken for a duck, then he had better not walk like a duck and swim like a duck and quack like a duck.

Interestingly, young-earth creationist Jonathan Bartlett did an Amazon “customer review” of Introduction to Evolutionary Informatics in late January, after World Scientific had changed its online announcement to indicate that the book would be published in May. When I let the folks at Amazon headquarters know that they were misrepresenting the book as available for purchase, they went above and beyond the call of duty to correct the mistake. I’m interested in hearing from Jonathan whether he removed his “customer review” voluntarily. Of course, I’d like to know also what led him to post it in the first place.

I’ll venture to suggest that the book will be much like the supporting materials, which were revised extensively in January. The presentations on the Weasel, ev, and Avida models of evolution are self-contained. And they cast doubt on the advertising claim:

Built on the foundation of a series of peer-reviewed papers published by the authors, the book is written at a level easily understandable to readers with knowledge of rudimentary high school math.

Click on the “Mathematics” tab here, and you will see that the math — the easy stuff, as it happens — is something that almost everyone will skip. It’s there to impress, not to enlighten, the general reader. As I’ve said before, I would love to address the math, and not the rhetoric that the authors attach to it. Things would be much easier for me if the authors turned out to have magical teaching powers. But we have evidence now, and the evidence says no magic.

201 thoughts on “Evo-Info: Publication delayed, supporting materials online”

  1. johnnyb,

    I’m not going to let my response to you be buried in the clutter. And I’m going to improve somewhat on my previous remarks.

    1. You got yourself involved in false advertising of a not-yet-published book as available for purchase. The obvious benefit was to establish demand in advance of the first printing of the book. As a publisher, you surely understand the benefit.

    2. You learned that I filed a report of false advertising.

    3. You did not express regret for having involved yourself in an activity that was sleazy, and perhaps illegal. You instead resorted to diverting attention to my behavior, and trying to make it into something wrong.

    4. You blamed me for Amazon’s deletion of your “customer review” — which “just happened” to contribute to the false impression that the book was available for purchase — along with the comment I made on it. Anyone with an ethical bone in his body would prefer not to contribute to false advertising. Instead, you try to make yourself the victim, and me, not Amazon, the villain.

    5. Young-earth creationist that you are, you have responded to an absence of evidence by telling the story you prefer. I created doubt in my comment on your “customer review.” And you’ve allowed in this thread that you only took a weekend stab at the book, and did not read all of it. But, hey, there’s no evidence at Amazon now. So tell everyone that I said outright that you were lying. And make up a story about my motives.

    6. I am tempted to speculate on your motives, but I will not. I will observe simply that you pass the “duck test” for someone who has a guilty conscience.

    Nothing suits me better than for the first review of the book to come from a young-earth creationist who has published prior work of the authors, and for the review to be as pathetic as the one you posted — provided that it does not suggest that an unpublished book is available for purchase.

  2. keiths: It isn’t surprising to see Mung behaving this way, but I’m disappointed to see you defending his obvious lie.

    I have demonstrated rather conclusively that my program did not run out of memory for the reason you advanced. So you are wrong. Tom sees it. And I am not lying.

    Why can’t you see it?

  3. Since keiths has accused me of lying, and then repeated that accusation, I think it’s important to see just who it is that is being dishonest. Or perhaps keiths is just ignorant. I’d hate to think he was doing this intentionally.

    iter = 0
    %w[1 2 3 4 5 6 7 8 9 * / + -].permutation.each {|arr| iter += 1}
    puts iter
    # => 6227020800

    The above code iterates through every single one of the permutations and does not result in a NoMemoryError. So keiths is wrong. Three times wrong now, and absolutely digging in his heels and refusing to admit it.

    keiths: His program ran out of memory because he tried to store all the permutations before evaluating them.

    That’s demonstrably false.

    Mung: It doesn’t run out of memory after 1.685 billion permutations. That’s just you refusing to see the evidence that is before your eyes.

    My program doesn’t run out of memory after 6 billion permutations. One might think that if I was storing every single permutation that it would. But it doesn’t. So perhaps your premise is false.
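    The distinction at issue here — iterating permutations lazily versus storing them all — is easy to see in Python as well. Here is a minimal sketch (my own, standard library only), where itertools.permutations yields one ordering at a time, so memory stays flat no matter how many orderings are visited:

```python
from itertools import permutations
from math import factorial

chars = list('123456789*/+-')   # 13 symbols, so 13! orderings in total

# Lazy iteration: permutations() is a generator that yields one tuple
# at a time; nothing is accumulated as we walk through it.
count = 0
for p in permutations(chars):
    count += 1
    if count == 1_000_000:      # stop early for the demo
        break

print(count)           # 1000000
print(factorial(13))   # 6227020800 -- the full run Mung reports

# Materializing the whole list is what would exhaust memory:
# all_perms = list(permutations(chars))   # ~6.2 billion 13-tuples: don't.
```

    Counting through billions of lazily yielded permutations says nothing about what happens if the program stores them, which is the distinction the dispute turns on.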

  4. dazz: Again, WTF is that supposed to mean Tom?

    Nothing personal. 😉

    I think the “fake person” thing started here. Search that page and the following page of comments for “fake person.” And you should ponder the distinction between William J. Murray and “William J. Murray.” I’ve provided, in bits and pieces, a pretty good idea of what I have in mind. A rule-based system like “keiths” is unable to assemble the pieces, however.

    Issues of identity and authenticity are rapidly growing more complicated.

  5. Tom,

    You called dazz a ‘fake person’. He would like to know what you meant by that. Why won’t you tell him?

    You’re becoming more Mungish by the day.

  6. Tom English:

    The Discovery Institute has always objected strenuously to the observation that “intelligent design” creationism is creationism.

    Not just the DI, but so have the YECs like me. Creationism is creationism, not ID.

    Modern ID is Paley on steroids, which is closer to natural theology than it is to Biblical theology.

    from wiki:

    Natural theology, once also termed physico-theology, is a type of theology that provides arguments for the existence of God based on reason and ordinary experience of nature. This distinguishes it from revealed theology, which is based on scripture and/or religious experiences, and also from transcendental theology, which is based on a priori reasoning.

    YEC is mostly a Biblical theology. The appeal of natural theology as a complement to Biblical theology is that not every theist is comfortable starting from the assumption that the Bible is true, myself included. Starting from natural theology is more defensible. ID is a component of natural theology.

    FWIW, I dissociated myself from CSI, active information, etc. for at least a couple of years. I don’t claim ID is scientific or unscientific; I view it as TRANS-scientific. Since my friend Bill Dembski said he’s moving on from the ID movement, will this be his last ID publication?

    Behe’s route of biochemistry and molecular biology seems a more fruitful avenue for the defense of ID. Minnich’s also. Minnich had National Academy of Sciences members review some of his criticisms of Lenski. From my understanding, the reviews were favorable. Minnich didn’t advocate ID in his paper, but he did show where Lenski botched his interpretation of experimental results. Minnich got his experiment to do in 19 days what took Lenski 15 years; hence Minnich was a better evolutionist than Lenski!

    Minnich and the more technical aspects of Behe’s work (like his peer-reviewed work on loss of function mutations) are mostly ignored, but it’s some of the best ID stuff out there.

    I thought the Montañez and Marks work on polyconstrained functions was pretty good, some of the best ID material out there. But it remains ignored even by the ID community to this day. The same could be said of Minnich’s work.

  7. Object-Oriented Concepts with UML
    Advanced Java Programming
    Java Servlets and JSP
    Certificate Program in Professional Java 2 Programming
    Object-Oriented Programming Principles
    C++ Programming – Level 1
    C++ Programming Level – 2
    XML Introduction
    XML Advanced
    SQL – Structured Query Language
    Perl Programming
    Certificate Program in C Programming

    Lucky me though. I never had to actually write any working code to pass any of these courses.

    😀

  8. Mung,

    Why won’t you admit you were wrong?

    You should know by now that I have no problem admitting my mistakes, once I’m aware of them. I was wrong. I misdiagnosed the problem with your program.

    I think dazz got it right:

    Well, I translated that to Python and it runs just fine.
    In Ruby it’s the eval function that triggers the “leak”; using something like

    current_value = rand(100000)

    to generate the value instead, the program runs no probs.

    See how easy that was?

  9. keiths: Tom is defending your obvious lie.

    Show me where. (I’ve never checked to see whether he lied. This bit of making me into his defender, because I refuse to see him as black or white, is pathetic.)

    ETA: I’m asking you to show me where I defended him, not where he lied.

  10. ops = {'+', '-', '*', '/'}

    def is_legal_improved(str):
        """
        Replace 'range' with 'xrange', and eliminate first and last iterations.
        
        Also replace calls to 'is_op' with tests of membership in set 'ops'.
        Also use Boolean data type.
        Also reference last element of string with 'str[-1]'.
        """
        if str[0] in ops or str[-1] in ops:
            return False
        for i in xrange(1, len(str)-2):
            if str[i] in ops and str[i+1] in ops:
                return False 
        return True

  11. from itertools import permutations
    import numpy as np

    a = np.array(['1', '2', '3', '4', '5', '6', '7', '8', '9', '+', '-', '*', '/'])

    def brute(begin=0):
        expression = ''
        value = 0
        for p in permutations(a[begin:]):
            if is_legal_hacked(p):  # the comparison-based legality check described below
                e = ''.join(p)
                v = eval(e)
                if v > value:
                    value = v
                    expression = e
        print expression, value

  12. stcordova,

    Minnich got his experiment to do in 19 days what took Lenski 15 years, hence Minnich was a better evolutionist than Lenski!

    That deserves a big fat facepalm.

  13. stcordova: Minnich and the more technical aspects of Behe’s work (like his peer-reviewed work on loss of function mutations) are mostly ignored, but it’s some of the best ID stuff out there.

    I agree, it’s definitely the best ID stuff out there. 🙂

  14. So I let the script run overnight… Nine hours in and just shy of 3% of the permutations computed, the process had already hogged 140GB of virtual memory…
    It just kept growing linearly, so it probably would have hit the 5TB mark after the two weeks it should have taken to complete

    LMFAO

    ETA: actually I had an extra element in my char array, so with the proper input it should have taken about 22 hours and 350GB of virtual memory to complete, but still

  15. That deserves a big fat facepalm.

    It took Lenski 33,000 generations over 15 years to get one success; it took Minnich’s lab only about 100 generations, in a matter of weeks, to get 46 independent successes.

    Lenski made it look like the change was soooooo inaccessible when it wasn’t. Like it was a necessary long trail of trials and errors to finally achieve success. It wasn’t.

    This is somewhat like saying, “Hey, I got antibiotic resistance to evolve in 15 years when others have gotten it to evolve in a matter of weeks. I should be in the headlines for this. I found something so rare that it takes 15 years to achieve.” That’s what Lenski did, and he gets elected to the National Academy of Sciences for it.

    http://jb.asm.org/content/198/7/1022.full

    E. coli cannot use citrate aerobically. Long-term evolution experiments (LTEE) performed by Blount et al. (Z. D. Blount, J. E. Barrick, C. J. Davidson, and R. E. Lenski, Nature 489:513–518, 2012, http://dx.doi.org/10.1038/nature11514 ) found a single aerobic, citrate-utilizing E. coli strain after 33,000 generations (15 years). This was interpreted as a speciation event. Here we show why it probably was not a speciation event. Using similar media, 46 independent citrate-utilizing mutants were isolated in as few as 12 to 100 generations.

  16. stcordova,

    You think that doesn’t deserve another big fat facepalm?

    You miss the point of the LTEE by a country mile. Hint: it wasn’t to evolve citrate metabolism.

  17. stcordova: Lenski made it look like the change was soooooo inaccessible when it wasn’t.

    Then why did Lenski and Blount repeat the experiment with different strains, getting much faster evolution of Cit+?

  18. You miss the point of the LTEE by a country mile. Hint: it wasn’t to evolve citrate metabolism.

    Yes, and he demonstrated that if he doesn’t actively, intelligently select for a trait, the trait is less likely to evolve than when it is actively, intelligently selected for. The point is that the results of the experiment were hyped in a misleading way.

    When Minnich and company intelligently designed the selection pressure toward a goal, it ran circles around Lenski’s less intelligently designed, aimless directionless artificial environment. In fact, Lenski is wrong to represent his experiment as some sort of analog for natural selection, because nature wouldn’t let his artificially selected creatures live in the wild. It shows that when there is even less intelligent direction and interference, we should expect even slower evolution of new function.

    Gee, should someone get elected to the National Academy for showing that when we avoid intelligent selection, it takes longer to evolve something (if it evolves at all) than with targeted, intelligently designed selection? So why don’t we have the Cit+ mutation surviving in the wild? Haven’t we known these things since Darwin watched pigeon breeders effect, through intelligent selection, amazing phenotypic changes that don’t happen naturally?

    What happens naturally is that Cit+ doesn’t evolve even after millions of years in E. coli. Therefore what Lenski did is un-natural selection, and even then it doesn’t do much compared to intelligently designed selection.

    He should re-title his experiment “Long-Term Un-Natural Selection Experiments That Don’t Evolve New Function as Well as Minnich’s Intelligently Designed Selection Experiments.” Ergo, it is mostly a pointless 15-year, 4-million-dollar, taxpayer-funded exercise in showing us something we already knew, advertised as something other than what it really was.

    In any case, back to Tom’s OP. Minnich’s work I think is more relevant and defensible than esoteric approaches to ID that involve CSI and active information because it deals with real biological systems vs. things like Avida and Ev.

  19. stcordova: Yes, and he demonstrated that if he doesn’t actively intelligently select for a trait, it’s less likely the trait will evolve

    But it evolved, even without “intelligence”
    Of course your characterization of “intelligent selection” is ridiculous, but if you believe “intelligent selection” is the hallmark of ID, then ID was falsified right there.

    Thanks for playing tho

  20. stcordova,

    The evolution of citrate metabolism, I can only repeat, was merely an incidental occurrence. To find different or ‘better’ ways of achieving that non-goal has no relevance.

    It suits your long game to misunderstand the experiment. You’ll be high-fiving Pascal in no time!

  21. If all those strains were evolving Cit+ simultaneously and independently, repeatedly following precise pathways, irrespective of (“intelligent”) selective pressure, Sal would be dancing the victory dance. So what happens when the observations are 100% consistent with what would be expected to happen if RM+NS was at play?

    Victory dance anyway.

    BTW Sal, are you going to retract your claim that Lenski was trying to make it look like the change was soooooo inaccessible?

    http://www.pnas.org/content/105/23/7899.full

  22. dazz:

    So I let the script run overnight… 9 hours and just shy of 3% computed permutations, the process had already hogged 140GB of virtual memory…

    I initially found it puzzling that eval() was leaking so badly. This is not some obscure method, after all, so it seemed surprising that the leak hadn’t already been detected and fixed by someone out there.

    But then it occurred to me that the problem might be due to ill-formed strings. Mung’s program passes every permutation to eval(), legal or not, and handles the bad ones by doing a “rescue SyntaxError”. Most users of eval() won’t pass lots of illegal strings to it, if any, so it seemed plausible that a memory leak in eval’s error handling code might have gone undetected.

    To test that hypothesis, I ported the is_legal() check from my Python script and changed Mung’s program to call eval() only for legal permutations.

    That seems to work. I no longer see the leak.

    I’ll look into filing a bug report later, if the problem isn’t already known.
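    The pattern keiths describes — filtering before calling eval() rather than rescuing exceptions afterward — can be sketched in Python as follows. This is an illustration of the idea only, not the actual Ruby fix; the names is_legal and safe_max are my own:

```python
OPS = set('+-*/')

def is_legal(expr):
    """A string of digits and operators is a legal infix expression iff
    it neither starts nor ends with an operator and never has two
    operators side by side."""
    if expr[0] in OPS or expr[-1] in OPS:
        return False
    return not any(a in OPS and b in OPS for a, b in zip(expr, expr[1:]))

def safe_max(candidates):
    """Evaluate only the legal candidates, so eval() never sees an
    ill-formed string and its error-handling path is never exercised."""
    best_expr, best_val = None, float('-inf')
    for expr in candidates:
        if not is_legal(expr):
            continue            # never hand eval() an ill-formed string
        val = eval(expr)
        if val > best_val:
            best_expr, best_val = expr, val
    return best_expr, best_val
```

    With input like ['1+2', '+12', '1++2', '3*4'], only the two legal strings reach eval(), and ('3*4', 12) comes back as the best.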

  23. keiths:
    dazz:

    I initially found it puzzling that eval() was leaking so badly. This is not some obscure method, after all, so it seemed surprising that the leak hadn’t already been detected and fixed by someone out there.

    But then it occurred to me that the problem might be due to ill-formed strings. Mung’s program passes every permutation to eval(), legal or not, and handles the bad ones by doing a “rescue SyntaxError”. Most users of eval() won’t pass lots of illegal strings to it, if any, so it seemed plausible that a memory leak in eval’s error handling code might have gone undetected.

    To test that hypothesis, I ported the is_legal() check from my Python script and changed Mung’s program to call eval() only for legal permutations.

    That seems to work. I no longer see the leak.

    I’ll look into filing a bug report later, if the problem isn’t already known.

    Interesting. I did some googling and found a bug report for a memory leak in eval, but it was supposed to be fixed in version 1.8.something

  24. dazz: So I let the script run overnight

    Python or Ruby?

    What I posted above isn’t the way I really do things. It’s keiths’s basic approach, made to run a lot faster by using an iterator and by removing inefficiencies from his ‘is_legal’ predicate.

    Screen shot below of what I’m trying to improve upon (Sandy Bridge 2.5GHz).

  25. Tom,

    Why bother tweaking it? It’s a throw-away script designed to be run just once, and it takes less than ten minutes to run on my machine.

  26. Tom English: Python or Ruby?

    What I posted above isn’t the way I really do things. It’s keiths’s basic approach, made to run a lot faster by using an iterator and by removing inefficiencies from his ‘is_legal’ predicate.

    Screen shot below of what I’m trying to improve upon (Sandy Bridge 2.5GHz).

    Oh, I was referring to Mung’s Ruby script, the one that leaks memory. Interestingly, it runs almost twice as fast in Python.
    Is that 47.9 million expressions in 13min 56s?
    Mung’s algo (Python version) crunches 120 million expressions in that time on my 3.5GHz Sandy Bridge (I could try a 4.7GHz overclock just for kicks). It doesn’t check for legal expressions, but I’m pretty sure it would be even faster if it did

  27. Yep, added your “is_legal_improved” function and it runs at 24 million expressions per minute. Almost 3 times faster

  28. keiths:
    Tom,

    Why bother tweaking it? It’s a throw-away script designed to be run just once, and it takes less than ten minutes to run on my machine.

    Want me to say you’re lying, or want to take another shot at describing what you actually did?

  29. My bad — the version I ran had a hack to reduce the character set size. Using all of the digits, it takes longer than ten minutes.

    Still, the point remains. Why tweak a one-off script that doesn’t take long to run?

  30. dazz: Yep, added your “is_legal_improved” function and it runs at 24 million expressions per minute. Almost 3 times faster

    It’s even faster (and easier to code), though somewhat of a hack, to go with comparisons like str[i] < '1'. The operators are all less than the character ‘1’.

    But there’s no need to do that. Just generate valid expressions in the first place. It’s not hard.
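    Tom’s suggestion — generate only the valid expressions instead of filtering 13! raw permutations — can be sketched like so (my own construction, not code from the thread): permute the digits, choose which gaps between consecutive digits receive an operator, and permute the operators over those gaps. By construction no expression starts or ends with an operator, and no two operators are ever adjacent.

```python
from itertools import combinations, permutations

def legal_expressions(digits, ops):
    """Yield every legal infix expression that uses each digit and each
    operator exactly once. Each chosen gap between consecutive digits
    holds exactly one operator, so illegal strings are never produced."""
    n = len(digits)
    for dperm in permutations(digits):
        # gap i sits immediately before digit i; choose len(ops) distinct gaps
        for gaps in combinations(range(1, n), len(ops)):
            for operm in permutations(ops):
                placed = dict(zip(gaps, operm))
                parts = []
                for i, d in enumerate(dperm):
                    if i in placed:
                        parts.append(placed[i])
                    parts.append(d)
                yield ''.join(parts)
```

    For the full puzzle ('123456789', '+-*/') this yields 9! x C(8,4) x 4! = 609,638,400 expressions, under a tenth of the roughly 6.2 billion raw permutations the filter-after approach has to touch.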

  31. keiths: it takes longer than ten minutes.

    It takes a lot longer. And I’m not tweaking. I did what you should have done in the first place, and checked to see how much difference it made.

  32. …Sal would be dancing the victory dance. So what happens when the observations are 100% consistent with what would be expected to happen if RM+NS was at play?

    Lenski’s experiments are not RM+NS; they are RM plus Un-Natural Selection, since the environment was unnatural. What happens naturally in an artificial environment isn’t what happens naturally in a natural environment. You’re equivocating on the meaning of “natural.” It’s highly misleading.

    So Lenski’s crowing about how he can evolve new function 2,750 times slower than Minnich, and millions of times slower (if ever) than what happens naturally in nature, versus what happens un-naturally through un-natural evolution experiments like his LTEE. And this proves Darwinian evolution how? It shows the blind watchmaker isn’t as capable as directed, goal-oriented evolution of complexity through directed selection. After 33,000 generations, we have something not much different than what was started with. If that is extrapolated to human evolution, that’s not much change in a million years, like say developing lactose tolerance, not much else.

  33. stcordova,

    After 33,000 generations, we have something not much different than what was started with. If that is extrapolated to human evolution, that’s not much change in a million years, like say developing lactose tolerance, not much else.

    Why on earth would one extrapolate this experimental setup (12 flasks, bottlenecked daily, static environment, prokaryote) to human evolution?

  34. Tom,

    You can see how fast my script is running by simply looking at the output. I did, and it wasn’t worth tweaking.

    Again, why would I waste time tweaking a script that’s designed to be run once, in the background, and doesn’t take that long anyway?

  35. Allan Miller,

    Why on earth would one extrapolate this experimental setup (12 flasks, bottlenecked daily, static environment, prokaryote) to human evolution?

    Good point. What was the intended ROI (return on investment) on the 4 million spent?

  36. colewd,

    Good point. What was the intended ROI (return on investment) on the 4 million spent?

    I dunno. What does knowledge cost these days?

  37. Mung’s algo (Python): 24 million expressions per minute
    Tom’s algo (Python): 26 million expressions per minute

    @3.5GHz

  38. If experiments with bacteria hold no interest for creos, let’s submerge a few of them in citrate and see how long it takes them to evolve the ability to breathe.

  39. stcordova: since the environment was unnatural.

    So the environment in every controlled experiment is “unnatural”? Holy shit, just like that you destroyed most scientific fields. I guess every paper using CERN data, the double-slit experiment, etc., etc., is all bunk!

    That was a genius move Sal. Well played
