Thorp, Shannon: Inspiration for Alternative Perspectives on the ID vs. Naturalism Debate

The writings and life work of Ed Thorp, professor at MIT, influenced many of my notions of ID (though Thorp and Shannon are not ID proponents). I happened upon a forgotten 1961 mathematical paper by Ed Thorp in the Proceedings of the National Academy of Sciences, the paper that launched his stellar career on Wall Street. If the TSZ regulars are tired of talking and arguing about ID, then I offer a link to Thorp’s landmark paper. That 1961 PNAS article consists of a mere three pages. It is terse, and almost shocking in its economy of words and straightforward English. The paper can be downloaded from:

A Favorable Strategy for Twenty-One, Proceedings of the National Academy of Sciences.

Thorp was a colleague of Claude Shannon (founder of information theory, and inventor of the notion of the “bit”) at MIT. Thorp managed to publish his theory about blackjack through the sponsorship of Shannon. He was able to prove his theories scientifically in casinos and on Wall Street, and he went on to make hundreds of millions of dollars through his scientific approach to estimating and profiting from expected value. Thorp was the central figure in the real-life stories featured in the book
Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street by William Poundstone.


Poundstone’s book doesn’t actually go into detail about which formulas work in today’s markets, because once something works well, it stops working as soon as everyone else figures it out. Thorp’s only published work on how to make money on Wall Street became obsolete, and he had to find new avenues of success with secrets he will take with him to the grave. But for those interested, here is that book. As I said, it is now obsolete, but it showcases Thorp’s genius and insight. Used copies once retailed for $300 on Amazon, but Thorp later offered a PDF copy for free:
Beat the Market: A Scientific Stock Market System.

So if you want a change of pace from the usual arguments over ID, I offer Thorp’s work, and you can skip the rest of what is written below, since it is my version of ID inspired by Thorp and Pascal.

Though I had followed Thorp’s work on and off for 10 years, I only recently discovered Thorp’s 1961 article while preparing my own draft of a paper that encapsulates my version of Intelligent Design, presented at the Blyth Institute’s “Alternatives to Methodological Naturalism Conference” (AM-NAT 2016). I present to the TSZ readers a draft of a paper that I’m submitting as part of the AM-NAT 2016 conference proceedings. AM-NAT 2016 was organized by JohnnyB, who mentioned it at TSZ and UD. So if you want to argue ID instead of discussing Thorp’s work, that’s ok too.

I became fascinated with the body of math surrounding expected values partly as a result of Thorp’s work. Because of this body of math, I concluded that ID theory has focused too much on information theory and the 2nd law of thermodynamics, and I’ve argued this is a misguided approach. A clearer way to frame the probability arguments is to leverage expected values, the law of large numbers, and similar mathematical approaches, not the approach laid out by Dembski and his almost intractable conception of specification and CSI.

My approach to the question of ID at the personal and practical level has been more along the lines of Pascal’s wagering ideas than trying to make absolutist assertions about reality. Pascal’s wagering ideas were not limited to the theological questions of heaven and hell, but were originally developed to answer theoretical questions about fair values of wagers in gambling games. His solutions using his notion of “expected value” became foundational in probability and statistics, and the notion of expected (or expectation) values has found its way into the realms of physics, chemistry, finance, etc. I’ve framed the ID vs. Naturalism debate at a personal and practical level more in terms of what is to be gained if ID is right, what might be lost if ID is wrong, and how to move forward in science without formal resolution of the question of ID.

In my paper, I focused on a practical (not theological) dimension regarding the NIH’s half-billion dollar research investment into the ENCODE and RoadmapEpigenomics projects. Evolutionary biologist Dan Graur has labeled the ENCODE project leaders “crooks” and “ignoramuses” and likened the chief architect of ENCODE, Ewan Birney, to Saddam Hussein. Even money aside, there is an issue of bragging rights as to which group of scientists (ENCODE vs. Graur) should be praised and which group will have egg on their faces after all the dust settles.

To my surprise, the fight over ENCODE spilled over into a fight over what I thought was a rather innocuous article in the New Yorker that promoted the chromatin-centric viewpoint of the epigenome. I did not realize there was a camp (for lack of a better name, I’ll call them the dinosaurs or the transcription-factor proponents or gene-centrists) that was furious at the chromatin-centrists. Not only is ENCODE labeled as promoting an “evolution-free gospel” (the verbatim words used by rival scientists in a peer-reviewed publication), but its researchers are also not exactly liked by the gene-centrists for their chromatin-centric viewpoint of the epigenome. Creationists and IDists are more sympathetic to the chromatin-centrists, with the qualification that creationists and IDists in general are favorable to all forms of non-DNA somatic and transgenerational inheritance mechanisms that may reside outside DNA, be they organelle, structural, glycomic, or whatever other “-omic” inheritance devices may be out there, not just chromatin-based mechanisms.

I’ve qualitatively argued that the favorable wager in practical terms is on ENCODE, not on evolutionary theory. Most of the paper is a rehash of debates I’ve had here at TSZ, so the material is nothing new. It can be said that my paper is really a product of the debates I’ve had at TSZ. The interactions here have helped provide editorial and technical improvements. The paper is still a draft; the figures and formatting will be cleaned up by me, the reviewers, and the Blyth Institute before it is published in the AM-NAT 2016 conference proceedings. This draft still needs a lot of cleanup, so I’m posting it to invite improvements. Some of the material might later be reworked as reading material for high school homeschool students and/or creationist biology students in college. I don’t consider the paper a professional offering, but a way to codify some of my ideas for later reference.

For those tired of reading and arguing over what I’ve posted before and who have no inclination to read my paper, I provided the link to Thorp’s paper on the chance it may be of passing interest and a change of pace for some readers of this blog.

But for those interested in my paper, here it is:
Gambler’s Epistemology vs. Insistence on Impractical Naturalism: The Unwitting Half-Billion Dollar Wager by the NIH Against Evolutionary Theory.

ACKNOWLEDGEMENTS
Thanks to all at TSZ who have contributed to the refinement of the ideas in my paper. Thanks to the admins and mods of The Skeptical Zone for hosting my postings. TSZ has been a place where I’ve had the chance to receive critical and editorial feedback on materials I’m publishing in various venues.

PS

I had the opportunity to put some of Thorp’s theory into practice in the casino, and so did a group of Christians. Their story is documented in the DVD Holy Rollers. I’m listed in the credits of the Holy Rollers documentary.

Here is the trailer. Featured in the trailer are Pastor Mike and other pastors who were part of the Holy Rollers:

[Holy Rollers trailer]

303 thoughts on “Thorp, Shannon: Inspiration for Alternative Perspectives on the ID vs. Naturalism Debate”

  1. Of course natural selection in the presence of genetic drift has very similar mathematics to gambling in the presence of a slight advantage to one side. We have been there before:

    1. You argued that in the presence of genetic drift natural selection would be mostly ineffective. Your post (at Uncommon Descent) is here.

    2. I replied, showing that calculations disproved your assertions. My post (at Panda’s Thumb) is here.

    The math is remarkably similar. Let us all learn from that.

    (By the way, many of us who are old enough of course remember Thorp’s book Beat the Dealer. It came out when I was in college and every college student seemed to have heard about it. The relevant point is that natural selection rigs the bets.)

  2. There is no ID vs. Naturalism debate, Sal.

    At the very least, if experimental science cannot practically confirm macro-evolutionary transitions, evolutionary biology’s status as a scientific discipline might be deemed dubious at least relative to physics and chemistry. As evolutionary biologist Jerry Coyne himself said, “In science’s pecking order, evolutionary biology lurks somewhere near the bottom, far closer to phrenology than to physics”

    I’m surprised you did not manage to get your Darwin beating a dog quote in there.

    It is no surprise therefore that evolutionary biologists who are also naturalists are often inclined to insist biological systems are not that complex after all, that the complexity is an illusion. They argue that the convoluted, apparently clumsy ways living things go about their business are evidence against intelligent design and in favor of natural evolution.

    So they argue that things are not that complex and then argue that the convoluted (i.e. complex) thing is complex? Whatever.

    The exceptional property of life is illustrated by Haeckel’s claim that if the doctrine of spontaneous generation were false, then the emergence of life would have to be of miraculous supernatural origin.

    Oh, I thought Haeckel was a fraud and a con-artist? Now he’s the last word on the origin of life?

    Great, you’ve written a paper. You should know that’s only part of it.

    So, let’s peer review it!

    Insisting on the truth of naturalism in the disguise of evolutionary theory could impede scientific progress in the medical sciences if the whims of some evolutionary biologists like Dan Graur are realized.

    Whereas of course attending to the whims of Salvador Cordova, creationist, will bring progress like never before.

    The funniest thing about your ‘paper’ is that it’s like a cargo-cult version of science. That and an opportunity to poke at particular named people you have a problem with.

    All science so far!

  3. As I have said repeatedly, you confuse gene-centrism as an evolutionary position with gene-centric developmental models. Two different things.

    All cytoplasmic factors are rooted in DNA. There is no evidence of multi-generational inheritance of epigenetic states. What persists through the generations is the sequential information content of DNA – naturally cloaked, at any one moment, in its products.

  4. The math is remarkably similar. Let us all learn from that.

    (By the way, many of us who are old enough of course remember Thorp’s book Beat the Dealer. It came out when I was in college and every college student seemed to have heard about it. The relevant point is that natural selection rigs the bets.)

    Nice to hear from you as always, Joe, and thank you for all your responses to me over the last 8 years since our first exchange in 2008, which you referenced.

    One thing pointed out in Fortune’s Formula is the probability of gambler’s ruin, which is related mathematically to the likelihood of even a favorable trait drifting out of a population.

    The risk of gambler’s ruin is reduced by investors and skilled gamblers applying the Kelly criterion (aka Fortune’s Formula):

    https://en.wikipedia.org/wiki/Kelly_criterion
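
    For interested readers, here is a minimal sketch of the Kelly fraction in Python (my own illustration of the textbook formula, not Thorp’s code), for the simple case of a bet paying b-to-1 with win probability p:

    # Kelly criterion sketch: fraction of bankroll to wager on a bet
    # paying b-to-1 with win probability p. A non-positive result
    # means the bet has no edge and should be skipped.
    def kelly_fraction(p, b):
        return p - (1.0 - p) / b

    # Example: a 51% edge on an even-money bet (b = 1), roughly the
    # card counter's situation: wager about 2% of the bankroll.
    print(kelly_fraction(0.51, 1.0))  # 0.02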

    My point was that there is no guarantee a favorable trait will fix in a population (a drifting weasel, if you will).

    That said, I did mention you in my paper:

    One of the world’s most respected theoretical geneticists, Joseph (Joe) Felsenstein, authored the gold-standard graduate textbook Theoretical Evolutionary Genetics. In the book, Felsenstein explicitly mentions the ENCODE project and why its claims are at variance with the mathematics of evolutionary genetics,i which would essentially imply Graur’s assertion, “If ENCODE is right, evolution is wrong.”
    Thus, in the present day we are in a situation where orthodox textbook theory in evolutionary genetics is openly in conflict with the claims of highly respected laboratory researchers commissioned by the NIH. There would appear to be uncertainty in deciding where research efforts should be focused in the face of unresolved questions over evolution vs. ENCODE, but such situations are tailor-made for applying gambler’s epistemology.

    [The appendix will lay out a simplified description of Felsenstein’s and Graur’s arguments which are (ironically and for totally different reasons) supported by creationists like respected Cornell geneticist John Sanford.]

    I went on to summarize the problem in the Appendix and showed indirectly why creationists are (ironically) so enthusiastic about your work:

    Simplified Explanation of Genetic Entropy and Reasons for Dan Graur’s Complaint Against ENCODE

    A population can tolerate a certain number of mutations per individual per generation. The tolerable load of mutations is also known as “mutational load”. Graur’s complaint against ENCODE can be summarized as the problem of mutational load.

    Calculations of mutational load for humans were prominently put forward by Hermann Muller, who estimated the human genome can tolerate at most 1 bad mutation per individual per generation.i Muller won the Nobel Prize for his research into genetic deterioration due to radiation.

    If ENCODE is right, the functional genome would be on the order of 3 giga base pairs, and given accepted mutation rates, a functional genome of that size would imply on the order of 100 function-compromising mutations per individual per generation.ii

    Darwin and Spencer asserted “survival of the fittest” as an axiom of nature. But survival of the fittest occurs between siblings and cousins within a generation, not between ancestors and descendants across generations. If the children are substantially more damaged than their parents on average, no amount of selecting the best kids among their peers will result in genetic advancement over time, but rather deterioration, even if the axiom of “survival of the fittest” were true. A simplified conception of this problem is illustrated in figure 6.

    The problems posed by mutational load and other aspects leading to genetic deterioration have been summarized in a book by genetic engineer John Sanford at Cornell.iii Curiously, though Sanford is a creationist, he would likely agree with Graur and the evolutionary biologists: “If ENCODE is right, evolution is wrong.”
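
    A rough sketch of the mutation-rate arithmetic behind the appendix excerpt above (my own back-of-the-envelope numbers, assuming the commonly cited human germline rate of roughly 1e-8 mutations per base pair per generation, not figures taken from the paper):

    # If essentially the whole ~3 giga base pair (haploid) genome were
    # functional, roughly how many new function-compromising mutations
    # would each individual carry per generation?
    mu = 1.2e-8                   # mutations per bp per generation (approx.)
    functional_haploid_bp = 3e9   # "whole genome functional" scenario
    per_copy = mu * functional_haploid_bp   # ~36 per haploid copy
    per_individual = 2 * per_copy           # both copies: ~72
    print(per_individual)         # on the order of 100 new mutations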

    HA: caught an error in my draft,

    “with of” should be “with the claims of “

  5. As I have said repeatedly, you confuse gene-centrism as an evolutionary position with gene-centric developmental models. Two different things.

    Thank you for the comment; thankfully I didn’t make the claim in the paper. I did, however, make that statement in the OP. Thanks for the correction. 🙂

    All cytoplasmic factors are rooted in DNA.

    The sequences of the cytoplasmic proteins are rooted in the DNA, not their post-translational modifications, and certainly not all the details of the glycoprotein complexes, since the sequences and structures of the glycans are not coded by DNA.

    Along those lines, there is organelle inheritance. That is to say, remove all organelles of the same class from a cell, and the template is lost to construct further organelles; hence there are somatic, if not transgenerational, heritable features not stored in the DNA, which is no surprise. 1 gigabyte of memory (roughly the human genome) seems pretty insufficient to provide the developmental instructions to create all the organs and systems of an adult human. That intuition has been and will be borne out by experiment and observation.

    This was among the first papers to show heritability of information outside of DNA. It also shows why the heritable features of organelles were not readily detected: the information is so redundantly stored (as in numerous copies of organelles). One did not see the heritable potential of organelles until they were removed and then re-inserted into cells.

    http://www.cell.com/cell/abstract/S0092-8674(00)81284-2?_returnURL=http%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867400812842%3Fshowall%3Dtrue

  6. stcordova,

    The sequences of the cytoplasmic proteins are rooted in the DNA, not their post-translational modifications,

    Post-translational modifications are carried out by other proteins. The relationship between DNA and protein is not always one-step.

    and certainly not all the details of the glycoprotein complexes, since the sequences and structures of the glycans are not coded by DNA.

    I’m not aware of a persistent glycan modification that is not, on examination, caused by a protein.

    Along those lines, there is organelle inheritance.

    Organelles have DNA. Their substance derives partly from nuclear and partly from organellar genes. The organelle itself provides a framework for these gene products to attach to during organelle growth.

    That is to say, remove all organelles of the same class from a cell, and the template is lost to construct further organelles; hence there are somatic, if not transgenerational, heritable features not stored in the DNA, which is no surprise.

    Building upon a prior matrix does not invalidate DNA inheritance.

    1 gigabyte of memory (roughly the human genome) seems pretty insufficient to provide the developmental instructions to create all the organs and systems of an adult human.

    That’s because it’s not ‘memory’.

    That intuition has been and will be borne out by experiment and observation.

  7. Sal, you’re trying to create a conflict where there is none. Or, more correctly, you’re trying to create a conflict between sources of data where all the conflict is in the interpretation of those data. Many lines of evidence in evolutionary biology say that most of the human genome is junk. ENCODE found “function” in most of the human genome. That contradiction can be resolved if we realize that ENCODE’s definition of “function” does not preclude those sequences being junk in any ordinary sense. Random sequences bind transcription factors, etc. So where’s the problem?

  8. The problem is that Larry Moran is absolutely correct: this is about deflated egos. Some people just can’t live with it. Literally, their minds cannot deal with a genome most of which is evolutionary remnants that no longer contribute to normal organismal development and function.

  9. Regarding Sal’s paper, McAfee WebAdvisor says:

    My virus scanner didn’t pick anything up; however, it’s good to know my paper is triggering warnings that other users haven’t mentioned.

    Ok, in the interest of protecting TSZ readers, instead of downloading, here are the first 5 of 26 pages without much formatting:

    Gambler’s Epistemology vs. Insistence on Impractical Naturalism:
    The Unwitting Half-Billion Dollar Wager by the NIH Against Evolutionary Theory

    Salvador Cordova, Millennium Analytics

    Abstract

    The 2015 Nobel prize winner in chemistry, Aziz Sancar, may have unwittingly given life to Paley’s watch argument when he used the phrase “Rube Goldbergesque designs” to describe the nano-molecular clocks that provide timing to various processes in the human body. Other Rube Goldbergesque designs have been elucidated by National Institutes of Health (NIH) research initiatives such as the ENCODE and RoadmapEpigenomics projects which represent approximately a half-billion dollar total investment.

    The success of NIH initiatives and various other projects has drawn a bizarre reaction from some methodological naturalists like evolutionary biologist Dan Graur who said in 2012 “If ENCODE is right, evolution is wrong.” Graur’s comment is reminiscent of Haeckel who said in 1876: “If we do not accept the hypothesis of spontaneous generation, then at this one point in the history of evolution we must have recourse to the miracle of a supernatural creation.”

    An unconventional approach called “gambler’s epistemology” is introduced as a perspective to clarify why naturalism should not be equated with science. Gambler’s epistemology with its reliance on the notion of mathematical expectation shows that the intuitive perception that “life is a miracle” is not rooted in after-the-fact, ad-hoc probabilities, but is consistent with standard practice in science, and thus without formally settling the question whether God or supernatural entities actually exist, Haeckel’s unwitting assertion that the emergence of life must be of miraculous origin is at least closer to the truth statistically speaking.

    Gambler’s epistemology also shows that applying reward-to-risk analysis such as seen in the professional investment and gambling world could be a better practical guide in committing financial and human resources to scientific exploration than enforcement of unspoken creeds of impractical naturalism that may actually be detrimental to scientific discovery.

    INTRODUCTION
    Though it may be intuitively satisfying to attempt explanations of various phenomena in terms of accessible and repeatable mechanisms such as those deduced via the scientific method, there may be physical phenomena whose explanations may escape such reproducibility. One of the most prominent such examples is the emergence of life. Despite the evidence of Pasteur’s 1861 experiments refuting spontaneous generation, it was assumed by Haeckel as late as 1876 that it was a common and ordinary occurrence for life to emerge spontaneously from non-living matter. This belief was epitomized by his 1876 statement, “If we do not accept the hypothesis of spontaneous generation, then at this one point in the history of evolution we must have recourse to the miracle of a supernatural creation.”i
    The insistence by Haeckel and others that the structure of life could be explained by easily repeatable mechanisms was falsified experimentally. Even the most basic life forms are so complex and exceptional relative to non-living matter that some scientists even now argue the emergence of the first life on Earth may not be subject to ordinary and repeatable mechanisms as a matter of principle and thus outside direct scientific explanation.ii
    There is a lasting tribute to Pasteur’s experiments against spontaneous generation in the word “Pasteurized” on bottles of milk. The Pasteurization process is testament to the scientifically verified viewpoint that the emergence of life is so exceptional that it is not expected to happen again for all practical purposes. If there is a lesson to be learned from Haeckel’s flawed views on the emergence of life, it is that insistence on explanations for all phenomena in terms of repeatable mechanisms should not be conflated or equated with scientific understanding.

    If science supports the insight that a phenomenon is so exceptional it looks miraculous (if only statistically speaking, not theologically speaking), then this insight should not be suppressed merely because it could conflict with the claims of naturalism. The origin of the universe and the origin of life are events that are real, not repeatable and highly exceptional.

    If the strength of naturalism is based on axioms that all phenomena can be explained by repeatable and ordinary mechanisms, then when naturalism is confronted with phenomena not reducible to ordinary and repeatable mechanisms, it would seem naturalism reduces to a collection of vague faith-based assertions with no possible formal proof.

    Supposing for the sake of argument no God or supernatural forces exist, it would then be still hypothetically possible that a phenomenon could be so exceptional and singular that it cannot be reproduced and hence outside direct scientific verification as a matter of principle. But this raises a philosophical question (that is relevant but unfortunately beyond the scope of this essay), “at what point would a so-called natural phenomenon be so exceptional that it is statistically indistinguishable from a miracle of supernatural origin?”

    Extending Pasteur’s law of biogenesis that “life comes from life”, one might claim on experimental grounds alone that it is quite reasonable to assume animals emerge from animals, plants from plants, eukaryotes from eukaryotes, etc. But these experimental observations would suggest there is immutability of certain forms,iii and that macro-evolutionary steps are the result of exceptional rather than typical events. The observation of macro evolution remains a matter of inference (and some would say imaginative storytelling) rather than physical experiment.iv So in addition to the origin of the universe and the origin of life, one might include the origin of complexity in novel biological forms as the result of unique, exceptional, and non-repeatable events. At the very least, if experimental science cannot practically confirm macro-evolutionary transitions, evolutionary biology’s status as a scientific discipline might be deemed dubious at least relative to physics and chemistry. As evolutionary biologist Jerry Coyne himself said, “In science’s pecking order, evolutionary biology lurks somewhere near the bottom, far closer to phrenology than to physics.”v

    Gambler’s Epistemology
    An unconventional but hopefully fruitful perspective in framing the issue of naturalism vs. the scientific method is the perspective often adopted by professional gamblers and investors in realms where uncertainty is the norm in decision making. For the purposes of this paper, this informal perspective will be labeled “gambler’s epistemology.” Gambler’s epistemology is neither formally codified nor used as a term explicitly in the gambling and investment world, but is coined for the purposes of this essay as a label for a body of principles used by skilled gamblers and investment managers. Rather than offer a strict definition of gambler’s epistemology, it suffices to mention some of the elements of this epistemology relevant to the issue of naturalism and the scientific method.
    The principles of gambler’s epistemology are listed in numerous books (even if the term “gambler’s epistemology” is not used). But first, it might be helpful to highlight the success of some of the most successful practitioners of this epistemology. Edward O. Thorp was a professor of mathematics at MIT and the author of books like The Mathematics of Gambling,vi Beat the Dealervii and Beat the Market, A Scientific Stock Market System.viii He teamed up at MIT with Claude Shannon (the famous pioneer of information theory) during his successful foray into using computer and mathematical analysis to develop techniques to win money from casinos and Wall Street. Thorp, with Shannon’s support, published his first work on gambling in the prestigious Proceedings of the National Academy of Sciences in 1961.ix,x Thorp made hundreds of millions of dollars after starting an investment fund that applied his theories, and his pupil, Bill Gross, went on to manage a trillion-dollar hedge fund.xi

    Many decisions in the realm of human affairs are made with far less facts available than the decision makers would like. In the world of successful gambling and hedge fund investment, uncertainty is the order of the day. But uncertainty in one dimension does not necessarily imply uncertainty in another dimension. In fact, maximizing uncertainty of one aspect of a system can lead to near certainty about another aspect of a system. This paradox about reality has been exploited profitably in the business world particularly in skilled casino gambling, casino management, the insurance industry and investment arbitrage.

    The ability to gain near certainty about one aspect of a system despite uncertainty about another aspect of the same system is easily illustrated by applying the law of large numbers to a system of 500 fair coins. If we take 500 fair coins, place them in a jar, shake them vigorously and then pour them out on a table, we will induce maximum uncertainty in the heads-tails configuration of each coin. But given the binomial distribution, we can be practically certain the coins will not be 100% heads as we examine them on the table.

    Fundamental to the law of large numbers is the notion of mathematical expectation (or expected value) that was pioneered by mathematician Blaise Pascal in the middle of the 17th century. Expected value is the expected average of many outcomes, or the average behavior of a system composed of many parts. For example, the expected number of coins that are heads in a large system of coins randomly shaken is 50%, and the law of large numbers implies that deviations from that expectation become increasingly exceptional the larger the deviation. For 500 fair coins, 100% heads would be an astronomically large deviation from the expectation of 50% heads from a random (uncertainty-maximizing) process. 100% heads from a randomizing process (like shaking the coins in a jar and pouring them on a table) would be a statistical “miracle”.
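
    A quick numerical check of the 500-coin claim above (my own sketch, not part of the paper text):

    from math import comb

    # Chance that a fair, "uncertainty-maximizing" shake of 500 coins
    # comes up all heads, versus the chance of landing within +/-5%
    # of the expected 50% heads.
    n = 500
    p_all_heads = 0.5 ** n
    p_near_half = sum(comb(n, k) for k in range(225, 276)) * 0.5 ** n
    print(p_all_heads)   # ~3e-151: the "statistical miracle"
    print(p_near_half)   # ~0.97: ordinary outcomes cluster near 50% heads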

    Though it would take some work to rigorously formulate the notions of average vs. exceptional types of outcomes for a deck of cards stirred by a tornado, suffice to say a tornado is not mathematically and physically expected to spontaneously assemble a house of cards. If we happened upon a house of cards, we would expect that it wasn’t the result of an uncertainty maximizing process like a tornado. The perception that a house of cards is a special configuration relative to random arrangements of cards is not due to some after-the-fact projection of a pattern by our mind but can be derived from physical and mathematical principles of expectation. If one were to play devil’s advocate and argue that “mathematical expectation is itself an imaginary construct” in order to argue that there are not in actuality any special configurations of matter in the universe (like a house of cards or life), but rather “special” is an imaginary construct, one would have to abandon all scientific inferences that are based on the notion of expected results, which would effectively dispose of much of science.

    As more is learned about the complexity of life and the high specificity of its components and their connections to each other, it becomes increasingly harder to argue that life is the result of ordinary or typical events in a way that makes mechanical sense. It is much like arguing a 747 can be assembled by a tornado passing through a junkyard.xii This applies not only to the origin of life problem but also to creating functional biochemical systems that require emergence of numerous well-matched interacting parts.xiii
    Finally, the notions of expected value can be applied to decisions involving wagering and investment, whereby the best investment is chosen by wagering on the choice that has the highest expected value in terms of payoff — that is, the investment that returns the largest reward for being right, scaled by the corresponding probability of being right.
    For example, if there is a million dollar payoff for being right, but only a 1% probability of being right, the expected value payoff is 1,000,000 x 1% = 10,000. Whereas if there is zero payoff for being right, and a 99% chance of being right, the expected value payoff is 0.00 x 99% = 0.00. If the cost of placing a wager is a mere 100, over many trials it is better to wager 100 on the 1% long shot that offers great reward than on the 99% certain bet that offers zero reward.
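
    A minimal sketch of that comparison, using the numbers from the paragraph above (my own illustration):

    # Expected-value comparison: a 1% long shot paying 1,000,000
    # versus a 99% "sure thing" paying nothing, each costing 100 to play.
    def expected_value(payoff, prob):
        return payoff * prob

    cost = 100
    long_shot = expected_value(1_000_000, 0.01)   # 10,000
    sure_thing = expected_value(0.0, 0.99)        # 0
    print(long_shot - cost, sure_thing - cost)    # 9,900 vs. -100 per play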

    A business executive by the name of Don Johnson was able to win millions from casinos in part by exploiting marketing coupons and rebates that were structured with comparable odds and payoffs.xiv The trick of course was for Johnson to find and negotiate such absurdly favorable terms for himself against the casinos. Thorp and Gross used similar strategies in constructing their highly successful casino and hedge fund careers. Pascal himself extrapolated his wagering ideas to the realm of the theological in his controversial claim known as “Pascal’s Wager” over the existence of the Christian God.xv

    Because of the law of large numbers, an investment strategy based on selecting investments with the highest expected value payoff will yield on average the best return over a large number of trials. This procedure has been highly effective in business contexts and there is abundant literature on the topic and thus will not be covered here in detail except as it applies to the question of investing resources in scientific research into the complexity of life.

  10. Here are the next few pages or so:

    Evolutionary Biologists vs. the National Institutes of Health: The Half-Billion Dollar Exploration of the Epigenome

    Complexity, or the exceptional quality of physical systems, is not an artifact of our imagination, but can be derived from physical and mathematical analysis alone. The origin of life problem is a prime example of how the hope of naturalism to explain all phenomena in terms of ordinary and repeatable mechanisms was dashed. But less well known is the fact that, in the present day, the discovery of large-scale complexity in the epigenome of life is also challenging explanations solely in terms of ordinary and repeatable mechanisms.

    A set of projects known as ENCODE and RoadmapEpigenomics (which commands a combined research budget exceeding half a billion dollars) is at the forefront of efforts by the National Institutes of Health (NIH) to explore the genome and epigenome. This research has contributed to the development of FDA-approved treatments such as histone deacetylase inhibitors for diseased epigenomes that result in rare cancers.i

    But beyond the benefit to medical science, the insights derived from the ENCODE and RoadmapEpigenomics projects led the projects’ researchers to go out on a limb and pronounce that they believed the genome was 80% or more functional. Their declaration was summarized by the 2012 headline in the prestigious journal Science,

    “ENCODE Project Writes Eulogy for Junk DNA — This week, 30 research papers, including six in Nature and additional papers published online by Science, sound the death knell for the idea that our DNA is mostly littered with useless bases.”ii

    When the researchers declared their strong opinion that the genome was 10 times more functionally complex than previous estimates by certain evolutionary biologists,iii this induced a reaction epitomized by evolutionary biologist Dan Graur, who said, “If ENCODE is right, evolution is wrong.”iv Dan Graur also offered these thoughts:

    “the evolution-free philosophy of ENCODE has not started in 2012… the wannabe ignoramuses, self-promoting bureaucrats, and ol’ fashion crooks of ENCODE are protected from criticism and penalties for cheating by the person who gives them the money. Thus, they can continue to take as much money from the public as their pockets would hold, and in return they will continue to produce large piles of excrement that are hungrily consumed by gullible journalists who double as Science editors.”v

    Graur is a professor at the University of Houston and a fellow of the American Association for the Advancement of Science (AAAS). Graur and several co-authors, with the full sanction and cooperation of several fellow evolutionary biologists, published the paper, “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE.”vi The paper passed peer review by the evolutionary biologists serving as reviewers and editors, but alarmed onlookers with its overtly hostile and trollish tone, ill-suited to science and scholarship.vii Graur’s shrill rhetoric inspired a reporter for the prestigious journal Science to refer to him as “The Vigilante”.viii

    Graur’s tone might mislead one to think he’s an isolated individual, but he has supporters in the community of evolutionary biologists and population geneticists. One of the world’s most respected theoretical geneticists, Joseph (Joe) Felsenstein, authored the gold-standard graduate textbook Theoretical Evolutionary Genetics. In the book, Felsenstein explicitly mentions the ENCODE project and why its claims are at variance with the mathematics of evolutionary genetics,ix which would essentially imply Graur’s assertion, “If ENCODE is right, evolution is wrong.”

    Thus, in the present day we are in a situation where orthodox textbook theory in evolutionary genetics is openly in conflict with the claims of highly respected laboratory researchers commissioned by the NIH. There would appear to be uncertainty in deciding where research efforts should be focused in the face of unresolved questions over evolution vs. ENCODE, but such situations are tailor-made for applying gambler’s epistemology.
    [The appendix will lay out a simplified description of Felsenstein’s and Graur’s arguments which are (ironically and for totally different reasons) supported by creationists like respected Cornell geneticist John Sanford.]

    ENCODE, RoadmapEpigenomics, E4
    Subsequent to the success of the multibillion-dollar Human Genome Project (which enumerated the DNA sequences in the human genome and was completed in 2003), the question remained as to how the individual parts of the genome worked. The head of the Human Genome Project and now current director of the NIH, Francis Collins, predicted it would take centuries to understand how each part of the genome works.x Among the first steps into this exploration was the NIH ENCODE project, whose mission was to start cataloging the parts of the genome and the roles of the individual parts.

    The ENCODE project commanded a budget of 288 million dollarsxi and began in 2003. The RoadmapEpigenomics project has a budget of 300 million dollarsxii and began in 2008. There is a peripherally related project in the planning stages called E4 (Enabling Exploration of the Eukaryotic Epitranscriptome) with a projected budget of 205 million dollars.xiii

    The ENCODE project developed many experimental techniques and established databases which are now being continued in the follow-on RoadmapEpigenomics project. There are about 40 classes of experiments performed by ENCODExiv, some of which are depicted in figure 1 below:

    Figure 1.i A small sampling of the experiments conducted by ENCODE on a stretch of DNA (the multi-colored bar toward the bottom). The gray bubbles represent classes of experiments. Many of the experiments, such as WGBS (whole genome bisulfite sequencing), RRBS, methyl450k, ChIP-seq, and RNA-seq, are relevant to exploring the human epigenome.

  11. The experimental findings of the ENCODE project startled the researchers, since they suggested substantially more of the genome was functional (80% or more) than predicted by evolutionary theorists (less than 10%).i This functionality includes DNA’s involvement in a conceptual entity known as the epigenome.
    DNA is widely viewed as read-only memory (ROM), but the ENCODE project added to the view that DNA also acts as a component of cellular random access memory (RAM). Figures 2 and 3 show an amusing coincidence in the superficial structure of man-made RAM and biological RAM.

    Figure 2. Primitive man-made random access memory (RAM). Notice the “beads on a string” like structure. Each “bead” is where a single bit of memory is stored.i

  12. Next section:

    Figure 3.i,ii Biological RAM. Actual electron micrographs of histone/nucleosome complexes of DNA. Its structure is referred to as “beads on a string”, where the “beads” are the histone/nucleosomes and the “string” is the DNA connecting the beads. The Stem Cell Handbook refers to this complex as part of the “random access memory” of the cell. Amazingly, even though the chemical and physical mechanisms of memory in this biological RAM are different from those of the man-made RAM depicted in figure 2, they coincidentally have a comparable appearance.

    Though the definition of the epigenome is in constant flux and dispute, a segment of researchers generally define the epigenome as including methyl modifications to the DNA itself, chemical modifications to the histones around which the DNA wraps, and even the non-coding RNAs that are involved in cellular operation.iii The word “epigenetic” refers to isolated parts of the epigenome, and the epigenome refers to the sum total of epigenetic parts. The ENCODE and RoadmapEpigenomics projects are generally sympathetic to the definition of the epigenome that emphasizes methyl modifications to DNA and chemical modifications to histones.

    Furthermore, the Stem Cells Handbook considers the genome as an analogy to ROM and the epigenome as an analogy to RAM.iv Whatever the labels, epigenomic research commands a large amount of financial interest in the medical community with an estimated therapeutic market of 8 billion dollars in 2017 and much more in the future.v

    The discoveries by the ENCODE and RoadmapEpigenomics projects contributed to the understanding of the epigenetic RAM. Given there are about 100 trillion cells in the adult human, 16.5 million nucleosomes per cell, and at least 40 bits of information per nucleosome (see figures 4 and 5),vi a back-of-the-envelope calculation yields an approximate total RAM in an adult human on the order of sextillion (10^21) bits. Some of this RAM is believed to be utilized in the brain for learning and cognition, by the body in self-healing and development, and in many yet-to-be discovered functions required to implement the various organs and systems in the body.

  13. stcordova: My virus scanner didn’t pick anything up; however, it’s good to know my paper is triggering warnings that other users haven’t mentioned.

    Ok, in the interest of protecting TSZ readers, instead of downloading, here are the first 5 of 26 pages without much formatting:

    There are numerous places to share documents online. Google Docs, Scribd, and Dropbox come immediately to mind.

  14. Conclusion (technical appendixes later):

    There are actually separate epigenomes and transcriptomes for each cell, each slightly unique. In addition to the epigenome, in the last few years there has emerged the notion of the epitranscriptome, which represents chemical modifications to RNA transcripts.i For a eukaryotic organism to manage such a vast amount of information suggests a degree of complexity that is incompatible with current evolutionary genetics. The appendix will go into some of the technical details of this inference. But suffice it to say, if evolutionary genetics cannot explain the complexity of the epigenome and epitranscriptome, it is not currently (perhaps not even in the future) feasible to explain certain complexities in biology in terms of repeatable and ordinary mechanisms, and thus it weakens the claims of naturalism, which rely on such mechanisms.

    CONCLUSION
    Although it is a philosophical question as to what point a phenomenon passes a threshold of being either natural or supernatural, a sufficiently extraordinary set of events might be perceived as indistinguishable from a supernatural miracle even if hypothetically there were no God or gods to speak of. The exceptional property of life is illustrated by Haeckel’s claim that if the doctrine of spontaneous generation were false, then the emergence of life would have to be of miraculous supernatural origin. Questions of God and the existence of the supernatural are outside the scope of this paper, but the resolution of the question of God and the supernatural are not needed to realize that naturalism is not to be equated with science.

    The case for naturalism is weakened if a phenomenon exists that would hint that astronomically exceptional circumstances were involved in its emergence. It would appear life is one such phenomenon. If the specialness of life doesn’t challenge naturalism, at the very least it challenges the ability to explain life in terms of ordinary and repeatable processes.

    It is understandable that some methodological naturalists find the idea of miraculous-looking complexity in life incompatible with a naturalistic narrative that insists miraculous events had no role in the emergence of life and its complex features. But such sentiments are speculations, and though superficially sounding like scientific explanations, such assertions should not be conflated or equated with actual science, and hence investment decisions in committing resources to scientific exploration should not be constrained if the potential outcome is unfavorable to naturalistic philosophy.
    If ENCODE is right and the genome is more functional than evolutionary biologists have argued, but no money is invested in research friendly to ENCODE’s claims, medical science and the chance to alleviate human suffering risk being permanently compromised. On the other hand, if money is invested to prove that Dan Graur and the evolutionary biologists he represents are right, there will be no benefit to the human medical condition even if they are right. Thus, according to Pascal’s wagering theories, in light of these payoffs, and provided there is some small probability that ENCODE is right, money should be wagered on ENCODE, and indeed that is where the money is being wagered by the NIH on behalf of US taxpayers.

    Insisting on the truth of naturalism in the disguise of evolutionary theory could impede scientific progress in the medical sciences if the whims of some evolutionary biologists like Dan Graur are realized. The National Science Foundation (NSF) has invested 170 million dollars in unresolvable evolutionary phylogenies of little or no utility to medical science.ii To date, no therapies based on the 170-million-dollar phylogeny project have come to market. By way of contrast, with the help of research like ENCODE, epigenetic therapies are already being delivered to patients, with more such therapies in the pipeline. Therefore, a gambler’s epistemology that seeks to maximize reward in the face of uncertainty would seem a superior approach to blind insistence on impractical naturalism.

    Figure 4. From the RoadmapEpigenomics project. A depiction of DNA conceptually uncoiled from the cell nucleus (left) to reveal the “beads on a string” architecture of chromatin. Chromatin is composed of DNA and histones around which the DNA wraps. The “bead” is called a nucleosome and consists of DNA wrapped around 8 histones. The nucleosomes occur at a frequency of about every 200 base pairs of DNA.

  15. Figure 5.i,ii A tabulation of the known chemical modifications to histone tails in the DNA nucleosome complexes. Each nucleosome occurs regularly for about every 200 base pairs of DNA. This figure shows 42 possible chemical modifications to the histone core of a nucleosome, but there are likely more modifications. “Me” means methylation, “Ac” acetylation, “Ph” phosphorylation, “Ub” ubiquitination. Each modification can be approximated as representing 1 bit of information. In truth, a chemical modification shown in the above diagram can sometimes represent more than one bit of information because, in cases such as methylation, there are up to 3 different degrees of methylation.

  16. What a giant wall of total horseshit. It is still true, it has remained true ever since this was first pointed out to you, that nobody has said it isn’t worth it to try to find out which specific parts of the genome are functional and what exactly it is they do. The fact that most of the genome is defective transposons doesn’t mean all of them are defective (for example), so of course it pays to find out which ones are and which aren’t. And of course it pays off to see if sometimes mutations in defective (junk) transposons can resurrect some kind of biochemical activity that interferes with normal organismal function.

    This has been pointed out to you twenty fucking times at least. Why do you show no sign of getting this elementary concept? Why do you persist in propping up this grotesque fucking lie that evolutionary biologists are somehow opposed to genome research (as opposed to grandiose press releases)? Honestly, you are starting to disgust me.

  17. The fact that most of the genome is defective transposons doesn’t mean all of them are defective (for example), so of course it pays to find out which ones are and which aren’t. And of course it pays off to see if sometimes mutations in defective (junk) transposons can resurrect some kind of function that interferes with normal organismal function.

    This has been pointed out to you twenty fucking times at least. Why do you show no sign of getting this elementary concept? Why do you persist in propping up this grotesque fucking lie that evolutionary biologists are somehow opposed to genome research (as opposed to grandiose press releases)? Honestly, you are starting to disgust me.

    Thanks for your comments.

    So then what difference does it make that ENCODE researchers assume the entire genome is functional? Will it hurt the way business is done? I don’t think so.

    Apparently the ENCODErs’ offense is that they believe the genome is functional and they promote it. That’s what Graur is really complaining about.

    But according to gambler’s epistemology, if there is no loss for making an assumption but something to gain, then make it. If the ENCODE assumption inspires researchers to look at the genome more, then make that assumption.

    I feel then I’ve proven my point: there is no harm in dispensing with Graur and other Evos’ insistence that the genome is junk.

    FWIW, Dawkins and Lawrence Krauss assume ENCODE is right. No harm done. See. What’s all the fuss about? It’s not scientific, it’s philosophical. It is insistence on naturalism posing as science. Is it too hard to say science may not absolutely know how much or where the complexity of biology came from?

    However, I do think there is potential harm in disparaging the fine work of the NIH.

    Thank you anyway for your comments. I didn’t think my paper was that disgusting. I know you’re a huge fan of Dan Graur, as I saw you were one of the most frequent commenters praising him on his website, so I can see why you would be upset.

    Personally, I think he’s doing a great service to the creationist community by his shrill rhetoric against the 400 or so researchers in 26 of the world’s finest scientific institutions.

  18. Sincere apologies to the readers regarding the download issues, and thanks to everyone for their forbearance. Thanks to everyone, critics included, for reading. The list of 60 references is in the original paper, but will not be posted at TSZ since the references can’t be linked to the body of my text anyway. I will be talking to some InfoSec guys tonight about the issue that Keiths brought up. Thanks to Keiths for alerting me to the problem.

    I wish to add that creationists actually agree with Dan Graur that “If ENCODE is right, evolution is wrong,” except they think ENCODE is right. I’ve highly recommended Joe Felsenstein’s book to creationist students of biology and genetics, and I encourage study and mastery of Graur’s mutational load arguments (which were really Muller’s and company’s).

    The diagram below is a not-so-good reproduction from the paper, and in light of the quality, I may have the graphics redone.

    That said, here is the explanation from the paper which I think encapsulates Graur’s (and Joe Felsenstein’s) viewpoint of the mutational load problem. It begins with a simplified haploid model, but can be extrapolated to more complex models. It does show the essential problem of mutation load.

    Figure 6. Conceptual diagram of inevitable genetic deterioration. The bubbles represent individuals in a generation and the red stars are detrimental mutations. The parent from generation 1 hypothetically has no mutations. Each of the kids in generation 2 has lots of mutations the parent didn’t have. Even supposing the kid in generation 2 with the fewest mutations is selected to spawn the kids in generation 3, it will pass on its defects to its children in addition to adding new bad mutations to the kids in generation 3. The number of detrimental mutations increases with each generation. Granted, this is a simplified single-parent (haploid) model, but it is provided for conceptual purposes. The more complex models (such as those developed by Muller and others afterwards, which leverage the Poisson distribution) arrive at the same essential conclusion; furthermore, their calculations yield an estimate that, on average, more than 1 bad mutation per individual per generation cannot be tolerated by the human species.
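
    For readers who want to play with the idea, here is a toy simulation sketch of the figure 6 model (my own illustrative code with made-up parameters, not material from the paper):

    import numpy as np

    # Haploid lineage: each generation, several offspring inherit the
    # parent's deleterious mutations plus a Poisson number of new ones,
    # and the least-mutated offspring founds the next generation.
    def least_loaded_lineage(generations=25, offspring=4, new_per_kid=3.0, seed=0):
        rng = np.random.default_rng(seed)
        load, history = 0, [0]
        for _ in range(generations):
            kids = load + rng.poisson(new_per_kid, size=offspring)
            load = int(kids.min())   # pick the "fittest" sibling
            history.append(load)
        return history

    print(least_loaded_lineage())  # the mutation count keeps climbing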

  19. Just for fun, I’m going to make a prediction and an associated paper trade based on concepts of expected value and some of Thorp’s hedge fund theory.

    Thorp independently derived the Black-Scholes equation, which won the Nobel Prize in Economics and which related heat-flow thermodynamics to the pricing of stock market derivatives. Sadly, Thorp didn’t publish in time and didn’t share in the Nobel Prize. Frankly, some of the best economic theories won’t be published if they make big bucks for the practitioners.

    Thorp’s best ideas have remained a trade secret…

    But anyway, here is the Black-Scholes equation:
    https://en.wikipedia.org/wiki/Black%E2%80%93Scholes_model

    It was also the subject of the PBS documentary: The Trillion Dollar Bet.

    I punched the following parameters into one of the Black-Scholes calculators:
    http://www.hoadley.net/options/probgraphs.aspx

    with the following parameters:

    stock price: 51.82
    volatility: 37%
    days: 140
    expected return: 1%

    Some comments: “stock price” is the price of the underlying security; it could also be a commodity price, like crude oil. Ok, so I’m really targeting crude oil.

    Volatility is the annualized standard deviation in a lognormal distribution. It’s a guesstimate based on the standard deviation of price returns over a sample period. There is no absolute law that it has to be any particular number; we can only make estimates based on recent samples of price quotes, or whatever we believe are the most relevant ones.

    Days to expiration is the number of days to some target date, like from today to November 16, 2016.

    Expected return is the “risk-free rate of return”, like, say, US Treasuries or CDs or whatever is considered almost guaranteed. Ha! Since CD rates are so low, even 0% would probably be good enough; I punched in 1%.

    So I punch these guesstimates into the calculator, and voila, it says I have a 3.4% chance that crude oil reaches and STAYS above a target price of 77.02 by November 16, 2016. Are you feeling lucky?
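
    For anyone who wants to check that number, here is a minimal sketch of the lognormal “finishes above the target” probability that such calculators report (my own approximation of the calculation, not Hoadley’s code), using the same inputs:

    from math import erf, log, sqrt

    # Probability that a lognormally distributed price ends above a
    # target at the horizon, given spot, annualized volatility, days to
    # the target date, and an assumed drift (the "expected return").
    def prob_above(spot, target, vol, days, drift):
        t = days / 365.0
        d = (log(spot / target) + (drift - 0.5 * vol ** 2) * t) / (vol * sqrt(t))
        return 0.5 * (1.0 + erf(d / sqrt(2.0)))   # standard normal CDF

    print(prob_above(51.82, 77.02, 0.37, 140, 0.01))  # ~0.034, i.e. about 3.4%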

    There are some ideas regarding volatility arbitrage not worth getting into, but it is theoretically a reasonable bet to sell a call option that expires on November 16, 2016 with a contract strike price of 80.00. We have a 96.6% chance of success! Woohoo! Volatility arbitrage theory predicts the expected value of the trade is favorable provided that the implied volatility of the option being sold is above the actual (realized) volatility of the underlying (ok, too much theory already…)

    The security in question is the West Texas Intermediate crude oil December 2016 contract, which expires in November 2016. Oil under the ticker CLQ6 is currently trading at 49.64. By November 16, 2016 the ticker will be CLF6, and my paper trade is betting the price of oil won’t be beyond 85 dollars a barrel. And frankly, all of us should hope the price of oil stays below 80 dollars a barrel!

    Volatility arbitrage theory says to short sell a call derivative, and the appropriate ticker symbol for the derivative is “LOZ6 C8000” which is the December 2016 option contract that expires on November 16, 2016. “LOZ6 C8000” is selling for an average price of 80 dollars. Volatility arbitrage theory says to sell it — a naked call — not for the faint of heart.

    If crude oil hits 60 anytime soon, the short seller will be in the hole a thousand bucks, maybe more. But if one can tolerate the pain of short-term loss, there will be 85 dollars to gain.

    So one must pony up some collateral (margin) to reassure the broker that one can cover one’s tail during the trying times while the short sale of LOZ6 C8000 is deep in the red, until (hopefully) expiration day is reached and one can safely realize a profit.

    Ok, I’m going to short sell “LOZ6 C8000” for 85 dollars. Since this is a paper trade I’ll put up 850 dollars of collateral, hoping the short sale won’t go into the red by more than 850 dollars. If things go bad I could just put up more collateral, which is like playing martingales. Yikes.

    https://en.wikipedia.org/wiki/Martingale_(probability_theory)

    Originally, martingale referred to a class of betting strategies that was popular in 18th-century France.[1][2] The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double his bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler’s wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users, assuming the obvious and realistic finite bankrolls (one of the reasons casinos, though normatively enjoying a mathematical edge in the games offered to their patrons, impose betting limits). Stopped Brownian motion, which is a martingale process, can be used to model the trajectory of such games.
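    As a quick illustration of why the doubling strategy fails with a finite bankroll (and why “just put up more collateral” on a losing short has the same flavor), here is a minimal sketch; the bankroll, base bet, number of rounds and function name are arbitrary assumptions chosen for the example.

```python
import random

def martingale(bankroll=850, base_bet=1, rounds=10_000, p_win=0.5, seed=7):
    """Double the bet after each loss on a fair coin; stop when the next
    bet can no longer be covered. Returns (final bankroll, busted?)."""
    random.seed(seed)
    bet = base_bet
    for _ in range(rounds):
        if bet > bankroll:
            return bankroll, True       # cannot cover the doubled bet: ruined
        if random.random() < p_win:
            bankroll += bet
            bet = base_bet              # win: pocket the stake, reset
        else:
            bankroll -= bet
            bet *= 2                    # loss: double and try again
    return bankroll, False

print(martingale())  # a long enough run of losses eventually exhausts the bankroll
```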

    But anyway, if my paper trade is right, it will yield a 10% return (85 dollars profit / 850 dollars collateral) by November 16, 2016, for an annualized rate of return of about 28%.

    I’m betting Crude oil won’t close above 80 dollars by November 16, 2016.

    So I’m putting my make-believe money where my mouth is. 🙂

    At my discretion, if the trade becomes profitable before November 16, 2016, I can close it and lock in the profit. I’ll post on this thread at TSZ if that happens.

  20. stcordova: Apparently ENCODErs offense is that they believe the genome is functional and they promote it. That’s what Graur is really complaining about.

    Yes. Why the hell did it take you this long to fathom?

    One should not push unsupported conclusions in press releases (or in abstracts). It’s bad science. It is misinformation. And it is misinformation due to incompetence.

    stcordova: I know your a huge fan of Dan Graur as I saw you as one of the most frequent commenters praising him on his website, so I can see why you would be upset.

    Dan Graur has a website? I’m one of the commenters that “praised him most frequently” over there? What the hell are you on about?

    This isn’t about Dan Graur and his mindless hyperbole (at least not to me, though it seems it is to you); this is about what is true. It simply isn’t true to say that most of the genome is functional, and it simply isn’t true to say that ENCODE has shown or even implied that it is.

    By now it is apparent you simply don’t understand the arguments against your position.

  21. Dan Graur has a website? I’m one of the commenters that “praised him most frequently” over there? What the hell are you on about?

    Ok, I could be mistaken but I saw comments by someone going under the handle “Rumraket”.

    Here is the website for interested readers. It was here Graur called ENCODE guys “crooks” and “ignoramuses”.

    http://judgestarling.tumblr.com/

  22. It simply isn’t true to say that most of geneome is functional,

    No one knows for sure, certainly not evolutionary biologists who can’t agree even amongst themselves whether it is or isn’t!

    and it simply isn’t true to say that ENCODE has shown or even implied it is.

    And neither has Graur shown it isn’t functional with his mutational load arguments and the assumption that evolution is right. For all we know, ENCODE is right and evolution is actually wrong. But that is beside the point as far as medical research is concerned.

    But the question of evolutionary theory, naturalism, or ID doesn’t have to be settled in order for ENCODE to do their work. And what if they find the genome is highly functional, highly constrained (which I think it is), and mutational load theory is blown out of the water? So what?

    Personally, I think most heritable features in eukaryotic multicellular creatures are not in the DNA but all over the zygote cell. The glycome does not have a direct DNA template, for starters.

    Even assuming universal common ancestry, the glycome for all we know would have been put in its current state by DNA in ancestors long dead and by DNA no longer in existence in the current species line (that is, if one accepts universal common ancestry to begin with). The glycome does not have an immediate DNA template.

    A lot of the management of the epigenomic RAM could be from information residing in the glycome. If that’s the case, then ENCODE could be closer to the truth.

    What’s so bad about ENCODE making a premature claim? For all we know Graur is making one too (and there are evolutionary biologists who disagree with him). I stated it’s a better wager to bet on ENCODE being right.

    Some of you guys, rather than being excited about the possibility of new scientific frontiers of exploration into undiscovered complexity in biology, seem eager to be naysayers. The irony is that the creationists are thrilled at the possibility that our exploration into the complexities of life is just beginning.

  23. stcordova: Insisting on the truth of naturalism in the disguise of evolutionary theory could impede scientific progress in the medical sciences if the whims of some evolutionary biologists like Dan Graur are realized. The National Science Foundation (NSF) has invested 170 million dollars in unresolvable evolutionary phylogenies of little or no utility to medical science.ii To date, no therapies based on the 170 million dollar phylogeny project have come to market. By way of contrast, with the help of research like ENCODE, epigenetic therapies are already being delivered to patients with more such therapies in the pipeline. Therefore, a gambler’s epistemology that seeks to maximize reward in the face of uncertainty would seem a superior approach versus blind insistence on impractical naturalism.

    I see a number of problems with this short paragraph. I’ll try to list them here.

    1. Highly inflammatory and non-professional language, just for a start. “Disguise of naturalism”? “Whims of Dan Graur”? “Blind insistence on impractical naturalism”? If you’re trying to write a polemic for your fans (if any), I suppose it’s appropriate. Otherwise, not so much.

    2. Assuming for the sake of argument that investing in phylogenetics doesn’t help medical science, why should we ignore other benefits? Is basic knowledge useless unless it contributes directly to human health? Should NSF be concerned only with medical sciences, and if so, shouldn’t it be folded into NIH?

    3. What do you mean “unresolvable”? NSF grants, the AToL program in particular, have produced great amounts of phylogenetic resolution. My project, Early Bird, for example.

  24. John Harshman,

    Thanks for your comments. I alerted JohnnyB about this discussion. He did chide me for being polemic in the paper and he is the editor-in-chief, so that paragraph might have to get reworked.

    I hope he sees your comments and offers a rewording. However I think the substance of my assertion is correct. 170 million is wasted on a theory of no utility except to those who want to fight creationists. I certainly see no medical value to assuming universal common ancestry since comparative anatomy and comparative genomics could be carried out just as well under the assumption of common design as with common descent, maybe even better since “conserved” does not have to mean “conserved” by common ancestry but “conserved” by common design. One gets the same result as far as medical research is concerned, maybe even better since similarities (aka convergences or horizontal gene transfer or common descent) are more of interest to medical researchers, not the phylogenetic trees.

    Thanks for your comments. FWIW, though I disagree with evolution, good luck on your Early Bird project.

  25. JoeCoder at r/creation offered his criticism and gave me permission to post his comments here at TSZ:

    JoeCoder:

    From the appendix of your paper:

    If ENCODE is right, the functional genome would be on the order of 3 giga base pairs, and given accepted mutation rates, the size of the functional genome would imply on the order of 100 function-compromising mutations per generation per individual.

    They claimed 80% is transcribed, and not every nucleotide in every transcript is necessarily functional. So we can’t say 100 deleterious mutations per generation or even 80. At least not yet.

    my response:

    SAL:

    With your permission, may I post that comment at TSZ? I will have to amend my paper in light of what you said. I’ll have to add a qualifier. The 100 mutations is a figure that Graur uses, so Graur is also making a generalization that isn’t completely warranted unless he adds a qualifier (which I intend to do).

    Otherwise, you can post your comment there yourself.

    Thanks for the input.

    JoeCoder’s response:

    You have my permission to repost this and my previous comment to The Skeptical Zone. I won’t be participating there, but let me know if anyone finds any errors in my thinking.

    In my junk DNA article I try to estimate how much we can confidently say is nucleotide-specific functional, which I put at 24-46%. But I’d like to eventually contact the authors of some of these papers to resolve some ambiguities. Read from the “New research reveals very little junk DNA” section up until the “Perspectives Change” sub-heading.

    Followup comment: JoeCoder is correct, and for the 100 mutations to be a valid order-of-magnitude figure, the transcripts have to be under some nucleotide-level constraint. But I will also say it’s not the RNA transcripts alone. The genome on the whole needs to be under a constraint that is favorable to coiling into nucleosomes, and this constrains the “grammar” structure 10 nucleotides at a time. This is a pretty heavy constraint.

    Additionally, ENCODE and other labs doing ChIP-seq (Chromatin Immunoprecipitation Sequencing), ChIRP-seq (Chromatin Isolation by RNA Purification) and similar assays are finding numerous binding regions for molecular machines, microRNAs and other ncRNAs. The evidence of such large-scale constraint is generating a dismissal of the once venerated Ka/Ks ratio tests used by evolutionary biologists:

    http://www.ncbi.nlm.nih.gov/pubmed/17508390

    While it has often been assumed that, in humans, synonymous mutations would have no effect on fitness, let alone cause disease, this position has been questioned over the last decade. There is now considerable evidence that such mutations can, for example, disrupt splicing and interfere with miRNA binding. Two recent publications suggest involvement of additional mechanisms: modification of protein abundance most probably mediated by alteration in mRNA stability and modification of protein structure and activity, probably mediated by induction of translational pausing. These case histories put a further nail into the coffin of the assumption that synonymous mutations must be neutral.


    The figure of 100 mutations per generation per individual is an order-of-magnitude estimate. But even 10 mutations/generation/individual would be 10 times beyond the Muller limit of 1.

    Anyway, here are Graur’s exact words; I may choose to use his exact figures rather than 100 as an order of magnitude. His “bonkers” comment is based on an application of the Poisson distribution, which I elaborated on with math derivations here:

    Fixation rate, what about breaking rate?

    Studies have shown that the genome of each human newborn carries 56-103 point mutations that are not found in either of the two parental genomes (Xue et al. 2009; Roach et al. 2010; Conrad et al. 2011; Kong et al. 2012). If 80% of the genome is functional, as trumpeted by ENCODE Project Consortium (2012), then 45-82 deleterious mutations arise per generation. For the human population to maintain its current population size under these conditions, each of us should have on average 3 × 10^19 to 5 × 10^35 (30,000,000,000,000,000,000 to 500,000,000,000,000,000,000,000,000,000,000,000) children. This is clearly bonkers. If the human genome consists mostly of junk and indifferent DNA, i.e., if the vast majority of point mutations are neutral, this absurd situation would not arise.

    https://arxiv.org/ftp/arxiv/papers/1601/1601.06047.pdf
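    The arithmetic behind those astronomical numbers is essentially the Poisson zero-class: if each birth carries on average U new deleterious mutations, the chance of a mutation-free child is e^(-U), so under the hard-selection assumption that only unscathed offspring count, each parent needs on the order of e^U children. A minimal sketch of that check (the function name is just for illustration, and this is the textbook form of the argument, which lands in the same ballpark as Graur’s quoted range rather than reproducing his exact figures):

```python
import math

def required_offspring(U):
    """Mean number of offspring needed so that, on average, one is free of
    new deleterious mutations, when new mutations are Poisson with mean U."""
    return math.exp(U)  # 1 / P(zero new deleterious mutations) = 1 / e^(-U)

for U in (45, 82):
    print(U, f"{required_offspring(U):.1e}")
# U = 45 gives ~3.5e19 and U = 82 gives ~4.1e35, the same ballpark as the
# 3e19 to 5e35 range quoted above
```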

    Ironically, I actually agree with his calculations but not his conclusions. As I said, creationists like myself and John Sanford agree with the mutational load arguments of Muller, Joe Felsenstein and Dan Graur, but for different reasons.

  26. stcordova: comparative anatomy and comparative genomics could be carried out just as well under the assumption of common design as with common descent,

    It is possible to map celestial mechanics to a geocentric model, but it would be fucking stupid to do so.

  27. Well well, look at what got posted at UD. My talk on April 16, 2016!

    The paper I’m submitting is a conversion of the PowerPoint slides from my talk. At the time I gave the talk, I didn’t expect AM-NAT 2016 would evolve into such a formalized affair.

    Gambler’s Epistemology

  28. stcordova:
    However I think the substance of my assertion is correct. 170 million is wasted on a theory of no utility except to those who want to fight creationists. I certainly see no medical value to assuming universal common ancestry since comparative anatomy and comparative genomics could be carried out just as well under the assumption of common design as with common descent, maybe even better since “conserved” does not have to mean “conserved” by common ancestry but “conserved” by common design. One gets the same result as far as medical research is concerned, maybe even better since similarities (aka convergences or horizontal gene transfer or common descent) are more of interest to medical researchers, not the phylogenetic trees.

    Thanks for you comments. FWIW, though I disagree with evolution, good luck on your Early Bird project.

    Again I ask why, even if we grant the premise that phylogenetics is useless for medical research, you consider money spent on anything that isn’t medical research to be “wasted”. If you think phylogenetics is of value only for fighting creationists you are suffering from severe delusions of grandeur; that’s not the impetus for such research and most of us never think about you at all. (Though it certainly does do a good job of falsifying creationism, and I can see why you don’t want to think about it.)

    And the Early Bird project has been complete since 2008, though there is one more paper in the pipeline. It was a great success and falsifies your “unresolvable” canard. And so do the other projects funded by NSF’s AToL program, and so does all manner of other phylogenetic research funded in various other ways.

    You have failed to respond to anything I said. I presume that was deliberate and suspect it’s because you can’t defend the implicit claims I pointed out.

  29. Joe Felsenstein:
    Of course natural selection in the presence of genetic drift has very similar mathematics to gambling in the presence of a slight advantage to one side. We have been there before:

    1. You argued that in the presence of genetic drift natural selection would be mostly ineffective. Your post (at Uncommon Descent) is here.

    2. I replied, showing that calculations disproved your assertions. My post (at Panda’s Thumb) is here.

    The math is remarkably similar. Let us all learn from that.

    (By the way, many of us who are old enough of course remember Thorp’s book Beat the Dealer. It came out when I was in college and every college student seemed to have heard about it. The relevant point is that natural selection rigs the bets.)

    AMEN. Natural selection is based on the same presumptions of math. It’s a line of reasoning, like math, yet it’s not biological science.
    Further, it needs to select on something, and do so constantly, to turn fish into fisherMEN.
    By a slight advantage being selected on, one can speculate about any possibility of the imagination, including turning fisherMEN into fish.
    Just selection/small steps can do anything, however impossible.

  30. Tom English: So I can say, without it being a direct attack on him, that the Christians in the movie revealed themselves to be deplorably “flexible” in their morals.

    ID would certainly be helpful in training anyone to be that way.

    Is there any “worldview” that has so many ways of muddying the facts, and twisting what would be proper inference from them, as ID?

    Glen Davidson

  31. keiths,

    The most credible of Salvador’s claims is that he makes live presentations to Christians. He builds his credentials online. I suspect that he has a greater impact on people than you and I do.

  32. Tom,

    I know that Sal makes presentations, but I haven’t encountered any evidence that he’s considered a “star” in Christian or creationist circles.

    He clearly craves that sort of recognition. I just wonder if he’s actually getting it anywhere.

    In the appendix of my paper, I mentioned my alternative to CSI in informal terms. I don’t find much need or benefit in invoking all the formalisms that were constructed to support the CSI concept, since so much of what IDists really need in order to argue that a phenomenon is not typical already exists in the mathematical literature.

    If I made math errors, I welcome corrections. I may have made some mistakes in the probability of all red cards at the top of the deck. Thanks in advance.

    Appendix 3
    Rube Goldbergesque Designs, Specificity and Complexity

    A typical objection to the probability arguments put forward by Intelligent Design proponents is that the probability calculations they advocate are after-the-fact calculations, and therefore illegitimate. For example, any random shuffle of a deck of cards will yield an astronomically rare sequence that occurs 1 out of 52 factorial (approximately 8 x 10^67) times. Since each possible random sequence of cards is astronomically remote, opponents of Intelligent Design would argue that no probability calculation about a sequence of cards can be used to argue one sequence is more special than any other. They would extend the same sort of objections to the emergence and complexity of life.

    However, the specialness of one sequence is not due to the improbability of the sequence, but to how far from mathematical expectation the sequence is. Earlier it was mentioned that 100% heads from fair coins is maximally far from the expectation of 50% heads. Additionally, suppose we found all the cards belonging to the red suits at the top of the deck and all cards belonging to the black suits at the bottom. The expected value for the first 26 cards at the top of the deck is approximately 50% red cards and 50% black cards, whereas 100% red cards at the top of the deck is farthest from expectation, with odds of about 1 out of 5 x 10^14. [The calculation is 26/52 x 25/51 x ... x 1/27 = 26!26!/52! = 1/C(52,26), which is about 1 in 5 x 10^14.] Based on textbook math, the “all red at the top” configuration is an exceptional configuration. The symbolic properties (red and black suits) are decoupled from the physical properties, which leads to the possibility that certain special symbolic sequences are not practically explainable by random physical processes but rather by processes that defy natural randomizing tendencies.
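    A quick check of that number, as a minimal sketch using only the Python standard library:

```python
import math

# Probability that a random shuffle puts all 26 red cards in the top half:
# choose which 26 of the 52 positions hold red cards; only one pattern works.
p_all_red_on_top = 1 / math.comb(52, 26)
print(f"{p_all_red_on_top:.2e}")   # about 2.0e-15, i.e. odds of ~1 in 5e14
print(f"{math.comb(52, 26):.2e}")  # ~4.96e14 equally likely red/black patterns
```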

    To argue that “we just made all-red special in our minds, but it really isn’t” would seem a last-resort attempt to argue that the all-red-at-the-top-of-the-deck configuration is not special. But to argue there is ultimately nothing exceptional in the universe, that “special” or “exceptional” is rooted only in our imagination, would be to undo the foundations of probability theory and much of the science that depends on it.

    Independent of the question of Intelligent Design, the existence of exceptional configurations can still be asserted. Living organisms are exceptional chemical configurations based on theory and experiment (such as Pasteur’s experiments). Our perception of the specialness of life is not a consequence of after-the-fact probabilities, nor of seeing faces in clouds.

    Another objection to the specialness of life is that “there are many ways to make chemical replicators, hence our perception that life is special is based on after-the-fact probabilities that don’t consider the fact there are an infinite number of ways to make life.” But the counter-objection is that even though there are an infinite number of ways to make lock-and-key systems or complex replicating chemical systems (aka life), this does not make them highly probable. And if the replicator demands high specificity of the parts and connections (like a house of cards), it is exceptional as a matter of principle.

    Various system parts can be said to have high specificity if they cannot tolerate much change or perturbation without the nature of the system breaking down. In figure 8, systems with high specificity are illustrated by the highly specific orientation and position of cards required to put them into a house configuration (vs. just lying flat), by dominoes being able to stand on their edges, and by the matching of a key to a lock. Randomly picked orientations and positions (non-specific configurations) of cards will not result in systems such as a house of cards even though there might be an infinite number of ways to make a house or village of cards. Similar considerations apply for the dominoes and lock-and-key systems.

    A system composed of many parts can be said to be complex. Some might say a system possessing both large amounts of specificity and complexity possesses “specified complexity”, but because the phrase “specified complexity” has so many conflicting definitions (some involving information theory), the term is de-emphasized in this paper.

    Instead, the notion of “Rube Goldbergesque Design” is suggested as more descriptive of the nature of biological complexity that is in a highly specified state. It is this class of extravagant complexity that bothered Darwin in the past and bothers evolutionary biologists in the present, because such designs would be selected against rather than for, owing to the fact that greater specificity and complexity (like a greater house of cards) is more vulnerable to failure.

    Darwin argued his theory of natural selection explains the emergence of high complexity in biology, but his theory is not what is observed in nature, and even by his own admission, as symbolized by the peacock’s tail, the extravagant Rube Goldberg complexity present in life would actually be selected against rather than for.

    Framing the spontaneous generation debate in terms of Natural vs. Supernatural, or in terms of Intelligent Design vs. Mindless Design, muddles the more basic scientific question. The basic question is whether life is a typical or an exceptional chemical configuration. Life is an exceptional chemical configuration, astronomically so. Impractical naturalism is not comfortable with phenomena that hint of events so singular they would be indistinguishable from miracles. Thus when faced with the fact of an exceptional phenomenon like the mechanical complexities of life, proponents of naturalism often try to argue the thing isn’t that complex after all.

    It is no surprise, therefore, that evolutionary biologists who are also naturalists are often inclined to insist biological systems are not that complex after all, that the complexity is an illusion. They argue that the convoluted, apparently clumsy ways living things go about their business are evidence against intelligent design and in favor of natural evolution. But convoluted mechanisms could just as well be interpreted as Rube Goldbergesque designs, and Rube Goldbergesque designs in nature, like the peacock’s tail, could just as easily argue for intelligent design as against it. In any case, it is very hard to argue that any Rube Goldbergesque design, be it God-made or nature-made, would be a phenomenon consistent with natural expectation.

    ENCODE and research into epigenomics have uncovered several biological Rube Goldberg machines that epitomize specificity and complexity. One example of such a Rube Goldberg machine is described in Appendix 4.

    Figure 9. A small house of cards behind dominoes standing on a small wooden box, behind a lock and key. The house of cards and dominoes illustrate systems of objects that cannot be produced by an uncertainty-maximizing process such as a tornado or a similar process that affixes random orientations and positions to the objects. The lock and key combination is included in the photo because it is important to understand that even though there are an infinite number of ways to make lock-and-key combinations, that doesn’t imply the probability is high that a working lock and key will emerge from random processes; the probability is not high, it is remote.

  34. BruceS:

    Off topic, but fun for people who liked Beat the Dealer
    How Advantage Players Game the Casinos. From NYT magazine so behind metered paywall.

    Actually I think it is on topic, since this relates to Thorp’s work and Grosjean extended the scope of Thorp’s scientific approach in casinos. A fundamental theme in all this is the notion of expected values and the law of large numbers, which helps in understanding casinos, Wall Street and intelligent design.

    I’ve said before the fundamental principle of intelligent design should rest on things like the law of large numbers, not information theory nor the 2nd law of thermodynamics.

    Some reports list Grosjean as a Harvard math graduate, others as a University of Chicago grad student in economics. It’s possible both things are true. He is very smart. I know some of the theoreticians, and they are savants; they put my math skills to shame.

    One of those regarded as the most mentally skilled Advantage Players (at least publicly acknowledged) is Dominic O’Brien:

    Dominic O’Brien had an entry in the Guinness Book of Records for his 1 May 2002 feat of committing to memory a random sequence of 2808 playing cards (54 packs) after looking at each card only once. He was able to correctly recite their order, making only eight errors, four of which he immediately corrected when told he was wrong.[2]

    Perhaps the most legendary group of casino sharks was the MIT Blackjack Team (aka “strategic investments”):

    The MIT Blackjack Team was a group of students and ex-students from Massachusetts Institute of Technology, Harvard Business School, Harvard University, and other leading colleges who used card counting techniques and more sophisticated strategies to beat casinos at blackjack worldwide. The team and its successors operated successfully from 1979 through the beginning of the 21st century. Many other blackjack teams have been formed around the world with the goal of beating the casinos.

  35. stcordova: Figure 9. i A small house of cards behind dominos standing on a small wooden box behind a lock and key. The house of cards and dominoes illustrate systems of objects that cannot be produced by an uncertainty maximizing process such as a tornado or similar process that affixes random orientations and positions to the objects. The lock and key combination is included in the photo since it is important to understanding that even though there are an infinite number of ways to make lock and key combinations, it doesn’t imply the probability is high that a working lock and key will emerge from random processes – the probability is not high, it is remote.

    Isn’t it curious that no one has suggested a random process has been making card houses or stacking dominoes?

  36. stcordova: No one knows for sure, certainly not evolutionary biologists who can’t agree even amongst themselves whether it is or isn’t!

    False. Evolutionary biologists (particularly population geneticists) are pretty sure it’s mostly junk, the people who disagree are usually molecular biologists ignorant of evolution.

    stcordova: And neither has Gruar shown it isn’t functional with his mutational load arguments

    The mutational load argument isn’t Graur’s, he merely acted to remind some people about it. And yes, the mutational load argument is one way to show it isn’t functional.

    stcordova: For all we know ENCODE is right and evolution is actually wrong.

    Only if you insist on pretending the last 40 years of genome research did not take place.

    stcordova: But the question of evolutionary theory, naturalism, ID doesn’t have to be settled in order for ENCODE to do their work

    Which raises the question why you keep bringing up naturalism and ID. It is YOU who do this.

    It is patently obvious that to you this is about atheism versus god-belief, while to evolutionary biologists and biochemists this is about doing good science and not making grandiose declarations not supported by facts and data.

    This is why you insist on making Graur the “representative” of all of evolutionary biology, because you can use his hyperbolic ways as a big boogeyman who’s out to stop medical research.

    It’s a fake story you are telling and you know it is.

    stcordova: and what if they find the genome is highly functional, highly constrained (which I think it is) and mutational load theory is blown out of the water. So what?

    Then they’d have to do other kinds of research than mapping transcription factor binding sites and binding frequencies. Because that data doesn’t actually tell you how much of the genome is functional or what it is doing.

    And you know why, because this has been explained to you multiple times.

    stcordova: Personally, I think most heritable features in Eukaryotic multicellular creatures are not in the DNA but all over the zygote cell. The glycome does not have a direct DNA template for starters.

    Even assuming universal common ancestry, the glycome for all we know would have been put in its current state by DNA in ancestors long dead and by DNA no longer existence in the current species line — that is if one accepts universal common ancestry to begin with. The glycome does not have an immediate DNA template.

    A lot of the management of the epigenomic RAM could be from information residing in the glycome. If that’s the case, then ENCODE could be closer to the truth.

    You are entitled to think whatever makes you happy. But you’re lying in your “papers” and presentations.

    stcordova: What’s so bad about ENCODE making a premature claim

    It’s not just premature, it is demonstrably wrong. And it sets a horrible precedent for future researchers if they are allowed to just make hyperbolic claims in press releases to get attention and grant money. It’s bad science.

    stcordova: for all we know Graur is making one too (and there are evolutionary biologists who disagree with him).

    Cool, there are also evolutionary biologists that believe in the christian god. And still accept evolution by random mutation, genetic drift and natural selection. And universal common descent.

    stcordova: I stated it’s a better wager to wager on ENCODE being right. Some of you guys rather than being excited about the possibility of new scientific frontiers of exploration into undiscovered complexity in biology seem eager to be naysayers.

    False. ENCODE can do their research without making claims not supported by data. Nobody here suggests otherwise. It’s about proper scientific conduct, about correct interpretation of data and about not being ignorant about the history of genome biology.

    stcordova:
    The irony is the creationists are thrilled at the possibility that our exploration into the complexities of life is just beginning.

    Also false. The only thing “thrilling” about this whole thing is that you see an opportunity to say evolution is false. This is all about god-belief and atheism to you, because evolution conflicts with your fundamentalist religious doctrine. If ENCODE consistently argued that most of their transcription maps should not be interpreted to mean any particular stretch of DNA was therefore functional, you’d fucking hate it and be blathering all day long about the “darwinian paradigm”, “toeing the party line” or what have you, if not outright fucking ignoring it.

    Sorry, you don’t get to sell your attempts at revisionist history around here.

  37. False. Evolutionary biologists (particularly population geneticists) are pretty sure it’s mostly junk, the people who disagree are usually molecular biologists ignorant of evolution.

    Thank you for your comment.

    I’ve suggested evolutionary biologists are the modern-day version of spontaneous generation advocates: like Ernst Haeckel, they are just making closet suggestions that biological complexity emerges from ordinary and repeatable mechanisms. We know what happened to Haeckel’s views….

    Graur is unwittingly correct: “If ENCODE is right, evolution is wrong.” ENCODE is right and evolution is wrong. I’ve already said I agree with the math of Graur which is really Muller, Felsenstein, etc.

    That’s what the data show. But so what if I and ENCODErs are wrong. Nothing to lose. But we have much to gain in understanding if we (I, ENCODErs, heck even Richard Dawkins) are right that the genome is highly functional.

    the people who disagree are usually molecular biologists ignorant of evolution.

    Yes, the guys who actually do physical and chemical experiments.

    But I’ll tell you why, just on an intuitive level why I and the guys at the NIH and the ENCODE consortium think the genome is highly functional.

    Say for the sake of argument we use the 8%-10% figure some evolutionary biologists use for functionality of the genome. Given each nucleotide position has a Shannon capacity of 2 bits (4 possible states, log2(4) = 2 bits), that results in the following:

    3.3 gigabases x 2 bits/base = 6.6 gigabits

    6.6 gigabits x 1 byte / 8 bits = 825 megabytes

    Taking the 10% figure of Graur and friends:

    10% x 825 megabytes = 82.5 megabytes
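    For what it’s worth, a one-liner check of that arithmetic (a minimal sketch; the genome size and the 10% figure are simply the ones assumed above):

```python
genome_bases = 3.3e9                # haploid human genome, approximate
bits = genome_bases * 2             # 2 bits of Shannon capacity per base
megabytes = bits / 8 / 1e6          # 8 bits per byte
print(megabytes, 0.10 * megabytes)  # ~825.0 MB total, ~82.5 MB at 10% functional
```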

    On an intuitive level, that simply doesn’t seem like enough capacity to express something as complex as the neural connections in the brain alone, much less an entire human in all its developmental phases, with the 210-300 cell types in their various cellular and developmental states.

    For this simple intuitive reason, whether they say it explicitly or not, it is easy to believe the genome is highly functional. If indeed the DNA is not only a ROM system but also a RAM system, because it is part of the chromatin complexes (I even provided pictures to drive home the point, showing man-made RAM and God-made RAM, where the DNA strands are merely wires connecting the memory units called histones), then this changes the perception of DNA from a purely static Read-Only Memory (ROM) to something that also serves as Random Access Memory (RAM).

    If this is the case, the question arises: where is all the other data stored that loads the RAM in 100 trillion cells? No one knows for sure, but I’ve suggested the glycome and organelles for starters. The glyco-proteome is part of the epi-proteome (aka the varieties of post-translational physical and chemical modifications).

    I at least credit Richard Dawkins and the other evolutionary biologists who will argue and effectively say, “see I told you guys evolutionary theory predicted all the huge amounts of functionality we see in the genome and in the epigenome and glycome.”

    Here is a sampling of the next greatest idea in evolutionary biology:

    Glycans – the third revolution in evolution

    The development and maintenance of a complex organism composed of trillions of cells is an extremely complex task. At the molecular level every process requires a specific molecular structures to perform it, thus it is difficult to imagine how less than tenfold increase in the number of genes between simple bacteria and higher eukaryotes enabled this quantum leap in complexity. In this perspective article we present the hypothesis that the invention of glycans was the third revolution in evolution (the appearance of nucleic acids and proteins being the first two), which enabled the creation of novel molecular entities that do not require a direct genetic template. Contrary to proteins and nucleic acids, which are made from a direct DNA template, glycans are product of a complex biosynthetic pathway affected by hundreds of genetic and environmental factors. Therefore glycans enable adaptive response to environmental changes and, unlike other epiproteomic modifications, which act as off/on switches, glycosylation significantly contributes to protein structure and enables novel functions. The importance of glycosylation is evident from the fact that nearly all proteins invented after the appearance of multicellular life are composed of both polypeptide and glycan parts.
    ….

    If you can’t beat ’em, join ’em. Unfortunately for Graur, molecular biologists are discovering layers of information his numbers can’t account for. What will happen when, in addition to the genome, we have to account for the information in the glycome, epi-proteome, organelles, cytoplasm, or whatever else?

    Something I should point out from the NIH:

    http://sigs.nih.gov/GBIG/Pages/default.aspx

    Glycobiology Interest Group

    The NIH Glycobiology Interest Group brings together researchers from over 60 NIH and FDA laboratories who share interest in the glycosciences and are involved in studies of glycans and their binding proteins. Glycans represent one of the three major biomacromolecules (polynucleotides, polypeptides, carbohydrates), responsible for the bulk of information transfer in biological systems. Half of all cellular proteins contain carbohydrates.

    Glycan binding proteins (lectins) bind to specific cellular glycans/ligands and play important roles in cell recognition, motility/homing to specific tissues, signaling processes, cell differentiation, cell adhesion, microbial pathogenesis and immunological recognition. Intramural NIH and FDA laboratories study glycan structure, synthesis, metabolism, function, and lectin biology, shedding light on important biological processes.

    If you would like do a postdoc in the glycosciences at NIH, are looking for a speaker, or wish to find a collaborator, please review our member list. You can also search the CRISP database and the list of NIH intramural labs working in specific areas of Glycoscience.

    The Glycobiology Interest Group coordinator is Pamela Marino.

    Not only do I think Graur will become a non-factor in all this, but one day the gene-centrists will too.

    82.5 megabytes does not a functioning human make.

  38. stcordova: 10% x 825 megabytes = 82.5 megabytes

    On an intuitive level that flat doesn’t seem enough capacity to express something as complex as the neural connections in the brain alone, much less an entire human in all its developmental phases, the 210-300 cell types in various cell and developmental phases.

    It is enough capacity for my brain (and these days maybe even more than enough). But your brain is more special.

    One thing that has come out of the ENCODE consortium’s work is the work of researchers on the periphery. The notion that DNA sequences are used solely for specifying protein residues is hopefully getting eroded out of popular understanding.

    Below is a picture of the locations of molecular machines (represented by bubbles labeled -10k, -20k, -30k, pro). The mmp13 gene is shown on the DNA, with the black marks representing its exons. The molecular machines have to park on parts of the DNA far away from the mmp13 gene in order to service the mmp13 gene. In this case, the machines are parking on stretches of DNA that contain sequences conforming to a pattern known as the Vitamin D Receptor (VDR) binding site motif. This would imply a high degree of functionality in the DNA, especially the non-coding regions. Binding of molecular machines to the chromatin is not limited to binding on DNA; it also involves the histones, even histones in those despised repetitive DNA elements!

    This VDR motif is spread throughout the genome in various coding and non-coding regions. At least for the coding regions, the locations of the VDR motifs are polyconstrained to code both for functional proteins and to provide “parking lots” with signs that say (figuratively speaking) “park molecular machines bearing the VDR here”. But that is only one class of machines (VDR-bearing machines); there are thousands, perhaps tens of thousands, of similar machines requiring polyconstrained binding motifs (traffic and parking signs, if you will) on the DNA, spread across the entire genome and across chromosomes.
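    To make the “polyconstrained” idea concrete, here is a minimal toy sketch: a short coding stretch must simultaneously encode a fixed peptide and contain a binding motif, so most synonymous variants of the same protein fail the second constraint. The motif and the three-amino-acid peptide are hypothetical, chosen only for illustration (this is not the actual VDR motif), and the codon table is a partial one covering only the amino acids used here.

```python
import re
from itertools import product

# Partial codon table: only the amino acids used in this toy example.
CODONS = {
    "D": ["GAT", "GAC"],
    "R": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],
    "A": ["GCT", "GCC", "GCA", "GCG"],
}

# Hypothetical binding motif for illustration (NOT the real VDR motif).
MOTIF = re.compile("GACCGA")

peptide = "DRA"
variants = ["".join(c) for c in product(*(CODONS[aa] for aa in peptide))]
with_motif = [v for v in variants if MOTIF.search(v)]

# All 48 variants encode the same peptide, but only 4 also carry the motif,
# so a "silent" codon swap can still erase the binding signal.
print(len(variants), len(with_motif))
```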

    The HOTAIR lincRNA, for example, is now known to have about 832 binding locations in various chromosomes in order to recruit the Polycomb PRC2 repression complex (shown in my paper). All this is the tip of the iceberg.

    Added to this, the RNA transcripts are subject to RNA interference regulation through things like microRNAs; hence the DNA is polyconstrained to provide binding motifs for ncRNA regulatory machines in the transcriptome.

    So not only is DNA constrained to invite parking of molecular machines on the genome, it is also constrained to invite parking (the proper word is “binding”) of ncRNAs in the transcriptome. Oh, and that’s the other thing: not only are we exploring the epigenome, there is on the horizon the EPITRANSCRIPTOME!

    What I’ve described are the sort of things of deep interest to medical researchers. It suggests high degrees of complexity and specificity not just in the genome, but now the epigenome, and likely in the future the epitranscriptome, epiproteome and glycome.

  40. I will just note here, as long as Sal is ignoring me, that he has claimed that ID would open up all sorts of areas for research, yet one of his main contributions here is an argument to shut down (or at least stop paying for, which amounts to the same thing) the enormously fertile research area of phylogenetic and evolutionary biology.
