The Reasonableness of Atheism and Black Swans

As an ID proponent and creationist, the irony is that at the point in my life when I have the greatest faith in ID and creation, it is also the time when, at some level, I wish it were not true. I have concluded that if the Christian God is the Intelligent Designer, then He also makes the world a miserable place by design, that He has cursed this world because of Adam’s sin. See Malicious Intelligent Design.

Jesus prophesied the intelligently designed outcome of humanity: “wars and rumors of wars … famines … pestilence … earthquakes.” If there is nuclear and biological weapons proliferation, overpopulation, and destruction of natural resources in the next 500 years or less, things could get ugly. If such awful things are Intelligently Designed into the trajectory of planet Earth, then on some level I think it would almost be merciful if the atheists are right….

The reason I feel so much kinship with the atheists and agnostics at TSZ and elsewhere is that I share and value the skeptical mindset. Gullibility is not a virtue; skepticism is. A personal friend of Richard Dawkins was the professor and mentor who lifted me out of despair when I was about to flunk out of school. Another professor, James Trefil, who has spent some time fighting ID, has been a mentor and friend. All this to say, atheists and people of little religious affiliation (like Trefil) have been kind and highly positive influences on my life, and I thank God for them! Thus, though I disagree with atheists and agnostics, I find the wholesale demonization of their character highly repugnant; it’s like trash-talking my mentors, friends, and family.

I have often found more wonder and solace in my science classes than I have on many Sunday mornings being screamed at by not-so-nice preachers. So despite my many disagreements with the regulars here, because I’ve enjoyed the academic climate in the sciences, I feel somewhat at home at TSZ….

Now, on to the main point of this essay! Like IDist Mike Gene, I find the atheist/agnostic viewpoint reasonable for the simple reason that most people don’t see miracles or God appearing in their everyday lives, if not their entire lives. It is as simple as that.

Naturalism would seem to me, given most everyone’s personal sample of events in the universe, to be a most reasonable position. The line of reasoning would be: “I don’t see miracles, I don’t see God; by way of extrapolation, I don’t think miracles and God exist. People who claim God exists must be mistaken or deluded or something else.”

The logic of such a viewpoint seems almost unassailable, and I nearly left the Christian faith 15 years ago when such simple logic was not really dealt with by my pastors and fellow parishioners. I had to re-examine such issues on my own, and the one way I found to frame the ID/Creation/Evolution issue is by arguing for the reasonableness of Black Swan events.

I will use the notion of Black Swans very loosely. The notion is stated here, and is identified with a financier and academic by the name of Nassim Taleb. I have Taleb’s book on investing entitled Dynamic Hedging, which is considered a classic monograph in mathematical finance. His math is almost impenetrable! He is something of a Super Quant. Anyway:

https://en.wikipedia.org/wiki/Black_swan_theory

The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight.

The theory was developed by Nassim Nicholas Taleb to explain:

1. The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.
2. The non-computability of the probability of the consequential rare events using scientific methods (owing to the very nature of small probabilities).
3. The psychological biases that blind people, both individually and collectively, to uncertainty and to a rare event’s massive role in historical affairs.

Unlike the earlier and broader “black swan problem” in philosophy (i.e. the problem of induction), Taleb’s “black swan theory” refers only to unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences.[1] More technically, in the scientific monograph Silent Risk, Taleb mathematically defines the black swan problem as “stemming from the use of degenerate metaprobability”.[2]
….
The phrase “black swan” derives from a Latin expression; its oldest known occurrence is the poet Juvenal’s characterization of something being “rara avis in terris nigroque simillima cygno” (“a rare bird in the lands and very much like a black swan”; 6.165).[3] When the phrase was coined, the black swan was presumed not to exist. The importance of the metaphor lies in its analogy to the fragility of any system of thought. A set of conclusions is potentially undone once any of its fundamental postulates is disproved. In this case, the observation of a single black swan would be the undoing of the logic of any system of thought, as well as any reasoning that followed from that underlying logic.

Juvenal’s phrase was a common expression in 16th century London as a statement of impossibility. The London expression derives from the Old World presumption that all swans must be white because all historical records of swans reported that they had white feathers.[4] In that context, a black swan was impossible or at least nonexistent. After Dutch explorer Willem de Vlamingh discovered black swans in Western Australia in 1697,[5] the term metamorphosed to connote that a perceived impossibility might later be disproven. Taleb notes that in the 19th century John Stuart Mill used the black swan logical fallacy as a new term to identify falsification.[6]

The very first question I looked at when I was having bouts of agnosticism was the question of the origin of life. Looking back now, the real question being asked is: “Was OOL a long sequence of typical events or a black swan sequence of events?” Beyond OOL, one could go on to the question of biological evolution. If we assume Common Descent or Universal Common Ancestry (UCA), would evolution, as a matter of principle, proceed by typical events, black swan events, or a mix of the two? (The stock market follows patterns of typical events punctuated by black swan events.)

If natural selection is the mechanism of much of evolution, does the evolution of the major forms (like prokaryote vs. eukaryote, unicellular vs. multicellular, etc.) proceed by typical or black swan events?

[As a side note, when there is a Black Swan stock market crash, it isn’t a POOF, but a sequence of small steps adding up to an atypical set of events. Black Swan doesn’t necessarily imply POOF, but it can still be viewed as a highly exceptional phenomenon.]
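To make that distinction concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of a process that is mostly typical small events punctuated by rare large ones. The probabilities and scales below are invented purely for illustration; the point is only that the single largest move dwarfs the typical move, which is the loose sense of “Black Swan” used in this essay:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# 10,000 simulated daily "returns": mostly small Gaussian noise, with a
# rare (1-in-1,000) chance of a much larger shock.  All of these numbers
# are invented purely to illustrate the typical-vs-Black-Swan contrast.
def daily_return():
    if random.random() < 0.001:       # the rare, outsized event
        return random.gauss(0, 0.20)  # shock on a 20% scale
    return random.gauss(0, 0.01)      # typical day on a 1% scale

returns = [daily_return() for _ in range(10_000)]

moves = sorted(abs(r) for r in returns)
typical_move = moves[len(moves) // 2]   # median absolute move
largest_move = moves[-1]                # the "Black Swan"

print(f"typical daily move: {typical_move:.4f}")
print(f"largest single move: {largest_move:.4f}")
```

Note that even in this toy model the largest move is itself just one draw among thousands of small ones, echoing the side note above: a Black Swan need not be a POOF, only a highly exceptional outcome.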

Without getting into the naturalism vs. supernaturalism debate, one could at least make statements about whether OOL, eukaryotic evolution (eukaryogenesis), multicellular evolution, and the evolution of Taxonomically Restricted Features (TRFs) and Taxonomically Restricted Genes (TRGs) proceeded via many, many typical events happening in sequence or via a few (if not one) Black Swan events.

I personally believe, outside of the naturalism vs. supernaturalism debate, that as a matter of principle OOL, eukaryogenesis, and the emergence of multicellularity (especially animal multicellularity) must have transpired via Black Swan events. Why? The proverbial Chicken and Egg paradox, which has been reframed in various incarnations and supplemented with notions such as Irreducible Complexity or Integrated Complexity or whatever. Behe is not alone in his notions of this sort of complexity; Andreas Wagner and Joe Thornton use similar language, even though they think such complexity is bridgeable by typical rather than Black Swan events.

When I do a sequence lookup at the National Institutes of Health (NIH) National Center for Biotechnology Information (NCBI), it is very easy to see the hierarchical patterns that would, at first glance, confirm UCA! For example, look at this diagram of Bone Morphogenetic Proteins (BMP) to see the hierarchical patterns:

BMP

From such studies, one could even construct Molecular Clock Hypotheses and state hypothesized rates of molecular evolution.

The problem, however, is that even if some organisms share many genes, and even if these genes can be hierarchically laid out, there are genes that are restricted only to certain groups. We might refer to them as Taxonomically Restricted Genes (TRGs). I much prefer the term TRG over “orphan gene,” especially since some orphan genes seem to emerge without the necessity of Black Swan events, “orphan gene” is not well defined, and orphan genes are only a subset of TRGs. I also coin the notion of a Taxonomically Restricted Feature (TRF), since I believe many heritable features of biology are not solely genetic but have heritable cytoplasmic bases (like post-translational modifications of proteins).

TRGs and TRFs sort of just poof onto the biological scene. How would we calibrate the molecular clock for such features? It goes “from zero to sixty” in a poof.
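The calibration problem can be made concrete with a toy molecular-clock calculation. The substitution rate and sequence counts below are hypothetical round numbers chosen only for illustration, not measured values for any real gene. For a gene shared by two taxa, a divergence time can be estimated; for a TRG there is no homologous sequence in the other taxon to count differences against:

```python
# Toy molecular-clock estimate.  The substitution rate below is a
# hypothetical round number, not a measured value for any real gene.
SUBSTITUTION_RATE = 1e-9  # substitutions per site per year (assumed)

def divergence_time_years(differing_sites: int, total_sites: int) -> float:
    """Estimate time since two lineages split, assuming a constant clock.

    Differences accumulate along both lineages, hence the factor of 2.
    """
    per_site_divergence = differing_sites / total_sites
    return per_site_divergence / (2 * SUBSTITUTION_RATE)

# 120 differing sites in a 10,000-site gene shared by two taxa:
print(divergence_time_years(120, 10_000))  # roughly 6 million years

# A taxonomically restricted gene has no homolog in the other taxon, so
# there is no difference count to plug in: the clock cannot be calibrated
# for it, which is the "zero to sixty in a poof" problem described above.
```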

Finally, on the question of directly observed evolution, it seems to me that evolution in the present is mostly of the reductive and exterminating variety. Rather than Dawkins’ Blind Watchmaker, I see a Blind Watch Destroyer. Rather than natural selection acting in cumulative modes, I see natural selection acting in reductive and exterminating modes in the present day, in the lab and field.

For those reasons, even outside the naturalism vs. supernaturalism debate, I would think a reasonable inference is that many of the most important features of biology did not emerge via large collections of small typical events but rather via some Black Swan process in the past, not by any mechanisms we see in the present. It is not an argument from incredulity so much as a proof by contradiction.

If one accepts the reasonableness of Black Swan events as the cause of the major features of biology, it becomes possible to accept that these were miracles, and if there are miracles, there must be a Miracle Maker (aka God). But questions of God are outside science. However, I think the inference to Black Swan events for biology may well be science.

In sum, I think atheism is a reasonable position. I also think the viewpoint that biology emerged via Black Swan events is a highly reasonable hypothesis, even though we don’t see such Black Swans in everyday life. The absence of such Black Swans is not necessarily evidence against Black Swans, especially if the Black Swan hypothesis brings coherence to the trajectory of biological evolution in the present day. That is to say, it seems to me things are evolving toward simplicity and death in the present day; ergo, some mechanism other than what we see with our very own eyes was the cause of OOL and the bridging of major gaps in the taxonomic groupings.

Of course such a Black Swan interpretation of biology may have theological implications, but formally speaking, I think inferring Black Swan hypotheses for biology is fair game in the realm of science to the extent it brings coherence to real-time observations in the present day.

775 thoughts on “The Reasonableness of Atheism and Black Swans”

  1. Mung,

    Me: True enough – my gills are worse than useless, for example.

    Mung: They are useless because you don’t actually have gills.

    Whooosh!

  2. stcordova,

    Here is a data point: [Lynch]

    That’s Creationist Michael Lynch? No, evolutionist Michael Lynch, of course. So maybe you’re missing something. Alleles segregating in human populations extrapolated to the entirety of evolutionary history? You don’t think that’s a bit of a stretch?

  3. Allan,

    Humans have TP53, it is not unique to elephants. And by the way, the breaking of TP53 in humans is implicated in cancer:

    http://onlinelibrary.wiley.com/doi/10.1002/humu.9457/pdf

    Genetic defects in CHEK2 and TP53 have been implicated in prostate cancer development.

    So why hasn’t the all-powerful force of selection cleaned out these defects from the human genome? Answer: the mutational load problem that Muller, Larry Moran, and lil ole me pointed out.

    Now that we have large databases and cheap sequencing, we will see sequence divergence due to mutation in real time which selection cannot arrest.

    That a mutation may be neutral with respect to immediate reproductive success doesn’t mean it cannot be causing functional damage.

  4. stcordova,

    1. Most variation is not creative of function; it is at best of no effect, and more often damaging than creative. The usage of the words “beneficial” and “deleterious” in evolutionary literature is misleading.

    2. For every bad mutant purged by drift or selection, more new mutants emerge.

    You can’t diss the evolutionary concepts of ‘beneficial’ and ‘deleterious’ in one sentence then use them in the next!

  5. stcordova,

    Humans have TP53, it is not unique to elephants. And by the way, the breaking of TP53 in humans is implicated in cancer:

    I’m well aware of that. That’s why I mentioned it. Elephants have multiple copies; we have one. ergo, they have something we lack – multiple copies.

  6. stcordova,

    So why hasn’t the all-powerful force of selection cleaned out these defects from the human genome? Answer: the mutational load problem that Muller, Larry Moran, and lil ole me pointed out.

    No, it’s for the reasons I mentioned – we have a different reproductive profile to elephants. The selective advantage of multiple TP53’s is not guaranteed to be equally strong in every species, regardless of all other considerations. You’re an inveterate Arbitrary Extrapolator.

    Genetic load has not reduced our TP53s to one copy. Elephants have gained them.

  7. Here’s some crucial context from the same Lynch paper that Sal quoted:

    Dating back to Muller (49), considerable thought has been given to the potential for a cumulative buildup of the deleterious-mutation load in the human population (2, 3, 50, 51). The motivation for this concern is the enormous change in the selective environment that human behavior has induced during approximately the past century. Innovations spawned by agriculture, architecture, industrialization, and most notably a sophisticated health care industry have led to a dramatic relaxation in selection against mildly deleterious mutations, and modern medical intervention is increasingly successful in ensuring a productive lifespan even in individuals carrying mutations with major morphological, metabolic, and behavioral defects. The statistics are impressive. For example, fetal mortality has declined by approximately 99% in England since the 1500s (52), and just since 1975, the mortality rate per diagnosed cancer has declined by approximately 20% in the United States population (53). Because most complex traits in humans have very high heritabilities (54), the concern then is that unique aspects of human culture, religion, and other social interactions with well intentioned short-term benefits will eventually lead to the long-term genetic deterioration of the human gene pool. Of course, a substantial fraction of the human population still has never visited a doctor of any sort, never eaten processed food, and never used an automobile, computer, or cell phone, so natural selection on unconditionally deleterious mutations certainly has not been completely relaxed in humans. But it is hard to escape the conclusion that we are progressively moving in this direction.

    Emphasis added.

  8.

    You can’t diss the evolutionary concepts of ‘beneficial’ and ‘deleterious’ in one sentence then use them in the next!

    One can if using Proof by Contradiction:
    https://en.wikipedia.org/wiki/Proof_by_contradiction

    The evolutionary definition of “beneficial,” as defined by population geneticists, is not at all beneficial in the everyday sense of the word “beneficial.” The usage is an abuse of language. The correct term would be “reproduction-increasing mutations” or something like that. The word “beneficial” is being equivocated.
    https://en.wikipedia.org/wiki/Equivocation

    I’m dissing the abuse of language by evolutionists, not the word “beneficial” itself. Another abuse of language is “Natural Selection,” when “Conceptual Selection” is what Darwin really meant; but since “Conceptual Selection” doesn’t happen in nature, to sell the idea as what really happens in nature, Darwin gave his idea the false-advertising label of “Natural Selection.” What really happens in nature is “Elimination of Species by Means of Natural Selection.”

    Darwin and Dawkins were concerned with resemblance of design in biology, not ability of things to simply replicate. Salt crystals replicate. Replication isn’t the real question, it’s the emergence of Rube Goldberg complexity.

  9. stcordova,

    The choice between neologism and existing language is always a tricky one. See ‘code’ for instance, haha. But you know exactly what the words mean as they are used. There is no equivocation involved in using ‘beneficial’ and ‘detrimental’ in the reproductive sense, if both parties are fully aware of the meaning.

    When you say ‘for every bad mutant purged by drift or selection …’, the only valid sense of those words is the evolutionary one. Bad Mutants as adjudicated by S Cordova esq. are not guaranteed to be visible to selection.

    Meanwhile the kind of ‘breakage’ you envision cannot be breakage of massive selective effect. If it was, it would not get anywhere in the population.

  10. a dramatic relaxation in selection against mildly deleterious mutations,

    Ok, so we increase selection pressure; that doesn’t remove the mutational load problem.

    If each child has 130 more slight defects than the parent, how can increasing selection pressure help? Muller/Moran give around 1 mutation as the limit, we could be 130 times beyond that.

    And it’s not just technology; it is the Y chromosome. Oxford geneticist Bryan Sykes was a pioneer in Y-chromosomal Adam and mitochondrial Eve research:

    http://abcnews.go.com/Health/story?id=4725121

    Imagine a world without men: Lauren Bacall but no Bogie, Hillary Clinton but no Bill, no Starsky or Hutch.

    This isn’t just an unlikely sci-fi scenario. This could be reality, according to Bryan Sykes, an eminent professor of genetics at Oxford University and author of “Adam’s Curse: A Future Without Men.”

    “The Y chromosome is deteriorating and will, in my belief, disappear,” Sykes told me. A world-renowned authority on genetic material, Sykes is called upon to investigate DNA evidence from crime scenes. His team of researchers is currently compiling a DNA family tree for our species.

    Sykes believes there will be a phase where there will be 10 women competing for each man.

  11. cubist,
    “Yes: The earliest signs of life on Earth date back to around 3.5 billion years ago.”

    The age of the universe is 13.7 billion years, yet we can create mathematical models of its origin and expansion.

    “Perhaps you can firm up your definition, so that there’s some way to tell the difference between a “grey swan” and a “promising area of research”?”

    If you call a grey swan a potentially promising area of research we have agreement.

  12. stcordova: Ok, so we increase selection pressure; that doesn’t remove the mutational load problem.

    Increasing the selection pressure increases the genetic entropy.

  13. stcordova: If each child has 130 more slight defects than the parent, how can increasing selection pressure help? Muller/Moran give around 1 mutation as the limit, we could be 130 times beyond that.

    ?????? WTF?

  14. stcordova,

    Lynch via Dave Carlson: […] a dramatic relaxation in selection against mildly deleterious mutations,

    Sal: Ok, so we increase selection pressure; that doesn’t remove the mutational load problem.

    So relaxing selection pressure in humans allows deleterious mutations to spread, but increasing it again would have no effect? Shome Mishtake Shurely?

  15. Petrushka:

    ?????? WTF?

    WTF = Wednesday, Thursday, Friday 🙂

    I remember my barber putting that on her business card telling me what days she worked, WTF.

    Let’s look at the coding regions first. We know there are disputes about the specificity of proteins regarding their amino acid residues and how much change a protein can tolerate on average. I’ve seen estimates of the functional fraction of sequence space ranging from 1 in 25,000 to 1 in 10^77. Let’s be generous and say one gets 1 functional protein out of 1,000 possible combinations for a specific interactome path. That means a mutation in a coding region will, on average, be function compromising. Like strands in a rope, loss of one strand will not be catastrophic, but it compromises function ever so slightly. We know that to be the case from transgenic experiments.

    With non-coding regions, the problem is more subtle, but again, like strands in a rope, function is compromised when there is a mutation. This is borne out in GWAS studies, which indicate that 90% of defects associated with SNPs (Single Nucleotide Polymorphisms) appear in non-coding regions. In some of the GWAS studies I’ve seen, the correlation is weak but detectable. That is to say, there is no inevitability of a medical breakdown, but an increased probability. That indicates functional compromise.

    I sent the following data set to John Sanford regarding the 50-90% figure of functional compromise being from mutations in non-coding regions.

    Dr. Sanford then sent me 13 free copies of his book Genetic Entropy! I guess he liked what I found. 🙂

    The first citation below is from Dr. Stamatoyannopoulos, who spoke at ENCODE 2015.

    One can hear the ENCODE 2015 speakers for free at this website:
    https://www.encodeproject.org/tutorials/encode-users-meeting-2015/

    No wonder Larry hates ENCODE, they’ve invested hundreds of millions of dollars in a theory that sort of trashes his claims.

    …..
    http://www.stamlab.org/topic/regulatory_genomics_of_disease

    “To date, hundreds of genome-wide association studies have been conducted, spanning diverse diseases and quantitative phenotypes. However, the vast majority of disease- or trait-associated variants emerging from these studies fall within non-coding sequence, complicating their functional evaluation.”

    The one paper that was explicit was this one, and not only 50% but 90%!

    http://www.sciencedirect.com/science/article/pii/S0925443914000714

    “What is emerging from these GWAS, however, is that > 90% of disease-associated SNPs are located in non-coding regions of the genome, for example in promoter regions, enhancers, or even in non-coding RNA genes [1] and [6].”

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2687147/

    If one really, really believes (like Larry Moran) that most of the genome is junk, say 80%, does he really think we can mutate 80% of that junk and still have a viable human?

    The other thing that people don’t appreciate is that the transcriptome is composed of RNAs that have biophysical folding properties. What we may think of as DNA junk may have meaning in the RNA transcripts.

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3073324/

    We can’t judge DNA solely by the DNA and proteins, but also by its effect on RNA structure! The sequences are important to folding, and how are RNAs folded? We know proteins are involved, but are other RNAs involved? This paper highlights the role of proteins, but I suspect some day we’ll see RNAs also being important in helping fold other RNAs, based on the ceRNA hypothesis.

    RNA molecules play a central role in virtually all cellular processes. To exert their function RNAs have to fold into specific three-dimensional structures. The process of folding describes how an RNA molecule undergoes the transition from the unfolded, disordered state to the native, functional conformation. In vitro RNA folding has been intensely studied, mostly using catalytic RNAs as model systems. Measuring formation of the native structure as a function of catalysis provides an immense advantage for investigating ribozyme folding. Hitherto several folding paradigms have been discovered.1–10 In principle, RNA encounters two major folding problems:11 (i) RNA molecules are prone to misfold, thereby becoming trapped in inactive, often long-lived conformations, the escape from which becomes rate-limiting during the folding process; (ii) the native, functional RNA conformation might not be thermodynamically favored over other intermediate structures, thus requiring the assistance of a specific RNA-binding protein (or high salt) for stabilization of the tertiary structure.

    So let’s say 80% of the 130 mutations (130 is the figure Moran himself gives) are function compromising; that’s still 104 mutations that are function compromising. The issue is that the functional compromise of each mutation in isolation might have a very weak effect, like the amount of wear on a car tire when it is driven 10 miles.

    The simplified version of the math is laid out here:
    http://www.uncommondescent.com/genetics/fixation-rate-what-about-breaking-rate/
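    The back-of-envelope arithmetic above (130 new mutations per child, 80% assumed function compromising, selection assumed to purge roughly one per generation) can be sketched as follows. This is purely an illustration of the commenter’s toy argument, not a population-genetics model; every constant is one of the commenter’s assumptions:

```python
# Toy accumulation model for the comment's back-of-envelope argument.
# Every constant here is the commenter's assumption, not established data:
#   - 130 new mutations per child (figure attributed to Larry Moran)
#   - 80% of them assumed to be "function compromising"
#   - selection assumed able to purge about 1 per generation
NEW_PER_GENERATION = 130
FRACTION_COMPROMISING = 0.80
PURGED_PER_GENERATION = 1

def net_load(generations: int) -> float:
    """Net function-compromising mutations accumulated after N generations."""
    gained = NEW_PER_GENERATION * FRACTION_COMPROMISING  # 104 per child
    return (gained - PURGED_PER_GENERATION) * generations

print(net_load(1))    # about 103 net new per generation
print(net_load(100))  # about 10,300 after 100 generations
```

    Under these assumptions the load grows without bound, which is the point the comment is pressing; whether the assumptions hold (especially the 80% figure) is exactly what the replies below dispute.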

  16. GlenDavidson:

    Sykes believes there will be a phase where there will be 10 women competing for each man.

    Well, he can dream.

    Hmm. Doesn’t sound like a pleasant dream for the man (in my sexist opinion) because I extrapolate from the fact that men in modern society are increasingly useless anyways for the typical “male” advantage of larger size and strength. As women are emerging from millennia of oppression, and with the help of technology which we have invented, we are finding we don’t miss “having a man around the house”.

    Sex for fun? Well, yeah, that would be sadly missed. But sex with a man is always less reliable in terms of female pleasure, so you guys won’t be missed quite as much as you wish you would be.

  17. hotshoe_
    Hmm. Doesn’t sound like a pleasant dream for the man (in my sexist opinion) because I extrapolate from the fact that men in modern society are increasingly useless anyways for the typical “male” advantage of larger size and strength. As women are emerging from millennia of oppression, and with the help of technology which we have invented, we are finding we don’t miss “having a man around the house”.

    Sex for fun? Well, yeah, that would be sadly missed. But sex with a man is always less reliable in terms of female pleasure, so you guys won’t be missed quite as much as you wish you would be.

    Where’s the evolutionary perspective?

    Oh, right, when it comes to channeling sexist bullshit, no science needed.

    Glen Davidson

  18. Although Sykes is undoubtedly cleverer than me, I’m not so sure of his evolutionary grasp. There seems to be a huge inconsistency between these two statements, made within a couple of paragraphs of an interview with “Mr DNA” himself.

    By my estimate, in about 5,000 generations — 125,000 years — male fertility will be roughly 1 percent of what it is now. Mutations in Y chromosomes are already known to reduce male fertility. So I see a slow decline in men’s fertility until, eventually, men can no longer breed naturally.

    yet…

    On purely genetic grounds, I never liked the idea of a ”gay gene” since it is very hard to see how such a gene could have survived and spread among our ancestors since it is bound to have been eliminated if homosexual men had fewer children than their heterosexual contemporaries.

    How do genes reducing male fertility spread, where a ‘gay gene’ cannot?

  19. stcordova,

    So let’s say 80% of the 130 mutations (130 is the figure Moran himself gives) are function compromising, that’s still 104 mutations that are function compromising. The issue is that the functional compromise of each mutation in isolation might have very weak effect like the amount of wear on a tire on car when it is driven 10 miles.

    Rather than ‘maybe the data will eventually prove me right’, can you point to a single function-compromising mutation anywhere, other than clearly pathological ones? There are 104 in each of us; they should not be hard to find.

    ENCODE found 80% of DNA gave a result in an array of assays. That is a long way from being functional – DNA has to interact with cellular machinery on a grand scale in order to determine what to process. Very few people (beyond Creationists) support ENCODE’s definition of function.

  20. Had a lot spare, did he … ?

    Yes, tragedy. Boy you have a clever way of rubbing it in too, I didn’t think of it that way. 🙂 A sharp wit you have. Touche. Ouch.

    3rd edition (at least).

    I think he wants me to distribute them. I’m giving you some of the stuff in this discussion for free. The stuff about equivocation and abuse of language however was not in his book.

    However, there is a book by MIT cognitive scientist Jerry Fodor (with Massimo Piattelli-Palmarini) that explores Darwinian theory in terms of its abuse of language:

    http://www.amazon.com/gp/product/B00779MVMM/ref=dp-kindle-redirect?ie=UTF8&btkr=1

    What Darwin Got Wrong is a remarkable book, one that dares to challenge the theory of natural selection as an explanation for how evolution works—a devastating critique not in the name of religion but in the name of good science. Combining the results of cutting-edge work in experimental biology with crystal-clear philosophical arguments, Fodor and Piattelli-Palmarini mount a reasoned and convincing assault on the central tenets of Darwin’s account of the origin of species. This is a concise argument that will transform the debate about evolution and move us beyond the false dilemma of being either for natural selection or against science.

    The writing style was downright awful, but I think it has some good points whenever I could actually understand what Fodor was saying!

  21. The issue on the Y chromosome seems at odds with its history. It’s been around for 166-300 million years. And it’s only got 125,000 years left? Presumably that’s across all species using the SRY system, a large chunk of Mammalia. I am very doubtful.

    125,000 years is, incidentally, the coalescence time to Y chromosome Adam, from Sykes’s own work. Backwards is a much smaller population than forwards. The Last Y must already be well on its way. And yet, how can it be?

  22. A bit unfortunate in a book attacking abuse of language!

    What did I say about you having a sharp wit! Touche again. Ouch.

  23. hotshoe_: Hmm. Doesn’t sound like a pleasant dream for the man (in my sexist opinion) because I extrapolate from the fact that men in modern society are increasingly useless anyways for the typical “male” advantage of larger size and strength. As women are emerging from millennia of oppression, and with the help of technology which we have invented, we are finding we don’t miss “having a man around the house”.

    I, for one, welcome that technology. H/T Churchill.

  24. hotshoe_: Sex for fun? Well, yeah, that would be sadly missed. But sex with a man is always less reliable in terms of female pleasure, so you guys won’t be missed quite as much as you wish you would be.

    All you need is a boy and his dog.

  25. Dave Carlson: the concern then is that unique aspects of human culture, religion, and other social interactions with well intentioned short-term benefits will eventually lead to the long-term genetic deterioration of the human gene pool.

    Dave Carlson quoting Michael Lynch.

    Hmm. Touchy subject.

  26. Restoring the context of the question I answered…

    colewd:

    Is there a reason that we are ignorant to the cause of so many inflection points in evolution OOL Eukaryotic Multi cellular etc?

    Yes: The earliest signs of life on Earth date back to around 3.5 billion years ago.

    The age of the universe is 13.7 billion years, yet we can create mathematical models of its origin and expansion.

    I apologize; I thought that a person as interested in “why is that?”-type questions as you might look at the bare assertion of “it’s old” and go on to think about why the age of Event X might be relevant to the question of why we are or are not ignorant of the causes of that Event X. I see that I was wrong about that.

    We don’t know exactly when any of those “many inflection points in evolution” occurred, but of the three “inflection points” you made specific reference to, the evidence we have suggests that one (the origin of life on Earth) occurred at least 3.5 gigayears (billions of years) ago; a second (the origin of eukaryotes) occurred somewhere between 1.6 gigayears ago and 2.1 gigayears ago; and a third (origin of multicellular life) occurred somewhere between 3 gigayears ago and 3.5 gigayears ago. I don’t know what other “inflection points” you might have had in mind when you mentioned “so many inflection points in evolution”, but looking at the three “inflection points” you did mention, we’re talking about events which occurred billions of years ago, okay?

    A billion years is a long time. And the Earth is a pretty active environment, what with chemical reactions and earthquakes and hurricanes and volcanoes and lightning and wildfires and all. Whatever evidence may have been left by any Event X which occurred billions of years ago, the fact that it did occur billions of years ago means that there has been billions of years’ worth of opportunities for earthquakes, hurricanes, etc etc, to have mangled that evidence to the point where we contemporary humans cannot recognize it any more.

    So for any particular piece of gigayears-old evidence that one might be interested in, there’s a fairly high probability that that particular piece of evidence just isn’t there any more, on account of it’s suffered one of the many destructive fates which the Earth’s environment provides (i.e., it got burned up by lava, or it’s irrecoverably crushed under miles and miles of rock, etc). It’s possible that that particular piece of evidence might actually have managed to survive all those billions of years, sure, but it sure ain’t likely. Oh, and even if that particular piece of evidence has survived to the present day, there’s no guarantee that it will now be situated in such a way that we humans can actually get at it…

    And that is the “reason that we are ignorant to the cause of so many inflection points in evolution OOL Eukaryotic Multi cellular etc”: Too much of the evidence for those “inflection points” just isn’t available for study at this time.

    I have deliberately declined to address your side-note about the age of the universe, as it really isn’t all that relevant here.

    Perhaps you can firm up your definition, so that there’s some way to tell the difference between a “grey swan” and a “promising area of research”?

    If you call a grey swan a potentially promising area of research we have agreement.

    Then why bother with that ‘grey swan’ metaphor in the first place? Yes, there are questions we don’t have answers to at the present time. So what? What’s the point of calling them ‘grey swans’ rather than ‘unanswered questions’? Why are you so enamored of that specific phrase? Are there any scientists who would only investigate an area of research if someone calls it a “grey swan” rather than an “unanswered question” or “promising area of research”? Somehow, I doubt it…

  27. Alan Fox,

    Hmm. Touchy subject.

    Far too many biologists have hitched their wagon to eugenics. I’m not saying Lynch does. But people who know how unpredictable and non-obvious evolutionary trajectories are should know better than to think they can defeat Orgel’s Rule. Eugenics in dogs and horses is producing all manner of defects at an accelerated rate. More well-meant tinkering would not be an answer to the consequences of well-meant tinkering.

  28. Allan Miller: Far too many biologists have hitched their wagon to eugenics. I’m not saying Lynch does. But people who know how unpredictable and non-obvious evolutionary trajectories are should know better than to think they can defeat Orgel’s Rule.

    Indeed. The fact we only have a limited understanding of how variation in the genome results in morphological (and behavioural) differences in the phenotype should caution us against unintended results.

    Eugenics in dogs and horses is producing all manner of defects at an accelerated rate.

    Not to mention the banana.

    More well-meant tinkering would not be an answer to the consequences of well-meant tinkering.

    “Evolution is cleverer than you are.”

  29. Eugenics in dogs and horses is producing all manner of defects at an accelerated rate.

    Didn’t know that. I was aware of the inbreeding problem associated with livestock eugenics.

    Eugenics = WEASEL on steroids with real genomes

    I suspect part of the problem is the inadvertent fixation of bad mutations plus making all the bad become homozygous to boot.

    The irony is that by forcing selection on some traits, it interferes with selection on others, and hence the bad get fixed into the gene pool permanently by neutral drift. WEASEL doesn’t model the problem of selection interference.

    Example of selection interference: One individual is fast but dumb, the other is smart but slow. Lion eats the smart guy; the dumb fast guy lives. That’s one reason why WEASEL on steroids doesn’t work when dealing with real genomes — the sheer vastness of traits that can create selection interference.

    Increase in selection pressure may result in bottlenecks that fix undesirable malfunctions.
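    The interference point can be illustrated with a small sketch (my own toy model, not WEASEL or the ANNIHILATOR program): strong truncation selection on one trait shrinks the pool of breeders, so an unrelated locus drifts as if the population were tiny, and alleles there, good or bad, can fix by accident.

```python
import random

def drift_to_absorption(n_parents, p0=0.5, max_gen=10_000):
    """Track an allele that is invisible to selection: only n_parents
    individuals (the survivors of truncation selection on some OTHER
    trait) contribute to each generation, so the allele drifts in an
    effective population of n_parents until it is lost or fixed."""
    p = p0
    for _ in range(max_gen):
        if p in (0.0, 1.0):
            break
        # binomial sampling of the next generation's allele frequency
        p = sum(random.random() < p for _ in range(n_parents)) / n_parents
    return p

random.seed(42)
# The census size might be 2000, but if only the top 2% on the selected
# trait breed, the unselected locus sees an effective population of ~40:
outcome = drift_to_absorption(n_parents=40)
print("allele frequency after drift:", outcome)
```

    With so few breeders the allele is typically lost or fixed within a few hundred generations, regardless of its own effect on fitness; that is fixation-by-interference in miniature.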

    Here is an abstract by the way, not directly related, but posting it before I forget:

    http://www.ncbi.nlm.nih.gov/pubmed/7475094

    Abstract

    It is well known that when s, the selection coefficient against a deleterious mutation, is below approximately 1/(4Ne), where Ne is the effective population size, the expected frequency of this mutation is approximately 0.5, if forward and backward mutation rates are similar. Thus, if the genome size, G, in nucleotides substantially exceeds the Ne of the whole species, there is a dangerous range of selection coefficients, 1/G < s < 1/(4Ne). Mutations with s within this range are neutral enough to accumulate almost freely, but are still deleterious enough to make an impact at the level of the whole genome. In many vertebrates Ne ≈ 10^4, while G ≈ 10^9, so that the dangerous range includes more than four orders of magnitude. If substitutions at 10% of all nucleotide sites have selection coefficients within this range with the mean 10^−6, an average individual carries approximately 100 lethal equivalents. Some data suggest that a substantial fraction of nucleotides typical to a species may, indeed, be suboptimal. When selection acts on different mutations independently, this implies too high a mutation load. This paradox cannot be resolved by invoking beneficial mutations or environmental fluctuations. Several possible resolutions are considered, including soft selection and synergistic epistasis among very slightly deleterious mutations.

    “Several possible resolutions are considered, including soft selection and synergistic epistasis among very slightly deleterious mutations.” These scenarios fail experimentally in bottleneck scenarios, as evidenced by problems in dog eugenics. Selection interference overwhelms the other possible mechanisms.
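    As a quick numerical check, the “dangerous range” in the abstract is easy to verify with its own vertebrate figures (Ne ≈ 10^4, G ≈ 10^9):

```python
import math

# Bounds of Kondrashov's "dangerous range" 1/G < s < 1/(4Ne),
# using the abstract's own vertebrate figures:
G, Ne = 1e9, 1e4
lower, upper = 1 / G, 1 / (4 * Ne)
span = math.log10(upper / lower)
print(f"dangerous range: {lower:.0e} < s < {upper:.1e}")
print(f"width: {span:.1f} orders of magnitude")
```

    The width comes out a bit above 4.4 orders of magnitude, matching the abstract’s “more than four orders of magnitude.”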

    Conceptual GAs written by software developers don’t model the problems of biological GAs. That’s why the ANNIHILATOR model is closer to reality than WEASEL.

    Creator God is smarter than mindless evolution.

  30. Sal, you can be depended on to keep putting your hand in the fire, regardless of how many times burned.

  31. stcordova

    Creator God is smarter than mindless evolution.

    Evolution is a whole lot smarter than Sal Cordova.

    I see Sal absolutely refuses to deal with the real world data that show extant genomes aren’t “degraded” from the genomes recovered from paleolithic animals. He keeps avoiding the real world data that fast breeding populations like mice haven’t gone extinct but are doing just fine with their “degraded” genome.

    This is the guy who wants to teach his lies to children.

  32. stcordova,

    I think Kondrashov may have forgotten the Law of Large Numbers. One mutation may behave as effectively neutral because Ne is small, and so drift to fixation. Then another, then another, then … hang on – this can’t be right indefinitely. You are increasing the number of trials. That must have an effect on variance.

    Suppose all new mutations had a selection coefficient of −0.00001. It can’t be right (or doesn’t seem so to me) that 100 such mutations hitting a single population of Ne of x, in which 0.00001 was selectively neutral, would behave substantially differently to a single mutation with that selection coefficient in a large population which is 100 times bigger.

    Once fixed, such a ‘nearly neutral’ detrimental mutation is at higher risk than the truly neutral over the long term for similar reasons.

  33. Allan Miller: Once fixed, such a ‘nearly neutral’ detrimental mutation is at higher risk than the truly neutral over the long term for similar reasons.

    The observed fact is that species do not go extinct due to genetic entropy.

    Not bacteria, not mice, not crocodiles, not algae, not cockroaches. Nothing.

    One can build models to explain why, but the evidence demands an explanation.

    Attempting to argue that a mountain does not exist because it doesn’t appear on the map is simply insane.

  34. that 100 such mutations hitting a single population of Ne of x, in which 0.00001 was selectively neutral, would behave substantially differently to a single mutation with that selection coefficient in a large population which is 100 times bigger.

    To clarify notation, Kondrashov’s paper defines “nearly neutral” as:

    |s| < 1/(4Ne)

    For smaller and smaller populations, |s| can be bigger and bigger and still be nearly neutral. 100 deleterious mutations with s = -0.000001 have a much higher probability of getting accidentally fixed in a small population than a single mutation with s = -0.00001 in a population 100 times larger, since the probability of fixation of a neutral is 1/(2Ne), and it will be even more improbable because s is deleterious, so I think the single mutation is at least 10,000 times more improbable to fix than 1 of 100 mutations in a small population.
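    Claims like this can be checked against Kimura’s diffusion approximation for the fixation probability of a new mutant. This is a sketch under the usual Wright–Fisher assumptions (treating the census size as Ne); it is not taken from Kondrashov’s paper:

```python
import math

def p_fix(N, s):
    """Kimura's diffusion approximation for the fixation probability of a
    single new mutant (initial frequency 1/(2N)) with selection coefficient s."""
    if s == 0:
        return 1 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

s = -1e-5
for N in (1e4, 1e6):
    # For N = 1e4, |s| < 1/(4N): nearly neutral, P_fix stays close to 1/(2N).
    # For N = 1e6, |s| >> 1/(4N): selection "sees" the mutation, P_fix collapses.
    print(f"N = {N:.0e}: P_fix = {p_fix(N, s):.2e}")
```

    For the small population the result is close to the neutral 1/(2N); for the 100-fold larger population it is astronomically smaller, so “at least 10,000 times” is, if anything, a large understatement.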

    “Nearly neutral” has various definitions:

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2707937/

    Given the importance of his contribution, it is not an exaggeration to say that Kimura was the most important evolutionary biologist since Darwin. What is perhaps surprising in Kimura’s case, given the impact of his work on the biological sciences, is that its significance has so far been little appreciated by the general educated public or by philosophers and historians of ideas. To take one example, a recent book entitled Evolution: the History of an Idea (Larson 2004) includes not a single mention of Kimura. To my mind, this is rather like writing a history of physics without mentioning Einstein.

    The philosopher Daniel Dennett, in his book Darwin’s Dangerous Idea, proposes that Darwin’s key (and “dangerous”) insight was that evolution is an “algorithmic process” (Dennett 1995). By “algorithmic,” one gathers that Dennett means essentially deterministic. But determinism was hardly a bold or “dangerous” idea in Darwin’s time, having been a familiar concept in Western thought since at least the Stoics. Rather, one might suggest that the truly new idea in evolutionary biology is that of Kimura (building on the work of Sewall Wright), which along with Heisenberg’s Uncertainty Principle and Gödel’s proof of the incompleteness of mathematics, formed part of a Twentieth Century revolution in thought that for the first time revealed the universe as non-algorithmic.

    One prediction of the Neutral Theory is that the effectiveness of natural selection depends on the effective population size. Kimura (1983) suggested that the behavior of an allele is controlled mainly by genetic drift when its relative advantage or disadvantage, measured by the selection coefficient (s), is less than twice the reciprocal of the effective population size (Ne); i.e. |s| < 1/(2Ne). Such an allele is referred to as “almost neutral” or “nearly neutral.” Li (1978) proposed a more relaxed definition of near-neutrality; namely, when |s| < 1/Ne. Nei (2005), taking into account the random variation among individuals with respect to numbers of offspring, proposed |s| < 1/√2Ne as a statistical definition of neutrality. Whichever of these criteria is more appropriate, the theory predicts that, in a very small population, genetic drift becomes such a powerful force that natural selection (whether positive or purifying) cannot overcome it unless the selection is very strong.

    WEASEL doesn’t model these issues. ANNIHILATOR does.

  35. stcordova: WEASEL doesn’t model these issues. ANNIHILATOR does.

    Do you have a point?

    WEASEL doesn’t model very much. One cannot derive much from WEASEL except that cumulative selection works.

    Your problem is that ANNIHILATOR apparently does something equally limited.

    Are you ever going to address your problem with bacteria, algae, crocodiles, cockroaches and such?

  36. stcordova,

    I think the single mutation is at least 10,000 times more improbable to fix than 1 of 100 mutations in a small population.

    Yes, but you missed the point. One mutation is more likely to fix in the smaller population, bringing the population a notch down. But you can’t just multiply this up, so 100 such ‘effectively neutral’ small mutations are collectively going to fix with the same probability. By doing 100 trials of separate mutations with the same coefficient, below the nearly neutral threshold, the entirety of that collection is not going to behave as effectively neutral, because of the LLN. You cannot multiply up an approximation.

    And it’s not as if nothing beneficial ever happens. You just count the things that support your thesis, and ignore those that don’t. Effectively neutral detrimental mutations can still be purged from the population by selection. It’s population dynamics.

  37. But you can’t just multiply this up, so 100 such ‘effectively neutral’ small mutations are collectively going to fix with the same probability.

    But no one is claiming this — not me, not Kondrashov.

    For neutrals, the fixation rate equals mutation rate independent of population size. See the derivation here:

    https://en.wikipedia.org/wiki/Fixation_%28population_genetics%29#Probability_of_fixation

    (2N mu) × (1/(2N)) = mu

    For nearly neutral mutations that are deleterious, IIRC, it’s about (1/2) mu.
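    Both figures can be checked by combining the number of new mutations per generation, 2N·mu, with Kimura’s fixation probability. This is my own sketch using the standard diploid formula, not anything from the linked page:

```python
import math

def sub_rate_over_mu(N, s):
    """Substitution rate K relative to the mutation rate mu:
    K = 2*N*mu * P_fix, so K/mu = 2*N * P_fix (Kimura's formula)."""
    if s == 0:
        return 1.0                      # 2N * 1/(2N): K = mu, independent of N
    return 2 * N * (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

for N in (1_000, 100_000):
    s = -1 / (4 * N)                    # right at the nearly-neutral boundary
    print(f"N = {N}: K/mu = {sub_rate_over_mu(N, s):.3f}")
```

    At the |s| = 1/(4Ne) boundary, K/mu comes out near 1/(e - 1) ≈ 0.58 regardless of N, which is roughly the “about 1/2 mu” figure recalled above.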

    If mu_bad = 100 for humans, that’s on the order of 100 that would fix in a small population per generation. But that’s not the only problem: even if most of them get purged out by drift, for every bad mutation that is purged, new ones emerge. The net result is that each generation has on the order of 100 function-compromising mutations on average per individual, whether those mutations fix or not.

    Finally, given the size and geographical spread of human populations, how many “beneficials” do you think are going to fix for the human race, on average, per person per generation? I think zero is a good estimate.

  38. stcordova,

    The net result is that each generation has on the order of 100 function-compromising mutations on average per individual, whether those mutations fix or not.

    I’ve seen you make this claim (or something similar) a couple times, but I’ve missed the evidence for it. Could you please repost or link to it?

  39. a new mutation that eventually fixes will spend an average of 4Ne generations as a polymorphism in the population.

    https://en.wikipedia.org/wiki/Fixation_(population_genetics)#Probability_of_fixation

    Human Ne size around 1 billion give or take. Generation time, say 20 years.

    Time to fixation of neutrals is on the order of 4Ne generations = 4 billion generations = 80 billion years.

    What it is for selectively favored, I don’t know, but it can’t be any time soon.

    Given the mutation rates, in 80 billion years so many nucleotide positions will be mutated that it is pointless to even think of fixation on those time scales!

  40. Dave Carlson:
    stcordova,

    I’ve seen you make this claim (or something similar) a couple times, but I’ve missed the evidence for it. Could you please repost or link to it?

    I know where he gets the number. It’s from Larry Moran.

    How he managed to misunderstand Moran and miss everything associated with Moran’s argument, is up for grabs.

    stcordova: Human Ne size around 1 billion give or take. Generation time, say 20 years.

    Time to fixation of neutrals is on the order of 4Ne generations = 4 billion generations = 80 billion years.

    What it is for selectively favored, I don’t know, but it can’t be any time soon.

    Given the mutation rates, in 80 billion years so many nucleotide positions will be mutated that it is pointless to even think of fixation on those time scales!

    Sal, the effective population size of humans is vastly smaller than 1 billion people. It’s on the order of 10,000 or slightly less. I don’t have time to dig up a citation for that at the moment, but there is a rather large literature on the topic that you can look up quite easily.
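    For what it’s worth, redoing Sal’s fixation-time arithmetic above with that literature Ne (and keeping his 20-year generation time) gives a very different answer:

```python
# Same 4*Ne-generations calculation, but with Ne ~ 10,000 rather than 1 billion:
Ne, gen_years = 10_000, 20
t_fix_years = 4 * Ne * gen_years       # mean fixation time of a neutral allele
print(f"{t_fix_years:,} years")        # 800,000 years, not 80 billion
```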
