The Reasonableness of Atheism and Black Swans

As an ID proponent and creationist, the irony is that at the time in my life when I have the greatest faith in ID and creation, it is also the time when, at some level, I wish it were not true. I have concluded that if the Christian God is the Intelligent Designer, then He also makes the world a miserable place by design, that He has cursed this world because of Adam’s sin. See Malicious Intelligent Design.

Jesus prophesied of the intelligently designed outcome of humanity: “wars and rumors of wars … famines … pestilence … earthquakes.” If there is nuclear and biological weapons proliferation, overpopulation, and destruction of natural resources in the next 500 years or less, things could get ugly. If such awful things are Intelligently Designed into the trajectory of planet Earth, then on some level I think it would almost be merciful if the atheists are right….

The reason I feel so much kinship with the atheists and agnostics at TSZ and elsewhere is that I share and value the skeptical mindset. Gullibility is not a virtue; skepticism is. A personal friend of Richard Dawkins was the professor and mentor who lifted me out of despair when I was about to flunk out of school. Another professor, James Trefil, who has spent some time fighting ID, has been a mentor and friend. All to say, atheists and people of little religious affiliation (like Trefil) have been kind and highly positive influences on my life, and I thank God for them! Thus, though I disagree with atheists and agnostics, I find the wholesale demonization of their character highly repugnant — it’s like trash talking my mentors, friends and family.

I have often found more wonder and solace in my science classes than I have on many Sunday mornings being screamed at by not-so-nice preachers. So despite my many disagreements with the regulars here, because I’ve enjoyed the academic climate in the sciences, I feel somewhat at home at TSZ….

Now, on to the main point of this essay! Like IDist Mike Gene, I find the atheist/agnostic viewpoint reasonable for the simple reason that most people don’t see miracles or God appearing in their everyday lives, if not their entire lives. It is as simple as that.

Naturalism would seem to me, given most everyone’s personal sample of events in the universe, to be a most reasonable position. The line of reasoning would be: “I don’t see miracles, I don’t see God; by way of extrapolation, I don’t think miracles and God exist. People who claim God exists must be mistaken or deluded or something else.”

The logic of such a viewpoint seems almost unassailable, and I nearly left the Christian faith 15 years ago when such simple logic was not really dealt with by my pastors and fellow parishioners. I had to re-examine such issues on my own, and the one way I found to frame the ID/Creation/Evolution issue is by arguing for the reasonableness of Black Swan events.

I will use the notion of Black Swans very loosely. The notion is stated here, and is identified with a financier and academic by the name of Nassim Taleb. I have Taleb’s book on trading entitled Dynamic Hedging, which is considered a classic monograph in mathematical finance. His math is almost impenetrable! He is something of a Super Quant. Anyway:

https://en.wikipedia.org/wiki/Black_swan_theory

The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight.

The theory was developed by Nassim Nicholas Taleb to explain:

1. The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.
2. The non-computability of the probability of the consequential rare events using scientific methods (owing to the very nature of small probabilities).
3. The psychological biases that blind people, both individually and collectively, to uncertainty and to a rare event’s massive role in historical affairs.

Unlike the earlier and broader “black swan problem” in philosophy (i.e. the problem of induction), Taleb’s “black swan theory” refers only to unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences.[1] More technically, in the scientific monograph Silent Risk, Taleb mathematically defines the black swan problem as “stemming from the use of degenerate metaprobability”.[2]
….
The phrase “black swan” derives from a Latin expression; its oldest known occurrence is the poet Juvenal’s characterization of something being “rara avis in terris nigroque simillima cygno” (“a rare bird in the lands and very much like a black swan”; 6.165).[3] When the phrase was coined, the black swan was presumed not to exist. The importance of the metaphor lies in its analogy to the fragility of any system of thought. A set of conclusions is potentially undone once any of its fundamental postulates is disproved. In this case, the observation of a single black swan would be the undoing of the logic of any system of thought, as well as any reasoning that followed from that underlying logic.

Juvenal’s phrase was a common expression in 16th century London as a statement of impossibility. The London expression derives from the Old World presumption that all swans must be white because all historical records of swans reported that they had white feathers.[4] In that context, a black swan was impossible or at least nonexistent. After Dutch explorer Willem de Vlamingh discovered black swans in Western Australia in 1697,[5] the term metamorphosed to connote that a perceived impossibility might later be disproven. Taleb notes that in the 19th century John Stuart Mill used the black swan logical fallacy as a new term to identify falsification.[6]

The very first question I looked at when I was having bouts of agnosticism was the question of the origin of life. Now looking back, the real question being asked is: was OOL a long sequence of typical events or a black swan sequence of events? Beyond OOL, one could go on to the question of biological evolution. If we assume Common Descent or Universal Common Ancestry (UCA), would evolution, as a matter of principle, proceed by typical events, by black swan events, or by a mix of the two? (The stock market follows patterns of typical events punctuated by black swan events.)

If natural selection is the mechanism of much of evolution, does the evolution of the major forms (like prokaryote vs. eukaryote, unicellular vs. multicellular, etc.) proceed by typical or black swan events?

[As a side note, when there is a Black Swan stock market crash, it isn’t a POOF, but a sequence of small steps adding up to an atypical set of events. Black Swan doesn’t necessarily imply POOF, but it can still be viewed as a highly exceptional phenomenon.]

Without getting into the naturalism vs. supernaturalism debate, one could at least make statements whether OOL, eukaryotic evolution (eukaryogenesis), multicellular evolution, evolution of Taxonomically Restricted Features (TRFs), Taxonomically Restricted Genes (TRGs), proceeded via many many typical events happening in sequence or a few (if not one) Black Swan event.

I personally believe, outside of the naturalism vs. supernaturalism debate, that as a matter of principle, OOL, eukaryogenesis, and the emergence of multicellularity (especially animal multicellularity) must have transpired via Black Swan events. Why? Because of the proverbial chicken-and-egg paradox, which has been reframed in various incarnations and supplemented with notions such as Irreducible Complexity or Integrated Complexity or whatever. Behe is not alone in his notions of this sort of complexity; Andreas Wagner and Joe Thornton use similar language, even though they think such complexity is bridgeable by typical rather than Black Swan events.

When I do a sequence lookup at the National Institutes of Health (NIH) National Center for Biotechnology Information (NCBI), it is very easy to see the hierarchical patterns that would, at first glance, confirm UCA! For example, look at this diagram of Bone Morphogenetic Proteins (BMP) to see the hierarchical patterns:

BMP

From such studies, one could even construct Molecular Clock Hypotheses and state hypothesized rates of molecular evolution.
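The clock idea in that sentence is simple arithmetic: if two lineages split T years ago, substitutions accumulate along both branches, so an observed per-site divergence d implies a rate of roughly d / (2T). A toy sketch — the sequences and the divergence time below are invented for illustration, not real BMP data:

```python
# Toy molecular-clock estimate. Both sequences and the divergence time
# are made up for illustration; they are not real BMP data.
seq_a = "ATGGCCATTGTAATGGGCCGCTG"
seq_b = "ATGGCCATCGTGATGGGCCGTTG"

diffs = sum(a != b for a, b in zip(seq_a, seq_b))  # count differing sites
d = diffs / len(seq_a)        # observed per-site divergence
T = 100e6                     # assumed divergence time, in years
rate = d / (2 * T)            # substitutions per site per year (two branches)
```

With a rate in hand, one can then turn the clock around and date other splits from their observed divergences, which is exactly the kind of hypothesis the sentence above describes.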

The problem however is that even if some organisms share many genes, and even if these genes can be hierarchically laid out, there are genes that are restricted only to certain groups. We might refer to them as Taxonomically Restricted Genes (TRGs). I much prefer the term TRG over “orphan gene”: orphan genes are not well defined, they are only a subset of TRGs, and some of them seem to emerge without the necessity of Black Swan events. I also coin the notion of a Taxonomically Restricted Feature (TRF), since I believe many heritable features of biology are not solely genetic but have heritable cytoplasmic bases (like post-translational modifications of proteins).

TRGs and TRFs sort of just poof onto the biological scene. How would we calibrate the molecular clock for such features? It goes “from zero to sixty” in a poof.

Finally, on the question of directly observed evolution, it seems to me that evolution in the present is mostly of the reductive and exterminating variety. Rather than Dawkins’ Blind Watchmaker, I see a Blind Watch Destroyer. Rather than natural selection acting in cumulative modes, I see natural selection acting in reductive and exterminating modes in the present day, in the lab and in the field.

For those reasons, even outside the naturalism vs. supernaturalism debate, I would think a reasonable inference is that many of the most important features of biology did not emerge via large collections of small typical events but rather via some Black Swan process in the past, not by any mechanisms we see in the present. It is not an argument from incredulity so much as a proof by contradiction.

If one accepts the reasonableness of Black Swan events as the cause of the major features of biology, it becomes possible to accept that these were miracles, and if there are miracles, there must be a Miracle Maker (aka God). But questions of God are outside science. However, I think the inference to Black Swan events for biology may well be science.

In sum, I think atheism is a reasonable position. I also think that biological emergence via Black Swan events is a highly reasonable hypothesis, even though we don’t see such Black Swans in everyday life. The absence of such Black Swans is not necessarily evidence against them, especially if a Black Swan would bring coherence to the trajectory of biological evolution in the present day. That is to say, it seems to me things are evolving toward simplicity and death in the present day; ergo, some mechanism other than what we see with our very own eyes was the cause of OOL and of the bridging of major gaps in the taxonomic groupings.

Of course such a Black Swan interpretation of biology may have theological implications, but formally speaking, I think inferring Black Swan hypotheses for biology is fair game in the realm of science to the extent it brings coherence to real-time observations in the present day.

775 thoughts on “The Reasonableness of Atheism and Black Swans”

  1. The fact that crocodiles and cockroaches (not to mention bacteria) have not succumbed to genetic entropy suggests the model is incomplete.

  2. stcordova,

    I was talking about purifying selection of deleterious mutations, not neutral fixation. Do you agree that the population is much much bigger now than a few thousand years ago, and it’s a lot easier to pass genes around the globe? Because that means that selection has been more and more able to deal with the few deleterious mutations that never get fixed in a sea of fitter alleles.

    Aren’t you happy that you don’t need to worry about our eternal damnation anymore?

    BTW, if you think nowadays things are screwed up, how come the Noah family made it past mutational meltdown? Or even worse… Adam and Eve?

  3. colewd:

    The last line of Dr Moran’s argument is interesting. He is admitting that the genome will degrade with time which is exactly your hypothesis. What he is saying is that the degradation is slower due to non coding DNA being more likely neutral. If it is slowly degrading then information is consistently being lost but at a slower rate. Does this support your black swan thesis?

    Yes.

    And I’ve argued Moran is wrong to model only 2% as functional. For example, some of the so-called junk DNA consists of introns, which are in the ballpark of 25% of the genome (or whatever the figure is). I attach a paper below that I think falsifies Larry’s views, or at least shows promise in doing so. The work was funded in large part by the NIH’s $288 million ENCODE project. Larry hates ENCODE. 🙂

    If there is junk in the genome, it’s because the functions have already been compromised. According to PNAS (Proceedings of the National Academy of Sciences) and other journals, 90% of the Single Nucleotide Polymorphisms (SNPs), each of which is but a single misspelling of a single DNA letter, that are associated with heritable diseases appear in the non-coding regions. These were determined by GWAS (Genome Wide Association Studies) projects, many of which the NIH funds.

    PS
    The intron paper is here. It is a tough tough read, but it shows why Larry is wrong.

    Here is the paper on introns, it is the best one I’ve seen so far:

    There are 5 phases in which the spliceosomal intron participates in Eukaryotes, and the paper goes into those. It demonstrates these introns have to be polyconstrained to function in all 5 phases:

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3325483/

    We found it illuminating to divide the life span of an intron into five phases, and to separately refer to the functions that are associated with each phase (Figure 1).

    The first phase is the genomic intron, which is the DNA sequence of the intron.

    The second phase is the transcribed intron, which is the phase in which the intron is under active transcription.

    The third phase is the spliced intron, in which the spliceosome is assembled on the intron and is actively excising it.

    The fourth phase is the excised intron, which is the intronic RNA sequence released upon the completion of the splicing reaction.

    The final phase is the exon-junction complex (EJC)-harboring transcript, which is the mature mRNA in which the location of the exon–exon junctions is marked by the EJC.

    The paper then elaborates on function in each of the 5 intron phases, falsifying a sizeable part of Larry Moran’s assertion that the genome is mostly junk.

  4. Here we go again. Sal has “proved” that bumblebees cannot fly. So how about all these flying bumblebees we see? Aha, THEREFORE we’re seeing a miracle.

  5. Allan Miller,

    Hi Allan
    Thanks for the explanation. Can you explain a process by which recombination and selection can move a portion of DNA toward a new function? One possible scenario that I have heard of is the mutation of a DNA copy of an existing gene. Is there a limit to the amount of mutations before the code starts to degrade and move to junk?

  6. Why would there be a limit?

    The short answer is that Lenski has been testing this kind of hypothesis for decades in a lab population, and the genome continues to have neutral changes at a constant rate with no degradation of function. In fact, the rate of reproduction has slowly risen, indicating increased adaptation to the relatively hostile environment.

  7. I would say that the ID hypothesis is not useful.

    I don’t believe it’s true, but my belief is just an opinion.

    That ID does not suggest any useful research is more than just an opinion. It’s backed up by the fact that ID journals come and go without ever publishing any research that does more than push god-of-the-gaps reasoning.

    Squirrels don’t mutate into birds in six months, therefore insurmountable gaps.

    More telling is the fact that when other sciences are confronted with insufficient funds, they at least make proposals for research projects. Some of them are pie in the sky, but they are proposals.

    We even have proposals for testing multiverse theories.

  8. dazz: BTW, if you think nowadays things are screwed up, how come the Noah family made it past mutational meltdown? Or even worse… Adam and Eve?

    YECcers like Sal are the most extreme hyper-mutationists. It’s beyond imagination, really, how many mutations would have had to occur in every kind from the instant of creation 10k years ago. In order to be carrying just the observed allele variance in existing species — never mind any hypothetical speciation from “kinds” at the family level (which supposedly fit on the ark) — every individual would have to have tolerated thousands of separate mutations.

    How did the hyper mutation rate – which would have been required to show the “wolf” kind/genus as wolves, coyotes, coy-wolves, teacup poodles and bulldogs – work out? How did it work for even 10k years if mutation is as deleterious (in general) as Sal pretends it is? How can it be both hyper-hyper working and bad at the same time?

    You don’t have to believe in the scientific evidence that the Earth is old enough for evolution to have worked, but you cannot believe that the Earth is young without contradicting yourself somewhere in your own hypotheticals.

    Sal crows about some study which shows LCA in cows 5k years ago. (Sorry, can’t be arsed to look it up). Hooray, hooray, that supports a hypothesis that Noah’s flood was real and really did make a bottleneck in animal populations after the ark-pair! But Sal conveniently ignores the fact that the same scientific method which generated that paper he agrees with also generates papers proving the LCA between us and chimpanzees was 6 million years ago. Why doesn’t Sal believe that?

    Because religion poisons everything, that’s why.

  9. Cows are a domestic breed.

    I’ve seen estimates that most purebred dogs have a common ancestor just several centuries ago.

  10. hotshoe_: Because religion poisons everything, that’s why

    Yeah, makes me cringe to know this guy is teaching “ID” to innocent kids. It’s mind blowing because he’s obviously an intelligent person, I wonder if someday he will regret the permanent damage he’s doing to those kids.

  11. petrushka: Cows are a domestic breed.

    I’ve seen estimates that most purebred dogs have a common ancestor just several centuries ago.

    Well, right, a rational person would expect domestic animals to differ in LCA times from any wild organism. But that has not and will not stop Sal from cherry picking those ancestor-times as support for his fantasy Ark and/or support for his fantasy Young Earth.

    What were you saying about excilience? Excilience, thy mascot is Sal. Everything out of context, or else ignored altogether, that’s the only way to YEC.

  12. stcordova,

    Mutation rate of neutrals is equal to fixation rate of neutrals.

    Only in a steady state panmictic population. I dealt with that in my reply to dazz.
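The steady-state equality being debated here is one line of algebra: each generation, 2Nμ new neutral mutants arise, and each eventually fixes with probability 1/(2N), so the substitution rate is μ no matter the population size. A trivial numeric check (the value of μ below is an arbitrary illustrative choice):

```python
# Why the neutral substitution rate equals the mutation rate in a
# constant-size population: 2*N*mu new neutral mutants arise per
# generation, each fixing with probability 1/(2*N). The product is mu,
# and the population size N cancels out.
mu = 1e-8  # illustrative neutral mutation rate per site per generation

def substitution_rate(N, mu):
    new_mutants_per_gen = 2 * N * mu   # neutral mutations entering the population
    p_fix = 1 / (2 * N)                # fixation probability of one neutral copy
    return new_mutants_per_gen * p_fix

rates = [substitution_rate(N, mu) for N in (100, 10_000, 1_000_000)]
# All three rates come out equal to mu.
```

Allan’s caveat is that this cancellation relies on a steady-state panmictic population; it is exactly the assumption that breaks down in the changing-size cases discussed below.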

  13. stcordova,

    The paper then elaborates on function in each of the 5 intron phases, falsifying a sizeable part of Larry Moran’s assertion that the genome is mostly junk.

    Introns are 25% of the genome. Even if every base of an intron were vital (it isn’t), 25% does not falsify ‘mostly’. Intron sequence can grow, precisely because it is excised, not because every bit matters.

  14. colewd,

    Thanks for the explanation. Can you explain a process by which recombination and selection can move a portion of DNA toward a new function?

    ‘New function’ is somewhat of an extra wrinkle – my arguments were on the general case that mutations are cumulatively degradative and there is nothing any population can do to reverse this trend. Still …

    One possible scenario that I have heard of is the mutation of a DNA copy of an existing gene.

    Yes, I think that’s the favoured hypothesis. One should keep a broad view of mutation, however. It’s not just point mutations. Whole chunks of one protein can be shuffled into others. The portion of genetic space that contains function is tiny. But by recombination of ‘proven’ modules, which fold in one context, different functions can be built from the same basics – the tiny space can be explored from a toehold within it. This is exactly what happens in exon shuffling.

    Is there a limit to the amount of mutations before the code starts to degrade and move to junk?

    Not hard and fast, no, but you could still be talking of a ‘window of opportunity’ measured in the 100’s of thousands to millions of years, given per-base mutation rates. And a piece of sequence can always be ‘rescued’ from degradation by module shuffling. Likewise, a piece of pseudogene (or any other junk) can move into a protein, and make a useful difference.

    There’s constant probing. Failures are barely noticeable; successes appear as if by magic.

  15. petrushka,

    Yep – both intensive selection and inbreeding will produce very recent coalescence (most recent common ancestor) times.

  16. colewd:

    Can you explain a process by which recombination and selection can move a portion of DNA toward a new function?

    Recombination and selection is such a process, exactly and precisely. The recombination creates a new genetic sequence, which may or may not yield a new function, and if it does, said new function may or may not manage to survive the winnowing of selection.

    Yes, it’s something of a crapshoot—but if you want to argue that this particular ‘crapshoot’ is one that absolutely cannot ever have a ‘winner’ (yield a new function), you’re just being silly.

    And if you want to argue that this particular ‘crapshoot’ has such astronomically low odds that it is, as a practical matter, not possible for said ‘crapshoot’ to have a ‘winner’, you’re going to have to demonstrate that the odds really and truly are that astronomically low. Not baldly assert that the odds are that astronomically low; not present a bogus argument which demonstrates nothing other than your inability to construct a valid argument; but, instead, present a valid argument that the odds really are that astronomically low.

    Is there a limit to the amount of mutations before the code starts to degrade and move to junk?

    Are you looking for a hard-and-fast specific figure, a ‘line in the sand’ for an exact number of mutations beyond which DNA code must necessarily “degrade and move to junk”? If you are, I must inform you that there ain’t no such animal.

    Consider the case of point mutations, which alter 1 (one) nucleotide in a genetic sequence; the genetic code being what it is, there’s roughly a 25% chance that any given point mutation will not alter the amino acid sequence which is produced by that genetic sequence. Point mutations of that sort might be called “silent”, and it’s not at all clear that any number of those ‘silent’ point mutations even can “degrade” DNA code.

    On the other hand, there are also point mutations which insert or delete 1 (one) nucleotide in a genetic sequence. Such mutations can seriously alter the amino acid sequence which is produced by that genetic sequence. It’s very possible indeed for even one such mutation to convert a formerly-functional nucleotide sequence into complete junk. But ‘possible’ is not a synonym for ‘mandatory’, you know? I don’t pretend to have an exact figure for the probability that a single-nucleotide insertion/deletion will have the effect of converting a functional genetic sequence into junk—but I am certain that whatever that probability may be, it’s less than 100%.
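The “roughly 25%” figure quoted above can be checked directly against the standard genetic code by enumerating all nine single-nucleotide changes of each of the 64 codons and counting how many leave the encoded amino acid unchanged. This is a rough sketch: it weights all codons and substitutions equally and counts stop-to-stop changes as silent.

```python
from itertools import product

# Standard genetic code, with codons enumerated using bases in the
# order T, C, A, G (the conventional compact table layout).
BASES = "TCAG"
AMINO_ACIDS = ("FFLLSSSSYY**CC*W"
               "LLLLPPPPHHQQRRRR"
               "IIIMTTTTNNKKSSRR"
               "VVVVAAAADDEEGGGG")
CODON_TABLE = {"".join(c): aa
               for c, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

silent = total = 0
for codon, aa in CODON_TABLE.items():
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue  # not a mutation
            mutant = codon[:pos] + base + codon[pos + 1:]
            total += 1
            silent += (CODON_TABLE[mutant] == aa)

fraction = silent / total  # in the ballpark of the ~25% quoted above
```

Most of the silence comes from third-position wobble, which is why the aggregate figure sits near a quarter even though first and second positions are rarely synonymous.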

  17. Allan Miller,

    Only in a steady state panmictic population. I dealt with that in my reply to dazz.

    For anyone interested in the details, try this. It’s a bit math-y – waaaay too math-y for me – but confirms the supposition (which one can reach through intuition by considering extreme cases) that growing populations cannot be fixing alleles in a population-size-independent way.

    “Here we present an analysis of random genetic drift in situations involving neutral fixation in populations of changing size. Such an analysis is required to establish how fundamental results, such as the “4Ne” result for the mean time to fixation, become modified when population size is not constant and whether such results have an impact on neutral effects in a population.”

    Anything that applies to neutral alleles applies with knobs on to mildly deleterious ones.
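The intuition in that passage can be illustrated with a small Wright-Fisher simulation. This is a sketch, not the paper’s model: the population sizes, growth rate, and run lengths below are arbitrary choices. In a constant-size population a neutral allele at frequency 1/2 is typically fixed or lost within a few multiples of the population size, whereas in a growing population drift weakens each generation and many alleles simply stay segregating.

```python
import random

def segregating_fraction(n0, growth, cap, gens, reps, seed=1):
    """Fraction of replicate runs in which a neutral allele starting at
    frequency 0.5 is still segregating (neither fixed nor lost) after
    `gens` generations of Wright-Fisher binomial sampling. `n0` is the
    starting number of gene copies; each generation the copy number is
    multiplied by `growth`, capped at `cap` (growth=1.0 holds it fixed)."""
    rng = random.Random(seed)
    still_segregating = 0
    for _ in range(reps):
        n, count = n0, n0 // 2
        for _ in range(gens):
            p = count / n
            n = min(cap, max(n, int(n * growth)))            # grow (or hold) size
            count = sum(rng.random() < p for _ in range(n))  # binomial draw
            if count in (0, n):
                break  # allele fixed or lost
        else:
            still_segregating += 1
    return still_segregating / reps

# Arbitrary illustrative parameters, not taken from Waxman's paper:
const = segregating_fraction(n0=20, growth=1.0, cap=20, gens=80, reps=100)
grow = segregating_fraction(n0=20, growth=1.3, cap=300, gens=80, reps=100)
# Constant size: nearly all runs absorbed by generation 80.
# Growing size: drift stalls, and most alleles are still segregating.
```

This matches the lower-bound caveat noted above: once the population is large, per-generation drift variance p(1-p)/(2N) collapses, so fixation times are no longer population-size-independent.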

  18. Allan Miller,

    Waxman’s paper (the one you just cited) shows fixation time for neutral alleles in populations that are changing in size. The paper looks OK (aside from misspelling my former officemate Dan Dykhuizen’s name in the references). Of the examples it uses there is little mention of cases like exponential growth of the population size without limit (it does note that in that case the bounds that it can compute on fixation time are only a lower bound).

    A consideration of that case and computation of the mean and variance of a neutral allele, without further mutation, shows that the variance approaches a limit as time goes to infinity. In that case some fraction of such alleles do not reach either fixation or loss. That is consistent with Waxman’s diffusion approximation results but he does not discuss it. It is also a case where Sewall Wright’s classical formula for effective population size, which is the harmonic mean of N(t), goes to infinity as time increases.

    An interesting parallel is the Polya Urn Model, which in effect is a Moran Model of a linearly growing population. It is known that in the Polya Urn Model the frequency approaches an asymptotic distribution with no fixation.
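The no-fixation behaviour of the Polya urn mentioned above is easy to see in a few lines (a sketch; the starting composition and step counts are arbitrary). Each draw adds a ball of the drawn colour, so the colour fraction settles toward a random limit, and neither colour can ever reach frequency 0 or 1 because at least one ball of each colour always remains.

```python
import random

def polya_urn_fraction(steps, rng):
    # Start with one white and one black ball; each draw returns the
    # drawn ball together with one new ball of the same colour.
    white, black = 1, 1
    for _ in range(steps):
        if rng.random() < white / (white + black):
            white += 1
        else:
            black += 1
    return white / (white + black)

rng = random.Random(0)
finals = [polya_urn_fraction(2000, rng) for _ in range(500)]
# No run can fix (each colour keeps at least its original ball), yet the
# final fractions spread across (0, 1): a nondegenerate limit distribution.
```

For the classic two-ball start the limiting fraction is in fact uniformly distributed on (0, 1), which is about as far from fixation as a drift-like process can get.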

  19. Wagner — by assertion — and Lenski — by experiment — suggest that neutral mutation can continue indefinitely in a population.

    Models that predict degradation must be assuming that nothing else is going on.

    The issue regarding the genetic erosion of humans vs. mice vs. bacteria is partly related to the functional genome size.

    The other issue with a multicellular organism is that, because so many different cell types and tissues depend on the same DNA, there must be more polyconstraints or specificity on the DNA as a matter of principle — that is to say, the DNA is far less tolerant of change in a human than in a bacterium. A change in DNA that might not affect the heart might affect the pancreas. Bacteria do not have organs, only organelles, whereas humans have organs, organelles, diverse cell types, developmental stages, etc., so much more can go wrong for slight changes in DNA.

    Hence bacteria can tolerate a lot of change without dying. For example, there is only a mere 20% evolutionary conservation in E. coli, which means 80% variation in its genome is tolerated. I seriously doubt the human species can tolerate 80% variation, especially in light of the fact that 90% of the SNPs associated with heritable diseases are in non-coding regions. This isn’t proof the ncDNA is functional, but a reasonable extrapolation is that we had better be cautious in asserting there cannot be functional compromise at most locations.

    The DNA is also a scaffold and apparently a signalling system for histones and other epigenetic marks, and we really don’t know much about the epitranscriptome except that our sequencing machines are having a hard time detecting the epitranscriptomic modifications.

    Even if every base of an intron were vital (it isn’t), 25% does not falsify ‘mostly’

    It doesn’t have to be vital to be functional. I can drill a hole in a wall without compromising the entire wall, but with enough drilling it will be compromised. The intron looks to be optimized for function, but it can suffer slight compromise of function in a few nucleotides. Such slight losses of function won’t easily get purged from the genome if, for every loss-of-function that drifts out, several more mutants emerge.

    Even if every base of an intron were vital (it isn’t), 25% does not falsify ‘mostly’

    25% is 12.5 times larger than the 2% Larry was arguing from. He cited a figure of 130 mutations per generation per individual; 25% of that is about 32 mutations per individual per generation — way more than the limit of 6 I suggested, the 2 Larry suggested, and the 0.5 Muller suggested.

    And this is just DNA, we don’t know how much heritable information is outside the DNA.

  21. stcordova:
    The issue regarding the genetic erosion of humans vs. mice vs. baceteria is partly related to the functional genome size.

    What is the functional genome size of a mouse sal?

    Humans and mice have genomes of roughly the same size: around 2.8 Gb for mice vs. 3.2 Gb for humans. Mice, however, have a reproduction time 40x faster than humans. That means for 6,000 years of human genome “degradation” there have been the equivalent of 240,000 mouse-years’ worth.

    Why haven’t mice gone extinct Sal? Why hasn’t their genome degraded into mush by now?

  22. stcordova,

    The other issue with a multicellular organism is that because so many different cell types and tissues depend on the same DNA there must be more poly constraints or specificity on the DNA as a matter of principle

    Nah. Tissue-specific expression was likely pretty much coded in at the base of multicellularity. If an exon is only translated in one tissue, variation in it is only tested in that tissue. If it is a tissue-specific function, it’s not likely to be expressed anywhere else. If it’s not tissue-specific in the first place, there is naturally going to be more latitude for change anyway.

    Genomic evidence is that multicellular lineages regularly mutate, in coding and regulatory sequence – something must be happening that is capable of happening. If you want to prove ‘normal’ mutation incapable, you have to go beyond supposition.

    Hence bacteria can tolerate a lot of change without dying. For example, there is only a mere 20% evolutionary conservation in E. Coli,

    Comparing what to what?

    I seriously doubt the human species can tolerate 80% variation

    From what starting point? Obviously you can’t change 80% of a human genome and still expect it to be identifiably ‘human’. Maybe we have reached the end of the road. But there is absolutely no reason to suppose change accounting for 80% of the genome cannot accumulate, over – say – 2 billion prior years.

    , especially in light of the fact that 90% of the SNP associated with heritable diseases are in non-coding regions.

    Sure, regulatory change has a significant correlation with genetic disease. But there is a distinction between non-coding DNA and ‘junk’. Junk is actually defined (per Ohno) as ‘that which cannot suffer a deleterious mutation’. If the SNP is a deleterious mutation, it cannot be junk. But how many bases are we talking about, as a proportion of the whole?

    Even if every base of an intron were vital (it isn’t), 25% does not falsify ‘mostly’

    Sal: It doesn’t have to be vital to be functional.

    It has to be capable of suffering a deleterious mutation to have any relevance to your ‘degradation’ argument. Most intronic sequence, on conservation grounds, evolves neutrally. There are exceptions, but they don’t define the whole.

    Me: Even if every base of an intron were vital (it isn’t), 25% does not falsify ‘mostly’

    Sal: 25% is 12.5 times larger than the 2% Larry was arguing from. He cited a figure of 130 mutations per generation per individual; 25% of that is about 32 mutations per individual per generation — way more than the limit of 6 I suggested, the 2 Larry suggested, and the 0.5 Muller suggested.

    You have just ‘bagsied’ the entirety of intronic sequence for your case! Just because there is function in intronic sequence does not mean it is ALL capable of suffering a deleterious mutation. So the figure relevant to ‘meltdown’ is going to be substantially less than 25%.
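    The arithmetic behind the quoted figures is easy to check (taking, as the comment does, 130 new mutations per individual per generation and asking what fraction of them land in putatively selectable sequence):

```python
# 130 new mutations per individual per generation (Larry's figure),
# apportioned by the assumed selectable fraction of the genome.
total_mutations = 130

intronic_share = 0.25  # Sal's figure: all intronic sequence counted
coding_share = 0.02    # roughly the protein-coding fraction

print(intronic_share * total_mutations)  # 32.5 — the "32" in the text
print(coding_share * total_mutations)    # 2.6 — near Larry's "2"
print(intronic_share / coding_share)     # 12.5 — the quoted ratio
```

    The disagreement, then, is not about the multiplication but about which fraction of the genome can actually suffer a deleterious mutation.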

    And this is just DNA, we don’t know how much heritable information is outside the DNA.

    I think a reasonable approximation beyond about 2 generations is zero.

  23. It has to be capable of suffering a deleterious mutation to have any relevance to your ‘degradation’ argument.

    Actually no and I’ll explain why.

    The heterozygous form of sickle cell anemia is “beneficial” in the evolutionary viewpoint. Having extremely high intelligence seems correlated with low reproductive rates in modern society, and hence genius is “deleterious.”

    There is a philosophical issue regarding the concept of function and hence what constitutes genetic erosion.

    I’m not trying to insist who is right or wrong in the way one conceives function, but I am pointing out the notions of what constitutes genetic damage is likely different from a medical vs. evolutionary population genetics standpoint.

    And since the question of design and creation pertains to Rube Goldberg complexity rather than mere survival (as symbolized by the peacock’s tail that made Darwin sick), I hope it at least clarifies that IDists certainly would not like to be defining function in terms of reproductive success.

  24. stcordova

    The heterozygous form of sickle cell anemia is “beneficial” in the evolutionary viewpoint. Having extremely high intelligence seems correlated with low reproductive rates in modern society, and hence genius is “deleterious.”

    WTF??? That bit of stupidity needs to go straight to “Fundies say The Darndest Things”.

  25. stcordova,

    The heterozygous form of sickle cell anemia is “beneficial” in the evolutionary viewpoint.

    Not really. That is a case where an allele experiences variation in selection coefficient depending on its circumstances. It is strongly negative in genomes where it exists in double copy. It is still negative if you take a broader view of a series of instances including both heterozygotes and homozygotes, mainly due to the reduction of offspring by homozygote production. But the net negativity is diminished in malaria regions – there is a compensation. Therefore it remains in the population at an elevated rate there. But I don’t think this allele could count as adaptive (“beneficial”) in any broader sense.
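    The balancing effect described here can be made quantitative. Under heterozygote advantage with relative fitnesses AA: 1−s, AS: 1, SS: 1−t, the S allele settles at an equilibrium frequency q* = s/(s+t) — maintained, not favored outright. A minimal sketch with illustrative (assumed) coefficients:

```python
def overdominance_equilibrium(s, t):
    """Equilibrium frequency of the S allele under heterozygote
    advantage, with relative fitnesses AA: 1-s, AS: 1, SS: 1-t.
    The balance point is q* = s / (s + t)."""
    return s / (s + t)

# Illustrative (assumed) values: s ~ 0.1 for malaria mortality of AA
# homozygotes, t ~ 0.9 for near-lethality of SS homozygotes.
q = overdominance_equilibrium(0.1, 0.9)
print(round(q, 2))  # 0.1 — an elevated but bounded allele frequency
```

    Outside malaria regions s drops toward zero, the equilibrium collapses toward q* = 0, and the allele behaves as straightforwardly deleterious.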

    Having extremely high intelligence seem correlated with low reproductive rates in modern society, and hence genius is “deleterious.”

    You can’t just assume that low fecundity is deleterious. To a limit, many birds that produce smaller clutches leave more descendants. Evolution is a long game, and the best strategy is not necessarily the most obvious – nor universal.
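    The bird example is Lack’s classic clutch-size argument. A toy model of my own (assuming, purely for illustration, a linear decline in per-chick survival with clutch size) shows how total recruits can peak at an intermediate clutch, so lower fecundity per brood can be the fitter strategy:

```python
def surviving_offspring(clutch, survival_decay=0.1):
    """Toy model of Lack's clutch-size argument: per-chick survival
    falls linearly with clutch size, so total recruits peak at an
    intermediate clutch rather than the largest one."""
    per_chick_survival = max(0.0, 1.0 - survival_decay * clutch)
    return clutch * per_chick_survival

# The clutch size maximizing surviving offspring is intermediate.
best = max(range(1, 11), key=surviving_offspring)
print(best)  # 5, not 10
```

    The assumed decay rate is arbitrary; the point is only that maximizing offspring produced and maximizing offspring recruited are different optimization problems.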

    IDists certainly would not like to be defining function in terms of reproductive success.

    I guess not, but if you are pointing to ‘degradation’, how is it to be assessed? If it doesn’t affect reproductive success, it’s not going to make a species extinct. Part of the crossed lines is because you incorporate Sanford’s arguments, which are about reproductive success.

  26. Don’t know about high intelligence, but there are a lot of anecdotes about famous “geniuses” that left no descendants.

    Newton, Mozart, Beethoven.

    Einstein seems to have a usual and customary number of descendants.

    Off the top of my head, I can’t think of any genius dynasties. Traits seem to migrate toward the mean.

  27. Joe Felsenstein,

    Yes, I confused fixation time and fixation rate a bit. I was thinking about the problem of present ‘degradative’ alleles getting fixed in a future population. One assumes that the alleles we were Created with were spot on!

    Significant blockers to universal degradation at the present time are presumably a size-dependent narrowing of the effectively neutral zone and the lack of random mating, as well as the long time to fixation if we stay large.

  28. stcordova:
    Having extremely high intelligence seem correlated with low reproductive ratesin modern society,and hence genius is “deleterious.”

    Sal, that’s not how estimating the fitness of a quantitative trait with a strong environmental component works. It’s much more complicated than just measuring IQ, counting babies, and then looking at the correlation. See, for example, Rausher 1992 (http://sites.biology.duke.edu/rausher/evgen/reprints/SelectMeas.pdf) for a discussion of the challenges of estimating fitness effects of quantitative traits.
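    To give a flavor of the kind of machinery involved: standard selection-measurement methods work with *relative* fitness (fitness divided by its mean) and covariances, not raw trait/offspring correlations. A minimal sketch of a Lande-Arnold-style selection differential on toy data of my own invention — under stabilizing selection the differential comes out near zero even though fitness depends strongly on the trait:

```python
import random

def selection_differential(trait, fitness):
    """Selection differential: covariance between a trait and relative
    fitness (fitness / mean fitness). This is the within-generation
    change in the trait mean due to selection."""
    mean_w = sum(fitness) / len(fitness)
    rel_w = [w / mean_w for w in fitness]
    mean_z = sum(trait) / len(trait)
    n = len(trait)
    return sum((z - mean_z) * w for z, w in zip(trait, rel_w)) / n

# Toy data (assumed): fitness peaks at z = 0 (stabilizing selection).
rng = random.Random(0)
z = [rng.gauss(0, 1) for _ in range(5000)]
w = [max(0.0, 2 - zi * zi) for zi in z]
print(abs(selection_differential(z, w)) < 0.1)  # near-zero differential
```

    A naive “does the trait correlate with babies” analysis misses exactly this kind of structure, before one even gets to environmental confounds.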

  29. Sal,

    I have concluded if the Christian God is the Intelligent Designer then he also makes the world a miserable place by design, that He has cursed this world because of Adam’s sin.

    You believe “genetic entropy” is a result of Adam’s sin? If so, how specifically was genetic deterioration prevented from happening prior to the Fall?

  30. keiths: You believe “genetic entropy” is a result of Adam’s sin? If so, how specifically was genetic deterioration prevented from happening prior to the Fall?

    There was no second law either. No entropy of any kind before the Fall. Not even Shannon entropy.

  31. We indicate fitness of a genotype with an S-coefficient. A negative S coefficient for a genotype means it is less fit than some reference genotype.

    But we know, in small populations especially, that a slightly less fit genotype can be fixed into the population by sheer random luck despite selection. If that happens, a genotype that was less fit in prior generations becomes the new standard of excellence in future generations!

    Example: a functional gene is lost, and we don’t expect it to recover by back-mutation in the population. The broken form now becomes the standard of functional excellence, even though compared to its ancestors the present form is a functionally damaged mutant.

    The negative S-coefficient was measured in a competitive environment, but when the competition gets eliminated by luck, it doesn’t follow that absolute fitness necessarily declines once the struggle for resources is gone.

    Hence this is a scenario where the genome gets trashed from a functional standpoint, but the S-coefficients of the functionally compromised genotype become the new standard of excellence. Doesn’t anyone here appreciate that this is a problem for evolutionary theory? Fitness of the population can be seen to accidentally increase (since the S-coefficient of the bad trait gets re-normalized to 0), even when the functionality is damaged.

    If I weren’t a creationist, I’d invoke some naturalistic Black Swan. That’s what it looks like to me, any way. If others see it differently, I respect that, but I’ve stated why I don’t think evolution of large scale functional complexity in the past is by any mechanism we see in operation today.
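    The drift-to-fixation scenario in the comment above is easy to simulate. A minimal haploid Wright-Fisher sketch (my own, with illustrative parameters) showing that a mildly deleterious mutant can still reach fixation in a small population:

```python
import random

def fix_probability(N, s, allele_copies=1, trials=2000, rng=None):
    """Monte-Carlo Wright-Fisher sketch: fraction of runs in which a
    mutant with selection coefficient s (negative = deleterious)
    fixes in a haploid population of constant size N."""
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    fixed = 0
    for _ in range(trials):
        q = allele_copies / N
        while 0.0 < q < 1.0:
            # selection shifts the expected frequency; drift samples it
            q_sel = q * (1 + s) / (1 + s * q)
            copies = sum(rng.random() < q_sel for _ in range(N))
            q = copies / N
        fixed += (q == 1.0)
    return fixed / trials

# Illustrative run: N = 50, s = -0.01. The deleterious mutant still
# fixes at a rate not far below the neutral expectation of 1/N = 0.02.
print(fix_probability(N=50, s=-0.01))
```

    Whether such fixations constitute a net “trashing” of the genome depends on the flip side — the strongly elevated fixation odds of beneficial alleles — which is taken up below.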

  32. Before the fall I tossed a fair coin 1000 times and it came up heads every time. But after the fall, that all changed. I lost so much money until I read the Bible and figured out what happened. I have a lawsuit pending against this Adam guy.

  33. Sal Allan Cubist
    “Yes, it’s something of a crapshoot—but if you want to argue that this particular ‘crapshoot’ is one that absolutely cannot ever have a ‘winner’ (yield a new function), you’re just being silly.”

    I guess we don’t have universal agreement that the evolutionary events are black swans, but at least grey ones. Fifty shades of… I think no one would argue that a winner never happens. How do we better understand how many swans there are and their shades? Is OOL the darkest swan? What new identified causes could reduce the grey swan population?
    Allan mentioned exon shuffling. What about HGT or NGE?

  34. Hi Sal
    “Hence this is a scenario where the genome gets trashed from a functional standpoint, but the S-coefficients of the functionally compromised genotype become the new standard of excellence. Doesn’t anyone here appreciate that this is a problem for evolutionary theory? Fitness of the population can be seen to accidentally increase (since the S-coefficient of the bad trait gets re-normalized to 0), even when the functionality is damaged.”

    Is your point that the probability of this type of event is greater than that of a new gene taking the population in a positive direction?

  35. colewd:
    Cubist:

    Yes, it’s something of a crapshoot—but if you want to argue that this particular ‘crapshoot’ is one that absolutely cannot ever have a ‘winner’ (yield a new function), you’re just being silly.

    I guess we don’t have universal agreement that the evolutionary events are black swans, but at least grey ones.

    Well, Cordova’s “black swan” is not a well-defined class of event. Rather, Cordova’s “black swan” seems to be whatever Cordova is pointing at when he mutters the cabalistic phrase “black swan”. Note that Cordova does not provide anything in the general neighborhood of objective criteria by which his “black swans” can be distinguished from non-“black swans”. If you disagree with me—if you think Cordova has provided anything in the general neighborhood of objective criteria by which his “black swans” can be distinguished from non-“black swans”—please do point out where he has done so.

    Fifty shades of… I think no one would argue that a winner happens. How do we better understand how many swans and their shades?

    If actual understanding is what you seek, abandoning Cordova’s “swan” metaphor in its entirety would be a good first step. But if you are (for whatever reason) determined to cleave unto Cordova’s “swan” metaphor, I would recommend dispelling its vagueness, nailing down all the relevant details, in order that you can then identify “black swans” by a methodology more reliable than Cordova’s current yep, that sure looks like a black swan to me! methodology.

    Is OOL the darkest swan?

    Don’t know, and don’t care. If Cordova ever gets his hands dirty with the hard work of identifying an empirical distinction between his “black swans” and non-“black swans”, it will then and only then be time to worry about the metaphorical ‘darkness’ level of various metaphorical ‘swans’.

    What new identified causes could reduce the grey swan population?

    You’re getting ahead of yourself. First establish that there are, in fact, such things as “grey swans”; only after that’s done will your question make sense.

  36. stcordova,

    Hence this is a scenario where the genome gets trashed from a functional standpoint, but the S-coefficients of the functionally compromised genotype become the new standard of excellence. Doesn’t anyone here appreciate that this is a problem for evolutionary theory?

    Well, the architects of evolutionary theory – the people who did all the work you rely on – didn’t appreciate that it was a problem for evolutionary theory, so you’re unlikely to get much change out of their disciples here … ! 😉

    But let’s look at it. What you’re saying is that, because small populations can promote a fraction of deleterious alleles to fixation (while they remain small), this is a problem for evolutionary theory. This is an incomplete picture. For a start, you can’t assume that all populations ever were small (unless, ironically, they just came off the Ark).

    In a small population, there is indeed a greater proportion of fixation by drift, and if the population remains small, this can be a problem – although the worse problem for small populations is actually loss of genetic diversity. Small populations produce, and can sustain, fewer mutations.

    Once a deleterious allele has fixed, the population has moved down the adaptive hill a notch. But this means that the proportion of new alleles that is beneficial has, on the average, grown, if only by a fraction. There’s not just back-mutation; any allele that can respond to the selective pressure that we can assume still operates will be beneficial relative to the resident deleterious allele – it’s not really a ‘standard of excellence’. So a deleterious allele fixing is not the end of the story – its tenure is shaky. If a beneficial allele fixes, conversely, it’s much harder to shift.

    Kimura’s ‘equation 10’ I referenced a bit back shows the massive differential in effect for a beneficial vs a deleterious allele of any particular distance either side of neutral. The bias against deleterious alleles, compared to beneficial ones, is strong, and continuous.
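    That differential can be computed directly from Kimura’s (1962) diffusion result for fixation probability, u(p) = (1 − e^(−4Nsp)) / (1 − e^(−4Ns)) for a diploid population — assuming here, purely for illustration, that effective size equals census size N:

```python
import math

def kimura_fixation_prob(N, s, p=None):
    """Kimura's diffusion approximation for the fixation probability of
    an allele at frequency p with selection coefficient s in a diploid
    population of size N (taking Ne = N for simplicity):
    u(p) = (1 - exp(-4*N*s*p)) / (1 - exp(-4*N*s)).
    A new mutant starts at p = 1/(2N)."""
    if p is None:
        p = 1.0 / (2 * N)
    if s == 0:
        return p  # neutral limit
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

# Illustrative values: a new mutant with s = +0.001 vs s = -0.001.
N, s = 1000, 0.001
ratio = kimura_fixation_prob(N, s) / kimura_fixation_prob(N, -s)
print(round(ratio, 1))  # the beneficial allele is ~50x likelier to fix
```

    Even a selection coefficient of one part in a thousand, in a population of only a thousand, produces a fixation bias of over fifty-fold in favor of the beneficial variant.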

    And let’s not forget the effect of deleterious alleles on opening up regions of genetic space that would be inaccessible to a purely ‘upward’ system.

    Of course some populations will die out. But the idea that all populations are subject to a net degradation seems statistically unsupportable. For every small population gaining net detriment and dying out, there will be populations both large and small that persist. Evolution is a process of winnowing, at every level.

  37. Hi Cubist

    “If actual understanding is what you seek, abandoning Cordova’s “swan” metaphor in its entirety would be a good first step. But if you are (for whatever reason) determined to cleave unto Cordova’s “swan” metaphor, I would recommend dispelling its vagueness, nailing down all the relevant details, in order that you can then identify “black swans” by a methodology more reliable than Cordova’s current yep, that sure looks like a black swan to me! methodology.”

    I think your point here is right. I use the term grey swan to point out that many evolutionary events are statistically grey, meaning it is hard to nail down the mechanism that caused them by building a mathematical model that accounts for the change. Without a mechanism or cause you are left with observations, but you are still in search of a theory.

  38. The problem is that the concept is not useful.

    Rare events — say large asteroid hits — can have large consequences. That much seems undisputed.

    What Sal is suggesting is that events for which we simply don’t know the history are improbable.

    That is just silly.

    Take OOL. We simply don’t know the history and we don’t know the conditions.
    Asserting that it is a black swan is just hand waving.

    Same for endosymbiosis. Some well qualified people have suggested it only happened once, and that it is a highly improbable event. But no one knows. For all anyone knows it occurred under favorable conditions, and once it occurred, the resulting organisms quickly changed the competitive situation, making additional occurrences impossible. We don’t know.

    When I was younger, there was a brief mania for pet rocks. They were marketed and sold. Someone made money selling garden rocks.

    This is unlikely to be repeated. The first happening made future occurrences unlikely.

  39. Hi Cubist

    What new identified causes could reduce the grey swan population?

    You’re getting ahead of yourself. First establish that there are, in fact, such things as “grey swans”; only after that’s done will your question make sense.

    This is a very good point. What if we define a grey swan as an event whose cause we cannot model mathematically?

  40. colewd:
    What if we define a grey swan as an event whose cause we cannot model mathematically?

    It happens to be the nature of reality as we know it, with all its infinity of independent variables, that nearly every event is unique, and very few of them can be accurately predicted outside of controlled conditions, and then only narrowly. Downstream ramifications usually can’t be predicted even then. By this terminology, then, we live in the midst of an enormous flock of grey swans, in a world where swans of any other color are vanishingly rare.

    I think we CAN produce models that crank out unique unanticipated results indefinitely. The Santa Fe Institute specializes in this. For that matter, so do astrologers.

  41. Keiths:

    You believe “genetic entropy” is a result of Adam’s sin? If so, how specifically was genetic deterioration prevented from happening prior to the Fall?

    I can only offer non-scientific speculations based on theology for your question. My speculation is that since Jesus, being God, was able to heal a man blind from birth, God would continuously heal or prevent what could go wrong physically.

    Beyond this (and I’m addressing the rest of the commenters now, not just Keiths), there is a fact that is not appreciated, and it relates to the genetic deterioration of multicellular genomes versus small unicellular genomes.

    By accident, medical researchers created a new life form, so to speak: the immortalized human cell line of Henrietta Lacks.

    A HeLa cell /ˈhiːlɑː/, also Hela or hela cell, is a cell type in an immortal cell line used in scientific research. It is the oldest and most commonly used human cell line.[1] The line was derived from cervical cancer cells taken on February 8, 1951[2] from Henrietta Lacks, a patient who died of her cancer on October 4, 1951. The cell line was found to be remarkably durable and prolific — which has led to it contaminating many other cell lines used in research.[3][4]

    https://en.wikipedia.org/wiki/HeLa

    As far as we know, these cells are ageless and “immortal”. So we have some reason to think a eukaryotic cell has the capacity to operate indefinitely given nutrients, and that in aging organisms something is not working as well as it could. After all, when a middle-aged couple brings a new life into the world, it brings a youth and health that the middle-aged couple lack. So the technology to bring youth and self-healing is at least present in their bodies, but it is not getting recruited for their own bodies.

    Why is this so? It would stand to reason, given the enormous developmental expense of making an adult, that longevity and immortality would be favored by natural selection. The technology to live “immortally” is there in principle, but it is not being recruited for the parents. The technology is obviously there in the form of being able to procreate, but why not just apply the same technology to the whole adult organism? Why should multicellular forms not be immortal like bacteria if they have adequate nutrients and a nurturing environment?

    Now regarding HeLa, as I pointed out, a unicellular organism can tolerate a lot of damage, whereas the multicellular form cannot. This is a poignant example because Henrietta Lacks (the multicellular form) passed away, but her unicellular components, cancer cells at that, are able to live immortally.

    That’s why, as I said, I don’t think genetic variation affects unicellular bacteria as severely as multicellular creatures. The multicellular creature is much more fragile to slight variation in genetic change. The case of the multicellular Henrietta Lacks vs. her own cancerous HeLa cells, I think, illustrates the fragility of complex multicellular vs. simple unicellular forms. Hence the human genome is much more vulnerable to genetic deterioration.

    Finally, the components of the ANNIHILATOR model of present-day evolution are observationally and experimentally testable hypotheses. Some of the other creationist material I’ve promoted is not science (pure theology in many cases), but the ANNIHILATOR model qualifies as science.

    NOTES:

    https://en.wikipedia.org/wiki/Immortalised_cell_line

    An immortalised cell line is a population of cells from a multicellular organism which would normally not proliferate indefinitely but, due to mutation, have evaded normal cellular senescence and instead can keep undergoing division. The cells can therefore be grown for prolonged periods in vitro. The mutations required for immortality can occur naturally or be intentionally induced for experimental purposes. Immortal cell lines are a very important tool for research into the biochemistry and cell biology of multicellular organisms.[citation needed] Immortalised cell lines have also found uses in biotechnology.

    and

    https://en.wikipedia.org/wiki/Senescence#Cellular_senescence

    Cellular senescence is the phenomenon by which normal diploid cells cease to divide. In cell culture, fibroblasts can reach a maximum of 50 cell divisions before becoming senescent. This phenomenon is known as “replicative senescence”, or the Hayflick limit in honor of Dr. Leonard Hayflick, co-author with Paul Moorhead, of the first paper describing it in 1961.[6] Replicative senescence is the result of telomere shortening that ultimately triggers a DNA damage response. Cells can also be induced to senesce via DNA damage in response to elevated reactive oxygen species (ROS), activation of oncogenes and cell-cell fusion, independent of telomere length. As such, cellular senescence represents a change in “cell state” rather than a cell becoming “aged” as the name confusingly suggests. Although senescent cells can no longer replicate, they remain metabolically active and commonly adopt an immunogenic phenotype consisting of a pro-inflammatory secretome, the up-regulation of immune ligands, a pro-survival response, promiscuous gene expression (pGE) and stain positive for senescence-associated β-galactosidase activity.[7] The nucleus of senescent cells is characterized by senescence-associated heterochromatin foci (SAHF) and DNA segments with chromatin alterations reinforcing senescence (DNA-SCARS).[8] Senescent cells are known to play important physiological functions in tumour suppression, wound healing and possibly embryonic/placental development and paradoxically play a pathological role in age-related diseases.[9] The elimination of senescent cells using a transgenic mouse model led to greater resistance against aging-associated diseases,[10] suggesting that cellular senescence is a major driving force of ageing and its associated diseases.
