Sandbox (4)

Sometimes very active discussions about peripheral issues overwhelm a thread, so this is a permanent home for those conversations.

I’ve opened a new “Sandbox” thread as a post, since the new “ignore commenter” plug-in only works on threads started as posts.

5,830 thoughts on “Sandbox (4)”

  1. The pdf attachment did not seem to work.
    I too would love to read Carroll on this subject.

  2. DNA_Jock:
    The pdf attachment did not seem to work.

    Yes, and I am also having problems with the To field on messages. I suspect an issue involving one of the tracker/ad blockers I use, but rather than futz around with those, here is the text and a couple of links:

    Frigg has a couple of papers online that get more into the SM philosophical differences between Gibbs and Boltzmann approaches.

    http://www.romanfrigg.org/writings/What_is_SM.pdf

    http://www.romanfrigg.org/writings/EntropyGuide.pdf
    esp section 4

    Here is the text for SC

    0:23:34 SC: So, sorry, we were not clear about that if that’s the impression that came across. That’s not the right distinction. It’s not classical versus quantum. It’s basically coarse-grained versus knowledge, okay? Those are the two different kinds of entropy. By coarse-grained, what we mean is, we choose to ignore certain microscopic features of the system. So, if you say to yourself, “There’s cream in coffee, and rather than keeping track of every individual molecule or atom of cream or coffee, I keep track of the total number in some little box. A cubic millimeter within the cup of coffee. Every individual cubic millimeter has a certain number of cream molecules, a certain number of coffee molecules or whatever their constituents are, and that’s what I keep track of.” That is coarse-graining. That is keeping some information and throwing out other information. And in that case, a closed system, you can define the entropy. That’s what Boltzmann did, defining the entropy from that coarse-graining. And you can show that it goes up. If it starts low, it will generally go up. The cream will mix into the coffee. The other form of entropy is also classical and was invented…

    0:24:40 SC: I mean, I think Boltzmann talked about it, but it’s usually associated with Gibbs, Josiah Willard Gibbs, the American chemist, and it says, what I keep track of is whatever I know about the system, okay? I have some information that might be incomplete. I can quantify that information by saying “For every possible state that the system might be in, I assign a probability that it actually is in that state.” Okay? And in certain conditions, very, very specific conditions, well, let me finish that sentence before I should say this, you can calculate a separate formula for entropy from this what I know about the system definition. So there’s one formula for entropy engraved on Boltzmann’s tombstone that is associated with the coarse graining definition. There’s another formula for entropy associated with the knowledge definition, and Gibbs talked about that formula and then Shannon reinvented it in the context of information theory. So it’s actually often called the “Shannon entropy.” And in certain very specific conditions these entropies are the same as each other, namely if what your knowledge of the system is, is what macro state it’s in, the total number of molecules of cream and coffee within every cubic millimeter and no other knowledge.

    0:26:01 SC: So you know exactly that it is in that macro state and for every micro state within that macro state you give it equal probability, then the “Boltzmann formula” and the “Gibbs formula” give you the same answer for what the entropy is. The difference of course is that the Boltzmann entropy can go up in a closed system, the Gibbs or Shannon entropy does not, because your knowledge of a closed system doesn’t change, if you’re not looking at it, okay? Now, there’s still a version of the second law that is true even for the Gibbs or Shannon entropy. It’s just more relevant when you think of rather than a closed system, a system sitting in a heat bath, okay? Your cup of coffee is connected to the outside world, as in, there’s an atmosphere there and it’s interacting, so it’s not a closed system. And we can talk about the heat flow between your system and the rest of the world and of course, it can gain or lose heat and that can make this entropy go up or down. But there’s a formula that says “Once you take into account the heat flow, the rest of the entropy in this open system, the Gibbs or Shannon information entropy, goes up.”

    0:27:05 SC: High entropy in this knowledge sense is associated with knowing less about what state the system is in. So, of course for an open system that is not an open system being manipulated by some intelligent observer, but just an open system that is mindlessly interacting with its environment, we will know less and less about what state that system is in over time, because it’s bumping into the heat bath that it’s embedded inside. So that is a version of the second law. It’s a little bit different than the simple motto that entropy increases in closed systems, but it captures the spirit of the second law of thermodynamics.
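
    Here is a quick numerical sketch (mine, not Carroll’s or Frigg’s) of the point about the two formulas agreeing: the Gibbs/Shannon entropy −Σ p_i ln p_i reduces to the Boltzmann-style ln W when the probability is uniform over the W microstates of a macrostate, and drops below that as soon as the knowledge is sharper. The numbers and function name are made up for illustration.

    ```python
    import math

    # Boltzmann-style entropy: S_B = ln W (Boltzmann's constant set to 1),
    # where W is the number of microstates in the macrostate.
    # Gibbs/Shannon entropy: S_G = -sum_i p_i ln p_i over a probability distribution.

    def gibbs_entropy(probs):
        return -sum(p * math.log(p) for p in probs if p > 0)

    W = 1000                       # number of microstates in some macrostate
    uniform = [1.0 / W] * W        # "no other knowledge": equal probability for each

    print(gibbs_entropy(uniform))  # ~ ln(W): the two formulas agree
    print(math.log(W))

    # Sharper knowledge (a non-uniform distribution) gives a lower Gibbs entropy.
    peaked = [0.5] + [0.5 / (W - 1)] * (W - 1)
    print(gibbs_entropy(peaked))
    ```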

  3. stcordova: On multiverse explanations for origin of life, background and considerations from statistical mechanics and thermodynamics.

    I don’t understand what you are trying to do with this post, but if you are arguing that entropy must have been low at the start of our universe, then that idea is already part of SM where it is called the “Past Hypothesis”. Sean C discusses it in his Time book, esp ch 8 and 15.

  4. BruceS,

    Hi

    BruceS: I don’t understand what you are trying to do with this post, but if you are arguing that entropy must have been low at the start of our universe, then that idea is already part of SM where it is called the “Past Hypothesis”. Sean C discusses it in his Time book, esp ch 8 and 15.

    Hi, I actually wasn’t specifically talking about Sean Carroll, but rather the issue posed by Eugene Koonin:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1892545/

    propose a direct link between specific models of evolution of the physical and biological universes, with the latter being contingent on the validity of the former (MWO) as illustrated by simple calculations. Importantly, in this context, the validity of MWO is to be understood in a rather generic sense. For the present concept to hold, the only essential assumptions are that the universe is infinite [e.g., any (island) universe under MWO; the multiverse, per se, is not a must] and that the number of macroscopic histories in any finite region of spacetime is finite.

    A final comment on “irreducible complexity” and “intelligent design”. By showing that highly complex systems, actually, can emerge by chance and, moreover, are inevitable, if extremely rare, in the universe, the present model sidesteps the issue of irreducibility and leaves no room whatsoever for any form of intelligent design.

    The reviewer rightly pointed out:

    I am afraid his answer to this problem might open too broad an avenue to the supporters of intelligent design, as it is currently formulated, and thus does not satisfy me as such as an alternative to the theory of the RNA world.

    I’m presently working with university faculty in biochemistry who are ID friendly. They see exactly the problem Koonin sees. We have a violation of normative expectation on many levels, not just the RNA world but many facets of the chemical evolution of cellular life. If one doesn’t accept God, one must accept non-normative mechanisms, or alternatively find a way that our understanding of chemistry and physics is fundamentally wrong, such that mechanisms which we believe are normative actually aren’t.

  5. Here is the first iteration of the intro/abstract to my paper. I sent a copy to VJ Torley and requested his input. I’m hoping perhaps we could leverage TSZ to discuss this and make the paper better. I’m not intending it to be published in a formal philosophical journal, but I do want a quality paper that can be freely distributed.

    I’m soliciting VJ’s help because his English and communication skills are better than mine.

    Multiverses or Miracles of God?
    Circumventing metaphysical baggage when describing statistical or physical violations of normative expectation

    Intro/Abstract
    In an attempt to create a framework for clearly describing the improbability of phenomena that may or may not have metaphysical implications, it may be helpful to compartmentalize away the more metaphysical aspects from the actual math. Additionally, the probabilities (which are really statements of uncertainty) can be observer- or perspective-dependent.

    For example, in a lottery, raffle, or professional sporting league, there is a guaranteed winner. It is therefore normative that there is a winner in such cases from the perspective of the entire system or ensemble of possibilities; however, from the perspective of the individual participants (like an individual lottery ticket holder), it is not normative for any arbitrary individual to be a winner.

    With respect to the question of the origin of life and the fine tuning of the universe, one can postulate a situation where it is normative for life to emerge in at least one of the many universes of a multiverse when considering the ensemble of all universes. However, from the perspective of those in the universe we actually live in, the fine tuning and origin of life are not normative. Thus, when one asserts, “it is improbable for a cell to arise,” the statement is with respect to what is normative to human and experimental experience, not necessarily normative in the ultimate sense. One might colloquially say that abiogenesis and fine-tuning are miraculous from the human point of view, but whether they are miraculous in the theological or ultimate sense is a question that might be practically if not formally undecidable.

    The major point of this article is to help minimize or circumvent the metaphysical baggage of phrases like “natural”, “supernatural”, and “intelligently designed” when creating probability descriptions of phenomena such as fine tuning and the origin of life. One can say these phenomena are not the result of normative mechanisms in the sphere of human experimental or direct observational experience and still be well within the realm of empirical science. Whether fine tuning and the abiogenesis of life are normative in the ultimate sense, whether they are the result of God, multiverses, etc., is a separate question.
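
    As a toy illustration of the lottery example in the intro above (my own sketch, with a made-up ticket count; it is not part of the argument itself), the ensemble-level certainty and the individual-level improbability can be put side by side:

    ```python
    import random

    tickets = 1_000_000   # hypothetical lottery size
    my_ticket = 42        # hypothetical individual participant

    # Ensemble perspective: a winner exists by construction, with probability 1.
    winner = random.randrange(tickets)

    # Individual perspective: any arbitrary ticket holder almost surely loses.
    print("chance that someone wins:", 1.0)
    print("chance that my ticket wins:", 1 / tickets)
    print("did my ticket win this draw:", winner == my_ticket)
    ```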

  6. I really liked this from the first invited reviewer of Koonin’s work:

    There are a great many issues touched upon in this not so short review, and replying in full would, effectively, take another paper. Furthermore, to write such a paper properly, one might indeed need to be (or closely collaborate with) a professional philosopher.

    This does touch on issues in the philosophy of science, especially fine tuning and the origin of life.

  7. BruceS: Frigg has a couple of papers online that get more into the SM philosophical differences between Gibbs and Boltzmann approaches.

    Thanks for the links!!!!!

    Friggin awesome information in both essays. I downloaded them and am keeping them in my private electronic library. My statistical mechanics course was a semester-long course. It would take about that much time to do Frigg’s essays justice as far as appreciating the issues. But I’m keeping them stored as I can see they are really good essays.

    That said, this was the textbook I studied from, and the professor took us through not even the first half of the book. Our professor liked Enrico Fermi’s work on the topic.

  8. stcordova: Thanks for the links!!!!!

    Friggin awesome information in both essays. I downloaded them and am keeping them in my private electronic library.

    The Past Hypothesis is not an SC creation; he just gives a popularized version of it. The Frigg papers get into the math and reasoning. There is a lot of overlap in the two papers I linked; you probably only need to read the first one. Both are summaries of his much longer paper:
    https://arxiv.org/pdf/0804.0399.pdf

    I agree fine tuning is a real issue for cosmology, and the multiverse with varying physical parameters plus the anthropic principle is one solution. Others are brute fact, unknown physics which constrains possibilities, and a deistic God (fine tuning alone does not need an Abrahamic God).

    I don’t think we know enough to estimate any probabilities associated with origin of life

    As with QM, I only learn enough TD and SM to understand the associated philosophical work. I could not solve an actual problem such as those you get in a physics course.

    https://newbooksnetwork.com/geraint-f-lewis-and-luke-a-barnes-a-fortunate-universe-life-in-a-finely-tuned-cosmos-cambridge-up-2016/

  9. BruceS:

    I don’t think we know enough to estimate any probabilities associated with origin of life

    I think we know enough to give a good guess, and the guesses are in line with observation and experiment. I might not have been so forthright 15 years ago, but now that I’m actually working with biochemists and cellular biologists (some of whom were not ID friendly at first and then changed their minds), I feel more forthright about the issue. That is actually the point of the class my colleagues and I are trying to assemble for college-level consumption — “what is a good best guess given what we know and accept as normative chemistry?”

    I prefer this approach to the issue of ID, that is, “take the ID out of the ID argument” and focus really on the statistics, chemistry and physics. I say that since I don’t think there is a practical, if not formal, resolution to the question.

  10. stcordova: “take the ID out of the ID argument”

    What about Winston’s ‘dependency graph of life’? Seems like if life is intelligently designed there are a number of potentially useful implications. For example, if the genome is a digital code, then we can maybe use what we know about reverse engineering humanly designed codes to reverse engineer the genome, i.e. derive the different modules that are implicit in the DNA sequences.

    I’ve also noticed that bioinformatics has an enormous amount of Darwinian evolutionary assumptions baked into the algorithms. If instead ID is true, then that can lead to significant improvements in bioinformatic algorithms, such as the BLAST search algorithm.

    I’ve run a very simplistic bioinformatics analysis on some mammal DNA sequences, and it creates a pattern unlike what we’d expect if evolution were true. On the other hand, if life is designed by grabbing bits and pieces from various unrelated organisms, just like human designers invent things, then the pattern makes much more sense.

    https://biology.stackexchange.com/questions/85943/does-the-minimum-spanning-tree-tell-us-anything-useful-about-evolutionary-ancest
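
    For anyone curious what such an analysis can look like, here is a toy sketch (made-up sequences and a naive Hamming distance, not my actual data or method): compute pairwise distances and build a minimum spanning tree over them, then look at which sequences end up adjacent.

    ```python
    from itertools import combinations

    # Hypothetical, equal-length toy sequences so a plain Hamming distance works.
    seqs = {
        "A": "ACGTACGTAC",
        "B": "ACGTACGTTC",
        "C": "ACGAACGTAC",
        "D": "TCGAACGTAG",
    }

    def hamming(s, t):
        return sum(a != b for a, b in zip(s, t))

    dist = {frozenset(p): hamming(seqs[p[0]], seqs[p[1]]) for p in combinations(seqs, 2)}

    # Prim's algorithm: grow a minimum spanning tree over the complete distance graph.
    def minimum_spanning_tree(nodes, dist):
        nodes = list(nodes)
        in_tree, edges = {nodes[0]}, []
        while len(in_tree) < len(nodes):
            u, v = min(
                ((a, b) for a in in_tree for b in nodes if b not in in_tree),
                key=lambda e: dist[frozenset(e)],
            )
            in_tree.add(v)
            edges.append((u, v, dist[frozenset((u, v))]))
        return edges

    for u, v, d in minimum_spanning_tree(seqs, dist):
        print(f"{u} -- {v}  (distance {d})")
    ```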

    Also, merely saying ‘life is improbable’ doesn’t say anything about what produced life. It may well be a natural process, and we are just very lucky, i.e. the anthropic principle. And ‘life is improbable’ doesn’t lead to the above list of applications. In my mind, at least, ID seems to be a pretty scientifically tractable hypothesis.

  11. EricMH:

    I’ve run a very simplistic bioinformatics analysis on some mammal DNA sequences, and it creates a pattern unlike what we’d expect if evolution were true. On the other hand, if life is designed by grabbing bits and pieces from various unrelated organisms, just like human designers invent things, then the pattern makes much more sense.

    Sorry, phylogeneticists. You’ve been doing it wrong for years, and Eric has just overturned your field.

  12. keiths: Sorry, phylogeneticists. You’ve been doing it wrong for years, and Eric has just overturned your field.

    Indeed! But, they have an excuse, since they’ve been trying to be Darwinian fundamentalists and shoehorn the data into their creed.

    If ID is true, then it should be trivial to overturn the Darwinian orthodoxy.

  13. EricMH:
    If ID is true, then it should be trivial to overturn the Darwinian orthodoxy.

    1. ID is not true.

    2. “Darwinian orthodoxy” was overturned a long time ago. It wasn’t trivial though, and the overturning had nothing to do with ID, and everything to do with examining data leading to better understanding about how evolution works (there’s still a lot to learn though).

    3. If ID were true, at least in the way you’re thinking about it, rather than in a way where the designers were deceitful, then evolutionary thinking, which is not the same as “Darwinian orthodoxy,” should never have taken off.

    Entropy: If ID were true, at least in the way you’re thinking about it, rather than in a way where the designers were deceitful, then evolutionary thinking, which is not the same as “Darwinian orthodoxy,” should never have taken off.

    How many bioinformatics books have you read?

    I’m a 3rd of the way through: https://www.amazon.com/gp/product/B0144NZ2EC

    And it is full of things they can’t explain through random Darwinian mechanisms or even common descent, so it’s a constant slew of fudge factors and ‘paradoxes’ and other bandaids to make their common descent based algorithms work.

    Whereas if you adopt a design perspective it is trivial to account for what they are seeing, and you can probably develop much better algorithms.

    For example, BLAST would probably be designed to run much faster and be more accurate if it wasn’t so tightly coupled to the similarity = homology paradigm. We could use Winston’s modular idea and decompose genomes into modules, which are much smaller and don’t have to be precisely aligned to signify similarity.

  15. EricMH:
    If ID is true, then it should be trivial to overturn the Darwinian orthodoxy.

    But it isn’t trivial, so …

  16. EricMH:
    For example, BLAST would probably be designed to run much faster and be more accurate if it wasn’t so tightly coupled to the similarity = homology paradigm.

    BLAST is, quite literally, a basic local alignment search tool. It computes an algorithmic score of sequence similarity. Whether or not that ‘equals homology’ is not part of the program, and doesn’t slow it down any.
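
    To make “an algorithmic score of sequence similarity” concrete, here is a toy scorer for an ungapped comparison (illustrative only; the +2/−1 scheme is made up, and real BLAST alignments use substitution matrices and gap penalties):

    ```python
    # Toy similarity score for two equal-length (ungapped) sequences.
    # The +2/-1 scheme is made up for illustration; real tools use substitution
    # matrices (e.g. BLOSUM for proteins) and gap penalties.

    def ungapped_score(s, t, match=2, mismatch=-1):
        return sum(match if a == b else mismatch for a, b in zip(s, t))

    print(ungapped_score("ACGTACGT", "ACGTACGA"))  # 7 matches, 1 mismatch -> 13
    print(ungapped_score("ACGTACGT", "TGCATGCA"))  # 0 matches, 8 mismatches -> -8
    ```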

  17. EricMH:

    Seems like if life is intelligently designed there are a number of potentially useful implications.

    Hi, what I meant is the facts speak for themselves if one is willing to look at them impartially. James Tour showed the way. He didn’t have to argue ID to make the case for ID.

    Saying “this structure violates normative expectation” is well within science.

    For that matter, look at the unintended consequence of Koonin pointing out that multiple universes were needed to explain the origin of life. One of the invited reviewers wrote:

    REVIEWER:

    I am afraid his answer to this problem might open too broad an avenue to the supporters of intelligent design, as it is currently formulated,

    We can, of course, talk about God and scripture separately, but we can compartmentalize the feasibility of normative mechanisms to explain the origin of life without bringing in ID or God or creation. Such claims are submittable to peer-reviewed journals. That is indisputably science. I can say, for example:

    Accepted TEXTBOOK cell theory says, “cells come only from pre-existing cells.” This is supported mathematically for the following reasons….

    I intend that a college course on the topic will compartmentalize these ideas. Conflation allows critics of ID to add red herrings to basic discussions of ACCEPTED chemical behaviors relevant to abiogenesis and evolutionary theory.

    If people in a religion course want to talk about religion, that’s fine. When John Sanford spoke at the NIH, he demarcated pure science from his personal opinions about heaven being the true hope of the human race.

    ID is a derivative of Paley’s natural theology. I believe ID is true. I’m hearing reports there is a lot of pro-ID sentiment at the NIH and that’s one reason Dr. Sanford was welcomed to speak there in 2018.

    The facts speak for themselves. If people want to invoke multiple universes as an explanation for the origin of life, it’s at least an honest admission that life looks like the result of something miraculous.

    FWIW, my role in the college course is to gather materials and build courseware/software and find ways to get it emplaced in Christian colleges, seminaries, etc. It will have a religious component, but the religious components are compartmentalized from the scientific components.

  18. FWIW, I’m working with a cellular biologist, a biochemist, a biomedical engineer, and a few others who teach at the graduate level. They are far and away my seniors in the field of science; I’m merely a data clerk.
    They have a lot of information that needs to be translated and made accessible. This will be a long difficult project. The suggested paper (above) is a start.

    It’s really, really hard to start discussing the requisite basic science needed to understand the arguments if we throw in ID and creation from the get-go.

    It is perfectly fine, and desirable in a Christian course, to have a parallel discussion about the theological implications. But that is formally separate from the science.

    I myself was not interested that much in biology until I began to believe it was Intelligently Designed. I’ve noticed even people from various non-science disciplines began to be fascinated by biology once it was made apparent a miracle was needed to make life possible.

  19. Phylogenetics does not explain the origin of Orphan and Taxonomically Restricted Proteins from normative principles. It’s a bogus non-sequitur to represent phylogenetics as an explanation for the origin of certain novelties in terms of normative mechanisms.

    Those hidden Markov models fail to demonstrate universal phylogenies for proteins/genes with no ancestors. They sort of “poofed” onto the scene. Phylogeny is in the unenviable position of needing miracles to make universal common descent feasible.

  20. Allan Miller: But it isn’t trivial, so …

    It seems that when ID proponents try, it is pretty easy. However, the big guns seem to go after very difficult approaches, so it takes them a while to get results. I think the idea is to show that ID succeeds in really sophisticated settings. I don’t have that kind of expertise, so I just randomly go after the low-hanging fruit I happen across.

  21. Nothing in what you wrote addressed my points. You’ll be all right in the ID “movement.” You’re already very good at ignoring the points and raising a Gish gallop.

    EricMH:
    How many bioinformatics books have you read?

    I’d say quite a few.

    EricMH:
    I’m a 3rd of the way through: https://www.amazon.com/gp/product/B0144NZ2EC

    Oh. Impressive!!!!!

    EricMH:
    And it is full of things they can’t explain through random Darwinian mechanisms or even common descent, so it’s a constant slew of fudge factors and ‘paradoxes’ and other bandaids to make their common descent based algorithms work.

    I doubt you’re understanding what you’re reading. Maybe you’re missing a lot of context. Maybe both.

    For example, “Darwinian mechanisms” are not random. They have a random component, but they’re not pure randomness. So, if you’re expecting that evolution, specifically the Darwinian one, should equal pure randomness, then you’re not reading for comprehension.

    There’s no reason why common descent should explain everything. So, if you think that we believe that common descent explains everything, then you’re not reading for comprehension.

    EricMH:
    Whereas if you adopt a design perspective it is trivial to account for what they are seeing, and you can probably develop much better algorithms.

    It’s trivial to blame it all on magic. How did this happen? Magic. How about that? Magic. But that doesn’t actually explain anything. That doesn’t actually account for anything. Algorithms? Nobody needs algorithms to claim everything was done by magic, Eric.

    EricMH:
    For example, BLAST would probably be designed to run much faster and be more accurate if it wasn’t so tightly coupled to the similarity = homology paradigm.

    BLAST doesn’t run under any such paradigm. It just finds similarities and scores them. We use the scores to decide on homology, but BLAST doesn’t assume any such thing, which is why we have to correct for artifacts that produce high similarity for reasons other than homology. Again. You’re not reading for comprehension.

    EricMH:
    We could use Winston’s modular idea and decompose genomes into modules, which are much smaller and don’t have to be precisely aligned to signify similarity.

    This has nothing to do with what BLAST does. You cannot substitute a tool for driving screws with a tool for cleaning up carpets.

  22. Allan Miller: Whether or not that ‘equals homology’ is not part of the program, and doesn’t slow it down any.

    It is a big part of the algorithm. The heuristic used to score alignment was originally based on evolutionary assumptions. That ended up not working very well, so they came up with a more effective scoring matrix, and that works well but is no longer based on evolutionary assumptions but just empirical data, like scordova recommends.

    And, the fact an alignment search tool exists is partly due to the desire to find homologous DNA sequences, based on the assumption that similarity is homology.

    However, if we throw the similarity = homology assumption away then more effective search approaches open up.

  23. Entropy: For example, “Darwinian mechanisms” are not random. They have a random component, but they’re not pure randomness. So, if you’re expecting that evolution should equal pure randomness, then you’re not reading for comprehension.

    There’s no reason why common descent should explain everything. So, if you think that we believe that common descent explains everything, then you’re not reading for comprehension.

    This demonstrates my point. If you start with Darwinian evolution then all you have is random mutation and environmental selection. Selection can provide some sorts of order, but not all orderly patterns can be provided by selection; in fact very little order comes from selection, as you know if you’ve ever messed with evolutionary algorithms. Certainly not the amount of order we see in the genome.

    Additionally, the concept of common ancestry does enforce certain assumptions. For instance, you would expect a very consistent branching tree when you compare DNA sequences, not the mix and match that we actually see.

    Yes, part of the benefit of assuming ID is that we are not locked into one specific pattern and instead we can identify a very wide range of patterns, as scordova mentions. You may call this magic, but it is just how science works. The broader the range of possible hypotheses and patterns we can look for, the more effectively science can proceed.

    And finally, this broadness in pattern searching is the key innovation of Dembski’s explanatory filter. Naturalistic science is locked into a priori patterns due to Fisherian hypothesis testing, which requires hypotheses be stated before examining the data. So, bioinformatics is restricted by evolution theory to a priori assume maximum randomness and treelikeness. However, Dembski points out in “Specification: The Pattern That Signifies Intelligence” that Fisher’s formulation is an overly restrictive requirement. The reason why Fisherian hypothesis testing works is not because the hypothesis is stated prior to the experiments, but because the hypothesis is independent of the experiments. Prior statement (sort of) ensures independence, but it is not the only way. For example, algorithmic information theory provides other methods of ensuring independence by measuring compressibility.
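
    As a rough sketch of what “measuring compressibility” can mean in practice (my own toy illustration using an off-the-shelf compressor as a crude stand-in for algorithmic complexity; it is not Dembski’s actual measure):

    ```python
    import random
    import zlib

    def compression_ratio(data: bytes) -> float:
        """Crude proxy for compressibility: compressed size divided by raw size."""
        return len(zlib.compress(data, 9)) / len(data)

    random.seed(0)
    patterned = b"AB" * 500                                    # highly regular
    noisy = bytes(random.getrandbits(8) for _ in range(1000))  # pseudo-random bytes

    print("patterned:", compression_ratio(patterned))  # far below 1: very compressible
    print("noisy:    ", compression_ratio(noisy))      # about 1: essentially incompressible
    ```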

  24. EricMH:
    It is a big part of the algorithm.

    No it isn’t.

    EricMH:
    The heuristic used to score alignment was originally based on evolutionary assumptions.

    As I said, you’re not reading for comprehension. Heuristics play a role in how the best sequences to align are chosen. The scoring matrix is predetermined, and thus it doesn’t slow down the program at all.

    EricMH:
    That ended up not working very well, so they came up with a more effective scoring matrix, and that works well but is no longer based on evolutionary assumptions but just empirical data, like scordova recommends.

    Again, you’re not reading for comprehension. I don’t even know what you’re mistaking here. It’s too far from the way BLAST was developed.

    EricMH:
    And, the fact an alignment search tool exists is partly due to the desire to find homologous DNA sequences,

    And proteins and RNA.

    EricMH:
    based on the assumption that similarity is homology.

    Again you’re not reading for comprehension. It’s not based on the assumption that similarity is homology. The reason to develop alignment algorithms is that similarity beyond what would be expected from random sequences might indicate homology. See the huge difference there? And, again, the assumption doesn’t slow down the program one bit.
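
    One toy way to see “similarity beyond what would be expected from random sequences” is a shuffle test (a sketch only, with made-up sequences; this is not how BLAST actually computes its statistics): compare the identity of the real pair against identities obtained after shuffling one of them.

    ```python
    import random

    def identity(s, t):
        """Fraction of positions that match in two equal-length sequences."""
        return sum(a == b for a, b in zip(s, t)) / len(s)

    random.seed(0)
    seq1 = "ATGGCGTACGATCTGAAGCTGTGA"   # toy sequences, not real genes
    seq2 = "ATGGCTTACGATCTCAAGTTGTGA"

    observed = identity(seq1, seq2)

    # Null distribution: shuffle one sequence many times and rescore.
    shuffled = ["".join(random.sample(seq2, len(seq2))) for _ in range(1000)]
    as_high = sum(identity(seq1, s) >= observed for s in shuffled)

    print(f"observed identity: {observed:.2f}")
    print(f"shuffles scoring at least as high: {as_high} / 1000")
    ```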

    EricMH:
    However, if we throw the similarity = homology assumption away then more effective search approaches open up.

    The assumption doesn’t exist, and new programs have been developed that run in a fraction of the time BLAST uses. They still find similarities, and use the very same scoring matrices.

    You need to learn to read for comprehension.

  25. stcordova: If people in a religion course want to talk about religion, that’s fine.

    I think you misunderstand me, and this seems to be a common misconception about ID, both within and outside of the movement. ID is not about religion or god. It is the inference of an alternate causal mechanism called ‘intelligent agency’, whose pattern of behavior is extremely divergent from stochastic processes. As such, we can both identify the activity of ‘intelligent agency’ and extrapolate further implications if intelligent agency is involved vs if it is not.

    For example, at the scene of a death, if we infer intelligent agency was involved, then that broadens the range of items that we look for, such as DNA, murder weapons, motive, etc. If, on the other hand, we rule it purely an accident, then that ceases our search for any further explanatory elements.

    The same applies to empirical science. If we identify a pattern, and decide it is a stochastic accident, that stops our search for any further explanatory elements. However, if we identify the pattern as non accidental, then that implies a prior cause, and we broaden our search for the explanation.

    In the context of bioinformatics, some of the implications of ID are pretty straightforward. If the genome is a code, then there will be software design principles that come into play, like the modularization that Winston has discovered. Additionally, as I’ve mentioned, it broadens the kinds of patterns we can look for in the genetic data, so we don’t have to shoehorn the data into a tree structure, but can allow the graph structure to speak for itself.

    So, ID has nothing at all to do with religion, evangelization, or any such ideological thing. It is entirely to do with scientific advancement.

  26. EricMH,

    Don’t try and pretend to talk authoritatively about things you don’t understand. You don’t know who might be reading. You should rather be modest. Talk with caution. Try things like “as far as I’m understanding this …”

    That would help you quite a bit in life.

    ETA: some corrections of style

  27. Entropy: It’s too far from the way BLAST was developed.

    BLAST itself is just a heuristic alignment search algorithm, but the heuristic is based on a scoring matrix.

    First, there was a scoring matrix called PAM that was explicitly based on evolutionary theory. It did not do so well.

    Then, there was BLOSUM, which made evolution assumptions more ‘implicit’, so they relied more on the empirical data and inferred substitution frequencies from more conserved regions.

    https://en.wikipedia.org/wiki/Substitution_matrix#Differences_between_PAM_and_BLOSUM

    Finally, they found out that BLOSUM works even better if it is miscalculated.

    “Surprisingly, the miscalculated BLOSUM62 improves search performance.”
    https://en.wikipedia.org/wiki/BLOSUM#An_example_-_BLOSUM62
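
    For reference, PAM and BLOSUM entries are log-odds scores: roughly s(a,b) = round(log(p_ab / e_ab) / λ), the observed frequency of an aligned pair against what the background frequencies alone would predict. A toy calculation with made-up numbers (not the real BLOSUM62 counts):

    ```python
    import math

    # Toy log-odds score for one amino-acid pair, in the BLOSUM/PAM style.
    # Every number below is made up for illustration; real matrices are derived
    # from large sets of aligned sequence blocks.

    q_a, q_b = 0.10, 0.06      # hypothetical background frequencies of residues a and b
    p_ab = 0.03                # hypothetical observed frequency of an a/b aligned pair
    lam = math.log(2) / 2      # half-bit scaling, as used for BLOSUM62

    expected = 2 * q_a * q_b   # factor 2: the unordered pair (a, b) can occur either way
    score = round(math.log(p_ab / expected) / lam)
    print(score)               # positive: the pair is seen more often than chance predicts
    ```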

    So, my take away from these scoring matrices is that evolutionary theory is more a hindrance than a help when constructing algorithms.

    As for a list of the shoehorning I’m noticing in the bioinformatics book, here is what I remember off the top of my head after just a third of the way through:
    – C value paradox: length of the organism’s genome is unrelated to its apparent bodily complexity
    – the very large proportion of functional transcripts in the genome
    – the great difficulty in constructing coherent phylogenetic trees from the data
    – part of the previous point is due to the assumption that similarity = homology
    – the need to extrapolate a bunch of exotic mechanisms such as horizontal gene transfer and genetic drift to construct these trees
    – highly conserved regions tend to not be functional vs highly functional regions tend to not be conserved

  28. EricMH:

    So, my take away from these scoring matrices is that evolutionary theory is more a hindrance than a help when constructing algorithms.

    My sentiments exactly. I think Kirk Durston’s K-Modes will pave the way for better algorithms, and that is an ID-friendly conception of the meaning of the patterns of similarity and diversity.

  29. Shit Eric, really?

    EricMH:
    BLAST itself is just a heuristic alignment search algorithm, but the heuristic is based on a scoring matrix.

    Nope. The heuristic is based on the assumption that sequences with many similar words in “common” would reveal the sequences worth aligning.
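
    The word idea in a toy sketch (a naive illustration of the seeding step with made-up sequences, not the actual BLAST implementation): index the query by short words and count how many words each database sequence shares before doing any expensive alignment.

    ```python
    def words(seq, k=3):
        """All overlapping length-k words in a sequence."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    query = "ATGGCGTACGATCTG"            # toy query, not a real gene
    database = {
        "hitA": "CCATGGCGTACGTTT",       # shares a stretch with the query
        "hitB": "GGGTTTAAACCCGGG",       # little in common
    }

    q_words = words(query)
    for name, seq in database.items():
        print(name, "shared words:", len(q_words & words(seq)))
    # Sequences sharing many words are the ones worth passing to the much more
    # expensive alignment and scoring stage.
    ```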

    EricMH:
    First, there was a scoring matrix called PAM that was explicitly based on evolutionary theory. It did not do so well.

    There still are PAM matrices. They did do less well than the BLOSUM ones in terms of detecting things already known to be homologous from experimental, structural and other methods. But they did exactly as well in terms of how quickly the programs run. Try it yourself; you can still choose PAM matrices with BLAST.

    EricMH:
    Then, there was BLOSUM, which made evolution assumptions more ‘implicit’, so they relied more on the empirical data and inferred substitution frequencies from more conserved regions.

    BLOSUM used very well aligned blocks of sequence to decide what is plausible given some amount of divergence between sequences, Eric. Still a very evolutionary “assumption.” Again, remember your claim: it did not speed things up. It just worked a bit better than PAM in detecting known homologs.

    EricMH:
    Finally, they found out that BLOSUM works even better if it is miscalculated.

    What the authors say is that it is a bit miscalculated, but that such miscalculation worked in its favour. But that wasn’t an extraordinary improvement, Eric, it was a slight one. Also, this is not true for every alignment algorithm. Some algorithms work better with one kind of matrix, some with other kinds of matrices.

    Yet again, this “miscalculated” BLOSUM is better for what, Eric? For finding homologs! The improvement is not in speed.

    EricMH:
    So, my take away from these scoring matrices is that evolutionary theory is more a hindrance than a help when constructing algorithms.

    Your take away message should be that you’re not understanding very well what you’re reading. The evolutionary goals of these programs don’t make them slower at all. They don’t constitute a hindrance. Programs without evolutionary assumptions have to incorporate them later or else they end up accepting a lot of crap into protein families.

    EricMH:
    As for a list of the shoehorning I’m noticing in the bioinformatics book, here is what I remember off the top of my head after just a third of the way through:

    More examples that you cannot read for comprehension:

    EricMH:
    – C value paradox: length of the organism’s genome is unrelated to its apparent bodily complexity

    So?

    EricMH:
    – the very large proportion of functional transcripts in the genome

    This is unknown Eric.

    EricMH:
    – the great difficulty in constructing coherent phylogenetic trees from the data

    We expect such difficulty, Eric. Evolutionary history is bound to involve lots of selection sweeps, carry-overs from such, and random walks/genetic drift. We expect that it should be very hard because evolution is not intelligent, Eric. We expect messiness because natural phenomena are behind life’s diversity, Eric. An intelligent designer would leave much better evidence than a messy divergence pattern. Right?

    EricMH:
    – part of the previous point is due to the assumption that similarity = homology

    Oh my fucking god. Again, the assumption is that similarity beyond random expectations might reveal homology. How many times before this enters your head? No wonder you’re so confused.

    EricMH:
    – the need to extrapolate a bunch of exotic mechanisms such as horizontal gene transfer and genetic drift to construct these trees

    Exotic? They happen all the time. How’s that exotic?

    EricMH:
    – highly conserved regions tend to not be functional vs highly functional regions tend to not be conserved

    Were you drunk while reading?

    Again, I’d advise you to be a tad more modest before making these claims. I can assure you that you’re no expert and that you’re not doing yourself any favours by writing as if you knew what you’re talking about.

  30. EricMH,

    It’s clear that you’re not reading very carefully what I wrote. So I’ll leave you now. I insist, for your own good, that you should be much more modest about your understanding of bioinformatics, biology, and phylogenetics, but it’s up to you.

  31. I’ll await Eric’s new-improved BLAST with interest. I mean, who’s got 2 minutes to wait for a result these days? Low hanging fruit indeed.

  32. EricMH: Then, there was BLOSUM, which made evolution assumptions more ‘implicit’, so they relied more on the empirical data and inferred substitution frequencies from more conserved regions.

    So your approach to speeding up BLAST would be to discard ‘substitution’ matrices? 🤔 Here’s a question though – if it’s not really evolutionary substitution, what is it? Why would Design result in chemical biases – transition vs transversion, and chemically similar acids favoured? These are empirical observations, but they aren’t evolutionary observations. Why is the data like it is?

    – C value paradox: length of the organism’s genome is unrelated to its apparent bodily complexity

    Solved, if you buy the junk DNA argument. A rich avenue of research if you don’t.

    – the very large proportion of functional transcripts in the genome

    That’s not been demonstrated, however much IDIsts may slaver over an ENCODE press release.

    – the great difficulty in constructing coherent phylogenetic trees from the data

    Easy at some nodes, harder at others, but then it is a stochastic process, so you wouldn’t really expect perfect fit. Remarkable it can be done at all though isn’t it? There must be some level at which you accept there is a real phylogeny.

    – part of the previous point is due to the assumption that similarity = homology

    Not so. They need to be ever vigilant to the spectres of homoplasy, gene transfer etc.

    – the need to extrapolate a bunch of exotic mechanisms such as horizontal gene transfer and genetic drift to construct these trees

    Exotic? Anyhoo, how was gene transfer discovered? By phylogenetic methods. A useful contribution, valuable in epidemiology for example.

    – highly conserved regions tend to not be functional vs highly functional regions tend to not be conserved

    Really?

  33. Allan Miller:
    I’ll await Eric’s new-improved BLAST with interest. I mean, who’s got 2 minutes to wait for a result these days? Low hanging fruit indeed.

    There are several tools optimized for speed that often run in a fraction of the time BLAST takes. They miss some hits normally found by BLAST, but they allow for quick analyses and are constantly improving. We don’t need to wait for someone as incompetent as Eric to produce anything. We have some good programmers who understand what they’re doing instead.

  34. https://www.livescience.com/bernie-sanders-would-reveal-alien-information-if-elected.html

    Bernie Sanders Pledges to Release Any Info About Aliens If He’s Elected in 2020

    Will space aliens become an election issue in 2020?

    Presidential candidate Bernie Sanders (I-VT) says he’s prepared to disclose any government information about unidentified flying objects (UFOs) — but only if he wins, and mainly because his wife, Jane, asked him to.

    “Well I tell you, my wife would demand I let you know,” Sanders told podcast host Joe Rogan on Tuesday (Aug. 8), according to Fox News, even promising he would announce the findings on the podcast. (You can see the full podcast here.)

    Rogan asked if Jane was a “UFO nut”, which Sanders denied. Jane, however, has been pressing the candidate about what information he might have right now, as a senator. “She goes, Bernie, ‘What is going on [that] you know? Do you have any access?'” Sanders said.

    Related: UFO Watch: 8 Times the Government Looked for Flying Saucers

  35. The worst week for revenue in Las Vegas history was when 4,000 physicists descended on Las Vegas:

    http://physicsbuzz.physicscentral.com/2015/09/one-winning-move.html

    How 4,000 Physicists Gave a Vegas Casino its Worst Week Ever

    What happens when several thousand distinguished physicists, researchers, and students descend on the nation’s gambling capital for a conference? The answer is “a bad week for the casino”—but you’d never guess why.

    The year was 1986, and the American Physical Society’s annual April meeting was slated to be held in San Diego. But when scheduling conflicts caused the hotel arrangements to fall through just a few months before, the conference’s organizers were left scrambling to find an alternative destination that could accommodate the crowd—and ended up settling on Las Vegas’s MGM Grand.
    ….
    It was an unmitigated disaster for the Grand. Financially, it was the worst week they’d ever had. After the conference was over, APS was politely asked never to return—not just by the MGM Grand, but by the entire city of Las Vegas.

    HT Dr. Joshua Swamidass

  36. https://onenewsnow.com/politics-govt/2019/08/07/rise-fall-of-harris-down-to-1-of-black-vote

    Rise & fall of Harris – down to 1% of black vote

    Sen. Kamala Harris (D-Calif.) – …. has fallen from her self-proclaimed “top-tier” candidate status to the bottom of the 20-Democratic presidential primary candidate bucket, dropping to just 1 percent of support from black Democrats.
    ….

    “Compare Harris’s recent polling numbers to Quinnipiac’s same poll from a month ago, where she stood at 27 percent among black Democrats – putting her in second place to Biden with black primary voters,” Breitbart News recalled.

    If true, this describes the knockout blow Tulsi Gabbard delivered to Harris in the 2nd Democratic Presidential debate:

    https://youtu.be/bm5_LUneIbc

  37. stcordova:
    https://onenewsnow.com/politics-govt/2019/08/07/rise-fall-of-harris-down-to-1-of-black-vote

    If true, this describes the knockout blow Tulsi Gabbard delivered to Harris in the 2nd Democratic Presidential debate:

    https://youtu.be/bm5_LUneIbc

    The question was who you will vote for; for most Democratic voters the question comes down to who you think will have the best chance to defeat Trump.

    On the question of who you think would be the best leader, Harris is the choice of 7 percent of white voters and 6 percent of black voters. She is polling at 7 percent overall.

    As for the lead in the article that black voters do not support her because she is not really black, Booker polled at 0 on the first question.

  38. The front runners are Joe “Poor kids are as bright and talented as white kids” Biden, Sanders, Warren and Harris. All of them have multiple albatrosses.

    The polls are more worthless than usual.

  39. petrushka:
    The front runners are Joe “Poor kids are as bright and talented as white kids” Biden, Sanders, Warren and Harris. All of them have multiple albatrosses.

    The polls are more worthless than usual.

    Worthless because the polls do not accurately capture the present voter preferences?

  40. Worthless for predicting either primaries or elections.

    I will grant that late polls are generally within their margin of error, and are thus scientific, but this isn’t good enough to predict winners and losers.

    In retrospect, there are always polls that were correct.

    I have no idea whether Trump will be reelected. The rule of thumb is presidents with a good economy get reelected, but the market could crash.

    I think the news services have pretty well been neutralized. Russiagate polarized the voters. I watch both sides, and both have encased themselves in teflon. The news isn’t going to budge anyone.

    My gut feeling is the Democrats have succeeded in rallying their troops, but do not yet have any unifying issues. And the ones they have tried to push do not have majority support. Note that “the Squad” has been silenced.

  41. dazz: Thanks Bruce. Is this the book you’re referring to?

    What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics

    Sean C interviews Becker on this book in the latest Mindscape (transcript available):
    https://www.preposterousuniverse.com/podcast/2019/08/12/59-adam-becker-on-the-curious-history-of-quantum-mechanics/

    Ismael and List (again) on free will:
    https://www.the-tls.co.uk/articles/public/free-will-problem/

    http://philosophyofbrains.com/category/books/christian-list-why-free-will-is-real

  42. dazz: BTW, here’s a youtube video t

    Thx — that video seems fine. The book spends more time with that scenario, but with photons, and also gives the counting argument which shows how Bell’s proof works. No math though, not even cosines!

    The interview gives a good summary of the book’s scope and contents.
