Philosophy and Complexity of Rube Goldberg Machines

Michael Behe is best known for coining the phrase Irreducible Complexity, but I think his likening of biological systems to Rube Goldberg machines is a better way to frame the problem of evolving the black boxes and the other extravagances of the biological world.

But even before going to the question of ID in biology, I’d like to explore philosophical and complexity questions about man-made Rube Goldberg machines. I will, however, occasionally attempt to show the relevance of the questions I raise about man-made Rube Goldberg machines to God-made Rube Goldberg machines in biology.

Analysis of man-made Rube Goldberg machines raises philosophical questions as to what may or may not constitute good or bad design and also how we make statements about the complexity of man-made systems. Consider this Rube Goldberg machine, one of my favourites:

Is the above Rube Goldberg machine a good or bad man-made design? How do we judge what is good? Does our philosophical valuation of the goodness or badness of a Rube Goldberg machine have much to say about the exceptional physical properties of the system relative to a random pile of parts?

Does it make sense to value the goodness or badness of the Rube Goldberg machine’s structure based on the “needs” and survivability of the Rube Goldberg machine? Does the question even make sense?

If living systems are God-made Rube Goldberg machines, then it would seem to be an inappropriate argument against the design of the system to say “its poor design for survivability, its fragility and almost self-destructive properties imply there is no designer of the system.”

The believability of biological design is subjective to some extent, inasmuch as some would insist that in order to believe design, they must see the designer in action. I respect that, but for some of us, when a system is far from physical expectation, design is quite believable.

But since we cannot agree on the question of ID in biology, can we find any agreement about the level of specificity and complexity in man-made Rube Goldberg machines? I would hope so.

What can be said in certain circumstances, in terms of physics and mathematics, about man-made systems is that certain systems are far from what would be expected of ordinary non-specific processes like the random placement of parts. That is, the placement of parts to effect a given activity or structure is highly specific in such circumstances — it evidences high specificity.

My two favourite illustrations of high specificity situations are:

1. a domino standing on its edge on a table.

2. cards connected together to form a house of cards. The orientation and positioning of the cards is highly specific.

We have high specificity in certain engineering realms where the required tolerances of the parts are extremely narrow. In biology, there are high specificity parts (i.e. you can’t use a hemoglobin protein when an insulin protein is required to effect a chemical transaction). I think the specificity of individual interacting parts can occasionally be estimated, but one has to be blessed enough to be dealing with a system that is tractable.

In addition to specificity of parts we have the issue of the complexity of the system made of such high specificity parts. I don’t think there is any general procedure, and in many cases it may not be possible to make a credible estimate of complexity.

A very superficial first-pass estimate of complexity would be simply tallying the number of parts that have the possibility of being or not being in the system. This is akin to the way the complexity of some software systems is estimated by counting the number of conditional decisions (if statements, while statements, for statements, case statements, etc.).
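
To make this concrete, here is a minimal sketch in Python (my own illustration, not an established metric; the choice of which syntax nodes count as decisions is an assumption):

    import ast
    import textwrap

    def count_decisions(source: str) -> int:
        # Crude first-pass complexity tally: count branching constructs.
        decision_types = (ast.If, ast.While, ast.For, ast.IfExp, ast.BoolOp)
        return sum(isinstance(node, decision_types)
                   for node in ast.walk(ast.parse(source)))

    example = textwrap.dedent('''
        def classify(x):
            if x < 0:
                return "negative"
            while x > 10:
                x -= 2
            for i in range(3):
                x += i
            return x
    ''')
    print(count_decisions(example))  # 3: one if, one while, one for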

In light of these considerations we might possibly then make statements about our estimate for how exceptional a system is in purely mathematical and physical and/or chemical terms — that is providing the system is tractable.

I must add, if one is able to make credible estimates of specificity and complexity, why would one need to do CSI calculations at all? CSI doesn’t deal with the most important issues anyway! CSI just makes an incomprehensible mess of trying to analyze the system. CSI is superfluous, unnecessary, and confusing. This confusion has led some to relate the CSI of a cake to the CSI of a recipe, like Joe G over at “Intelligent Reasoning”.

Finally, I’m not asserting there are necessarily right or wrong answers to the questions I raised. The questions I raise are intended to highlight something of the subjectivity of how we value good or bad in design as well as how we estimate specificity and complexity.

If people come to the table with differing measures of what constitutes good, bad, specified, complex and improbable, they will not agree about man-made designs, much less about God-made designs.

I’ve agreed with many of the TSZ regulars about dumping the idea of CSI. My position has ruffled many of my ID associates since I so enthusiastically agreed with Lizzie, Patrick (Mathgrrl), and probably others here. My negative view of CSI (among my other heresies) probably contributed to my expulsion from Arrington’s echo chamber.

On the other hand, with purely man-made designs, particularly Rube Goldberg machines, I think there is a legitimate place for questions about the specificity of system parts and the overall complexity of engineered systems. Whether such metrics are applicable to God-made designs in biology is a separate question.

230 thoughts on “Philosophy and Complexity of Rube Goldberg Machines”

  1. ERRATA:

    It is figure d above that involves histones. Apparently a, b, c are direct interactions with mRNA!

  2. stcordova: Sorry about that. Here ya go, I made the tinyurl link just for you.

    Page 7, figure 1.7:

    http://tinyurl.com/j4rka6v

    I’m happy to find out other people think biology is like a Rube Goldberg machine like I do, even a Nobel Prize winner. Great minds think alike.

    So do dullards.

    More importantly, I wish you did think like him, and not in your hackneyed, nuance-free, and literalistic incomprehension.

    Glen Davidson

  3. There is something that I should point out with HOTAIR. Notice that the HOTAIR ncRNA is expressed from chromosome 12.

    That means the histone memory machinery on chromosome 12 somehow knew to help express and regulate the HOTAIR linc/lncRNA for another chromosome. That is, writers/erasers/readers went to chromosome 12 to get the histones regulating HOTAIR to do their thing.

    Then the HOTAIR ncRNA gets expressed, then somehow magically navigates to another chromosome, finds the region it needs to get to in that soup of molecules, attaches to the Polycomb protein complex that then reads the histones on that other chromosome, and then regulates the gene.

    Now that’s Rube Goldbergesque design! Amazing!

  4. stcordova: So much for your nay-saying about histone epigenetic memory in non-coding regions.

    I still haven’t at any point denied that histones are involved in regulation. What I have questioned is whether their mere presence and activity implies the entire genome is functional.

    Is there no end to these misrepresentations?

    Just like with pervasive transcription, much of this histone-related methylation activity might just be noise.

    Much =/= all

  5. John Harshman: When you don’t use scare quotes, there’s always a chance that some creationist will fail to understand that you’re being metaphorical. That’s the problem.

    ok, but who wants to always have to write cell “wall” when cell wall works just as well?

    Imagine having to write abstracts like this all the time:

    The mitotic “spindle” is the macromolecular “machine” that segregates chromosomes to two “daughter” cells during mitosis.

    http://www.ncbi.nlm.nih.gov/pubmed/18275887

  6. Mung: ok, but who wants to always have to write cell “wall” when cell wall works just as well?

    But that’s the point.

    When we use metaphorical language, we assume and expect our readers or interlocutors to accept the conventions of discourse, and not go apologetic.

  7. Well, I got a bonus beyond the scope of the OP: I showed that biology can be reasonably likened to a Rube Goldberg machine because it has Rube Goldbergesque designs.

    I also showed some Rube Goldbergesque designs, like the histones that regulate expression of genes on other chromosomes through the regulation of non-coding linc/lncRNAs like HOTAIR.

    So now moving on a bit I’d like to analyze some man-made Rube Goldberg machines.

    As I said, there is no general procedure, and most systems might not even be amenable to analysis. We have to go case by case, stating typical vs. exceptional states.

    The domino is a nice example. We can describe the domino's state in terms of position and orientation coordinates like

    X, Y, Z, pitch, yaw, roll

    and velocity of translation and rotation like

    X’, Y’, Z’, pitch’, yaw’, roll’

    One can, at least in principle, make statements of what range of acceptable values will allow the domino to be in one of two states:

    1. standing up on edge
    2. lying down flat

    The standing up state is highly specific, and we can say it is an exceptional state because the coordinates necessary to effect that state occupy only a small subspace of the range of plausible coordinates. If we wish, we can state the specificity as an improbability, as in the sketch below. That statement is a reasonable empirical and theoretical fact. No need to get into obscure discussions about specification as laid out by Bill Dembski in CSI V2; this analysis will suffice to establish specificity.
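
    Here is a minimal sketch in Python of stating that specificity as an improbability (a toy model of my own, not a rigid-body analysis; the 1-degree tolerance and 90-degree range are assumed values, not measurements):

        import math
        import random

        random.seed(0)
        TOL = 1.0     # assumed: degrees of tilt from vertical that still count as "standing"
        RANGE = 90.0  # assumed: pitch and roll each range uniformly over +/- 90 degrees

        N = 1_000_000
        standing = 0
        for _ in range(N):
            pitch = random.uniform(-RANGE, RANGE)
            roll = random.uniform(-RANGE, RANGE)
            if abs(pitch) <= TOL and abs(roll) <= TOL:
                standing += 1

        p = (2 * TOL / (2 * RANGE)) ** 2                  # exact fraction of the toy state space
        print(f"sampled fraction ~ {standing / N:.1e}")   # ~1.2e-04
        print(f"exact fraction   = {p:.1e}")              # 1.2e-04
        print(f"specificity ~ {-math.log2(p):.1f} bits")  # ~13.0 bits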

    Whether it requires intentionality to achieve that specificity is perhaps a matter of some faith, but a high specificity system does seem to inspire thoughts that the specific configuration was intentional, and that this perception is not an after-the-fact perception.

    We might do a comparable analysis for 2 cards leaning on each other in a house of cards. The specificity estimate should be generally unassailable.

    If we wish, we can add to the specificity estimate some estimate of the probability of a mechanism existing to achieve that specificity. Is it probable that earthquakes, winds, falling objects, or explosions would achieve that specificity even if such mechanisms were available? After all, there has to be a presumed source of energy to assemble the configuration, assuming the configuration for dominos and cards was originally one of lying down.

    If we went on to analyze a full house of cards or a Rube Goldberg machine of dominos, we can state the collective specificity necessary to achieve the system. When we have so many parts connecting we might say the system is complex. That complexity might be expressed as the joint improbability or specificity of all the parts, as sketched below. The point is, there are situations where we can reasonably assert a system is highly specified in its parts and complex as a whole, and provide some estimate of how exceptional the whole system is.
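
    Under the same toy assumptions (and naively treating the parts as independent), the collective specificity is just the per-part improbabilities multiplied, i.e. the bits summed. A minimal sketch:

        import math

        p_part = (1.0 / 90) ** 2   # per-domino "standing" probability from the sketch above
        n_parts = 50               # assumed number of parts in the machine

        bits_total = n_parts * -math.log2(p_part)
        print(f"collective specificity ~ {bits_total:.0f} bits")               # ~649 bits
        print(f"joint improbability ~ 10^{n_parts * math.log10(p_part):.0f}")  # ~10^-195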

    We can then extend the idea of specificity and complexity to molecular systems provided the analysis is tractable.

    So we have a soup of amino acids. Surely we can make a guess at the likelihood of spontaneous polymerization. Maybe the figure is close to zero.

    Obviously they can be polymerized, but that doesn’t make the configuration highly probable. We can make statements about their synthesis. Same for RNAs and DNAs. These numbers might then inform us of how highly specified a given complex Rube Goldberg machine made of amino acids, RNAs, and DNAs might be.

    Whether it is evolvable is a separate but related question, but we might be able to state how improbable a system is based on the specificity of how the parts must be arranged in order to interact.

    We can do this specificity analysis for dominos, and if the situation is tractable, it seems to me we can extend this to the molecular level. If a system has an insulin protein, how specific must the tyrosine kinase receptor be that recognizes it? That is not an after-the-fact probability estimate of evolvability; it is merely a statement of the specificity of the two parts — like the specificity of a lock and key. That level of specificity may help inform us about evolvability, but it is merely an objective fact, provided we can make the estimate.

    So it seems reasonable to me that we can assert the specificity and complexity of systems and provide some sort of estimate, as a matter of principle, of how exceptional such a system is.

  8. To get an idea of how the specificity of two parts is sufficient to give a reasonable estimate that those two parts, as a system, are collectively improbable, consider two decks of cards.

    What is the probability that the two decks of cards are in identical order from a random process? One in 52 factorial (52! being, of course, the number of possible orderings of a single deck of cards). But the point is, we don’t have to be accused of after-the-fact probability calculations if we find two decks of cards in identical order. This fact was used by the FBI in a case I mentioned here:

    Coordinated Complexity — the key to refuting postdiction and single target objections

    So if we have a protein-to-protein interaction that requires a high level of specificity by both parts, it seems reasonable to me we can do a probability calculation of the specificity of such a match. The improbability of achieving such a specificity should be able to stand on its own.

    That probability should not have to factor in the possibility that there are an infinite number of ways to construct a protein-to-protein interaction. We should not have to factor in those other possibilities any more than we have to account for the buzzillion viable login/password combinations for a system in order to assert that any given login/password pair is improbable to find.

    So the complaint about single-target probabilities can be successfully contested by stating the problem in terms of specificities of a given order.

    A system of two decks of cards, with each deck having the same order, has a specificity of 52!. We might hypothetically find a protein–protein interaction that has a comparable specificity. Just because we can find an infinite number of phenomena with that order of specificity doesn’t make systems with that level of specificity more probable!
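
    For the record, the deck figures are easy to check (a minimal sketch; the chance of a second deck matching the first by luck is 1 in 52!):

        import math

        orderings = math.factorial(52)
        print(f"{orderings:.3e}")                  # ~8.066e+67 possible orders of one deck
        print(f"{math.log2(orderings):.1f} bits")  # ~225.6 bits of specificity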

    The point being: we can, if the system is amenable to analysis, make statements about the specificity of certain interacting parts in man-made Rube Goldberg machines, be they macroscopic objects like dominos or molecular objects.

    It would seem, in light of this, we should also be able to make estimates of specificity of God-made objects. Whether one really believes they are God-made or not, it would seem they do have an empirical property of specificity that can in some cases be estimated.

  9. I’m not saying that specificity necessarily implies unevolvability.

    Clearly the human combinatorial immune system develops antigen specificity on the fly by evolving the DNA in its immune cells during B-cell maturation, via V(D)J recombination, somatic hypermutation, and a mechanism of internal natural selection among somatic cells.

    The question of whether a given specificity is evolvable is a separate question.

  10. stcordova: So if we have a protein-to-protein interaction that requires a high level of specificity by both parts, it seems reasonable to me we can do a probability calculation of the specificity of such a match. The improbability of achieving such a specificity should be able to stand on its own.

    Wouldn’t any probability calculations have to take into account factors not present in stacks of cards? Chemical bonds etc?

  11. Mung,

    And science. And biology.

    No, just God. No need to believe in science and biology; they demonstrably exist.

  12. stcordova,

    the HOTAIR ncRNA gets expressed, then somehow magically navigates to another chromosome, finds the region it needs to get to in that soup of molecules, attaches to the Polycomb protein complex

    So magic controls binding energy? It’s a matter of amazement that sequence on one chromosome influences sequence on another? Jeez. I’d have been gobsmacked if it didn’t.

    PCR primers. RNA probes. Promoters, repressors, cofactors. How in heck do they work? Binding elves?

  13. stcordova,

    But, I’m not the one asserting a number for non-functionality like Rumraket and others are. They claim to KNOW in the absence of direct experiment.

    I’m content to say, “I don’t know, I have a hunch. Let’s see what future experiments uncover.” I guess 50%.

    50% is still a lot of junk. I guess 90%. I don’t claim to know, and neither does anyone else in this debate. The assessments are based on knowledge of what the elements are, the ability to see that one functional instance does not make the entire class functional, and an understanding of the genetic load argument and sequence conservation patterns.

    But I ask again: why does it matter? For what purpose would 50% be better than 90%?

  14. petrushka:
    When you’re backed into a corner, redefine it as the winner’s circle.

    Why not?

    The Japanese celebrated their great victory at Midway, and the many other fantastic wins that left their ships and planes resting peacefully under the Pacific Ocean.

    Once science becomes redefined as word-gaming, IDist efforts will at last be rewarded as (they think) they should be.

    Glen Davidson

  15. Allan Miller,

    Do you have any thoughts on how/why these mechanisms are transcribed from different chromosomes and still work together for critical function?

  16. Allan:

    But I ask again: why does it matter? For what purpose would 50% be better than 90%?

    It is an academic question; its relevance to ID and creation may or may not be peripheral.

    I don’t agree that ID predicted high function. I don’t promote ID as science, I don’t think ID makes a lot of predictions, and ID has nothing to say about the genome being functional or not functional, despite the official line from the Discovery Institute, etc.

    As a creationist who has argued genetic entropy, I think there has to be junk. The fact that we have people with photographic memory tells me that maybe at one time most humans had such powerful memories and now we’ve lost that ability in general.

    But my personal issue on this academic (not really theological) question is this. If we take the 2% figure of functionality, that is:

    2% x 3.3 gigabases x 2 bits/base = 132,000,000 bits = 16,500,000 bytes

    That does seem enough info to make a human.

    If we take the 10% figure of functionality:
    10% x 3.3 gigabases x 2 bits/base = 660,000,000 bits = 82,500,000 bytes

    That seems like a little too little to code something as complex as a human brain, much less an entire human.

    Even 6.6 gigabits for the entire genome, which is 825,000,000 bytes, about a CD or DVD worth of data — that seems a bit too little unless there is some incredibly well-designed compression algorithm that can manage the 100 trillion cells in an adult. So I expect a high degree of specificity and function and integration.
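
    A minimal sketch of the arithmetic behind those figures (2 bits per base, 8 bits per byte; the 3.3-gigabase genome size and the functional fractions are the assumptions stated above):

        GENOME_BASES = 3.3e9   # assumed genome size, in bases
        BITS_PER_BASE = 2      # 4 possible bases -> 2 bits each

        def functional_bytes(fraction):
            # bits of "functional" sequence, converted to bytes
            return fraction * GENOME_BASES * BITS_PER_BASE / 8

        for frac in (0.02, 0.10, 1.00):
            print(f"{frac:4.0%} functional -> {functional_bytes(frac):>11,.0f} bytes")
        #   2% ->  16,500,000 bytes
        #  10% ->  82,500,000 bytes
        # 100% -> 825,000,000 bytes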

  17. colewd:
    Allan Miller,

    Do you have any thoughts on how/why these mechanisms are transcribed from different chromosomes and still work together for critical function?

    They structurally fit better with certain sequences of DNA. That’s how basically all transcription factors find their target: they physically fit with a stretch of bases.

    By chance, other stretches in the chromosomes will look similar because they have almost the same sequence; these will also in turn be bound by TFs, though the frequency will depend on the degree of similarity. If the fit is lesser, the binding will be weaker and the TF will let go again more easily.

    In direct experiments, random-sequence DNA was bound by TFs almost as often as functional sequence.

  18. stcordova,

    But my personal issue on this academic (not really theological) question is this. If we take the 2% figure of functionality […]

    That does not seem like enough info to make a human.

    No-one takes such a low figure. But equally no-one bases their assessment on how much is ‘enough to make an X’. 2% is coding. That’s a simple fact from inspection. And it was a huge surprise. But of course there’s plenty of non-coding that still makes something biologically relevant.

    If we take the 10% figure of functionality […] That seems like a little too little to code something as complex as a human brain, much less an entire human.

    The human brain is modular. The cells in a brain are but little different from any other eukaryotic cell. The interconnectedness between neurons does not require a vast amount of unique protein, nor an enormous amount of fine control. Indeed, it is a fact that all the protein in all human tissues comes from 2% of the genome. Sure, there needs to be some tissue-specific switching. But why would there be (on 50%) 25 times as much sequence dedicated to control as the thing controlled? Is significantly more of the 2% translated in the brain than in (say) the skin? What of organisms without brains?

    Even 6.6 gigabits for the entire genome, which is 825,000,000 bytes, about a CD or DVD worth of data — that seems a bit too little unless there is some incredibly well-designed compression algorithm that can manage the 100 trillion cells in an adult. So I expect a high degree of specificity and function and integration.

    You manage 100 trillion cells by having most of them respond in the same way to the same developmental signals, with a gradient in effect. Simple chemical signals tell a cell what it is and where it is. There are not 100 trillion expression patterns.

    And you haven’t really addressed the difference between how many bits you must ‘need’ to make one non-human organism vs another closely related one. If not onions, pick another. If your intuition leads you to expect so many bases to make an A, how do you account for those similar organisms with substantially more or fewer?

  19. colewd,

    Do you have any thoughts on how/why these mechanisms are transcribed from different chromosomes and still work together for critical function?

    Chromosomes can break and fuse, so chromosomes are not fixed independent units over evolutionary time. I don’t see any more difficulty in a molecule going between chromosomes than to another location on the same one. The distinction is between ‘trans-acting’ and ‘cis-acting’ regulation. Cis-acting is always on the same chromosome – it’s a switch near the gene. Trans-acting involves a molecular intermediate, made by one gene and bound elsewhere.

    So it’s just down to binding affinity. RNA finds its complementary sequence with remarkable ease. Likewise, a protein can target a particular molecular configuration, whether nucleic acid, protein or something else, with remarkable specificity.

    The basic driver is thermodynamics, acting through hydrogen bonding. The bound configuration has a lower energy than the unbound, so like any free system it will shed energy and move from the latter to the former, through molecular forces alone.

    Think of PCR. You are looking for a specific sequence of DNA in an unknown sample. You need to find the beginning and end of this sequence, so you design 2 DNA primers: one matches a ‘sense’ sequence on one strand, the other an ‘antisense’ sequence at the other end of the target on the other strand. You give them a few cells’ worth of DNA to play with, and the primers ‘magically’ locate their 20 or so specific bases within that 6 billion plus. Polymerase creates a copy of each strand; then, when you permit annealing, these short stretches of DNA themselves ‘magically’ locate each other and bind amongst all the mess. Repeat for a few cycles and you have enough to assay.

    Trans acting regulation works the same way. It does not matter whether the factor is on the same or a different chromosome, a specific factor will ‘find’ its target.
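
    A back-of-envelope check on why a 20-base primer ‘finds’ its target without magic (a minimal sketch; the uniform-random-genome assumption is mine, and real genomes are anything but uniform):

        genome_size = 6.0e9   # a few cells' worth: ~6 billion bases of diploid human DNA
        primer_len = 20

        # Chance that a random 20-base window happens to match the primer exactly:
        p_site = 0.25 ** primer_len
        expected_spurious = genome_size * p_site   # one strand; roughly double for both
        print(f"{expected_spurious:.4f}")          # ~0.0055 chance matches in the whole sample
        # i.e. a specific 20-mer is essentially unique, so binding affinity
        # alone suffices to single out the target.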

    stcordova: As a creationist who has argued genetic entropy, I think there has to be junk. The fact that we have people with photographic memory tells me that maybe at one time most humans had such powerful memories and now we’ve lost that ability in general.

    Or maybe not

  21. A couple of things about human memory.

    We have people with near perfect recall. See Asperger’s. I’ve encountered some on the web. They seem to pay for their gift by having deficient social skills.

    Second, there are association techniques for remembering facts. When writing became popular, the older generation complained that people would no longer have to remember anything. It’s hard work.

  22. I think the association of histone methylation with human memory has been misunderstood. Histone modification affects the expression of genes (ie, not-junk) involved in memory retention. Histone methylation is not the means by which we remember. Organisms with bigger genomes don’t remember more stuff.

  23. Allan Miller: I think the association of histone methylation with human memory has been misunderstood. Histone modification affects the expression of genes (ie, not-junk) involved in memory retention. Histone methylation is not the means by which we remember. Organisms with bigger genomes don’t remember more stuff.

    I’ve already put the onion test to Sal in this context; he basically responded by saying it’s an argument from ignorance and that other species might use their DNA in another way we just haven’t figured out yet.
    The corollary of that response is that he must either think countless other species really DO have much better memory, or that for every species there’s some unique reason why. As in, he now has to invent tens of millions of unique, taxon-tailored hypotheses to explain the variations in genome size.

    He also seems to think histone-mediated methylations are involved in extremely fine-grained cell differentiation, such that basically every one of the trillion cells in the human body has its own, unique, sliiiiightly different expression pattern compared to a similar one sitting right next to it. A proposition that’s basically extracted from his ass. Which again fails the onion test. Particularly when you look at closely related species, such as the water-fleas I showed pictures of in the other thread.

    One water flea has three times the genome size of humans, at almost 10 billion base-pairs. Another species of water-flea, looking pretty much identical, has a genome of about 1 billion base-pairs. Does one have ten times the number of unique cell differentiations, or ten times the memory capacity and three times that of humans? Repeat these questions for MILLIONS of species and realize you’d have to ad-hoc millions of unique taxon-specific answers to explain it away.

    Or you could just accept that the junk hypothesis is the only one that explains these enormous variations in genome size, both within and between taxa, families and so on. And it explains why histone-mediated methylation follows basically the same kind of genome-wide pattern as pervasive transcription.

  24. colewd,

    Let me give a slightly different perspective. I agree provisionally with this comment:

    Rumraket:

    In direct experiments, random-sequence DNA was bound by TFs almost as often as functional sequence.

    But if that is the case, then it shouldn’t be surprising that there has to be a fairly complex apparatus to inhibit transcription rather than to initiate it. A lot of gene regulation is about preventing accidental transcription.

    You were probably referring to the HOTAIR ncRNA. The issue is that HOTAIR goes through a lot of trouble to inhibit transcription. It is a Rube Goldbergesque design.

    From the wiki entry on HOTAIR:

    https://en.wikipedia.org/wiki/HOTAIR

    The HOTAIR gene contains 6,232 bp and encodes a 2.2 kb long noncoding RNA molecule, which controls gene expression. Its source DNA is located within a HOXC gene cluster. It is shuttled from chromosome 12 to chromosome 2 by the Suz-Twelve protein.

    The 5′ end of HOTAIR interacts with a Polycomb-group protein Polycomb Repressive Complex 2 (PRC2) and as a result regulates chromatin state. It is required for gene-silencing of the HOXD locus by PRC2. The 3′ end of HOTAIR interacts with the histone demethylase LSD1.

    It is an important factor in the epigenetic differentiation of skin over the surface of the body. Skin from various anatomical positions is distinct, e.g. the skin of the eyelid differs markedly from that on the sole of the foot.

    This is an extraordinary Rube Goldbergesque design.

    I’ll give some of the gory details because that is how one will appreciate the specificity of each part of this moderately complex system. Since I can only give one picture at a time per comment I’ll spread this over several comments.

    DNA is wound like a wire around a histone. The part of the DNA that is wound around the histone makes the DNA inaccessible to transcription, especially if the promoter is in that stretch of DNA. Hence the gene is inhibited from transcription because there is no room for promoter molecules to work with the DNA.

    Something has to get the histone to budge, to slide, or whatever so that the DNA can allow a transcription factor to attach (bind) to it.

    What makes the histone budge is a chromatin remodeller. There are many chromatin remodellers, and each of these remodellers has readers that read the histone memory markings.

    If a histone has a mark that a given remodeller will recognize and if the mark indicates that the histone needs to be moved or acted on, the remodeller will do its thing and make the histone slide or whatever so that the DNA is now exposed and accessible to transcription factors.

    To see how intricate and specific this is, note the following diagram of the histone tail. Individual amino acids on the tail are capable of being written to, erased, or read. Even for a single amino acid there is a set of different writing mechanisms, erasing mechanisms, and reading mechanisms! This depicts the memory marks on the histone tail, which are read by things like chromatin remodellers and who knows what else.

    Even the nomenclature of each of these memory positions shows just how specific everything is. In the diagram below, look at the histone tail at the top; it is the tail of one of the two histone-3 histones. There is a marking of “27” on the tail with some symbols. The “K” means this is a lysine amino acid residue. The symbols mean that memory can be marked on it via acetylation or methylation, and in fact there are various levels of methylation: 1, 2, 3.

    So this single amino acid can be in 5 states: non-marked, acetylated, mono-methylated, di-methylated, tri-methylated.

    log2(5) ≈ 2.32 bits of information
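
    A minimal sketch of that count (the five states are the ones listed above; treating each modifiable residue as an independent, equiprobable symbol is my own simplifying assumption):

        import math

        states = 5  # non-marked, acetylated, me1, me2, me3
        bits_per_site = math.log2(states)
        print(f"{bits_per_site:.2f} bits per modifiable residue")  # ~2.32 bits

        n_sites = 10  # assumed number of independently writable residues on one tail
        print(f"{n_sites * bits_per_site:.1f} bits per tail")      # ~23.2 bits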

    But the point is, lysine at position 27 can retain a memory state that provides information enabling or blocking chromatin remodellers. It is no trivial task to move read/write/erase heads to the histone memory marks, btw.

    When they say H3K27me3 – that means the tri-methylated memory mark on one of the Histone 3 histones at the 27 position where there is a lysine.

    H3 – histone 3 (not specified which of the 2 histone-3s since there are two in each nucleosome)

    K – lysine
    27 – position 27
    me3 – tri-methylated state

    Now that is just a setup for understanding the HOTAIR/PRC2 interaction:

  25. OMagain: Can you make predictions using this power? Or do you just look at things and say, yes, that is as expected?

    Yes. I would predict that we should see a progression from the horizontal position towards a vertical, bipedal stance through the evolution of animal classes from amphibia, through reptiles, to mammals and birds. We see the beginning of this in the way that frogs hold themselves further from the horizontal than newts. Some lizards take this further in their bipedal locomotion. We see true bipedalism within theropods, birds and mammals, and true vertical stance in the human.

    Another prediction concerns the individual expression of the pentadactyl limb. If we study the development from embryo to adult in a specialist such as a bird, then, if my thinking is anywhere near correct, we would expect to see the forelimb changing from a human-like arm at the beginning of development to the specialist structure we know the wing to be. This is indeed what we see.

    Now if a structure like the forelimb of a bird began with the potential to become equivalent to a human forelimb but did not reach that potential, but instead was adapted to suit its primary use as an aerodynamic lift-producing device, then it might appear to some as a sort of Rube Goldberg machine. Even though it performs its task extremely well, its internal structure and development is overly complicated for its purpose.

    On the other hand (sorry about the pun) the structure of the human forelimb is ideal for the way we use it. It needs to be that complex for us to use it as we do.

    From scienceblogs.com

    The unique capabilities of the human hand enable us to perform extremely fine movements, such as those needed to write or to thread a needle. The emergence of these capabilities was undoubtedly essential in human evolution: a combination of individually movable fingers, opposable thumbs and the ability to move the smallest finger and ring finger into the middle of the palm to meet the thumb gives us dexterity that is unparalleled in the animal kingdom.

    These fine motor skills are according to John Harshman nothing but course examples of how we use our forelimbs.

  26. CharlieM: These fine motor skills are according to John Harshman nothing but course examples of how we use our forelimbs.

    They are, you know.

    Anyway, your predictions about forelimbs are not borne out by evidence. There is no universal move toward bipedalism, which is in fact quite rare in evolution. Frogs are not some kind of incipient bipedalism. Sometimes there’s a move from bipedal to quadrupedal, as happened in, for example, sauropods and many groups of ornithischians. The human arm is not the archetype of a forelimb, and is specialized in many ways. The primitive phalangeal formula for amniotes is for example 2-3-4-5-4, but the primitive formula for mammals is 2-3-3-3-3, a major modification. And of course the opposable thumb is a fairly recent development.

    So, based on your own predictions, your theory is falsified.

  27. HOTAIR acts as a recruiter for PRC2. It would be helpful to show just how complex the PRC2 machine is.

    PRC2 stands for Polycomb Repressive Complex 2.

    PRC2 is a writer of the H3K27me3 mark and likely the H3K27me2 mark, and maybe some other marks. But to appreciate how delicate this process is, I just present a diagram from this paper, which actually goes into all the known details (what little we actually know, that is):

    http://www.nature.com/nature/journal/v469/n7330/full/nature09784.html

    Note the several parts of the PRC2 complex; it is not just one protein or domain. It is made of proteins and ncRNA (like HOTAIR).

    Note the arrow that says “ncRNA binding” and the ladder attached. The ladder represents a variety of possible ncRNAs that can be attached there, of which HOTAIR is one (there are others like XIST, KCNQ1OT1, and probably other “junk” DNAs we’ve yet to find function for).

    The EZH1/2 bubble is basically then a plug-and-play port for an ncRNA such as HOTAIR to dock. It may be worth mentioning that EZH2 also has memory like the histones — its memory location on the threonine amino acid residue at position 350 has to be flipped on via phosphorylation to allow or prevent docking of ncRNAs like HOTAIR. So the Rube Goldbergesque design seems almost limitless in its complexity, because the more one looks and understands, the more questions are raised!

    The SUZ12 bubble is the shuttle bus that was used to move the HOTAIR ncRNA from chromosome 12 to chromosome 2. It is now parked at the PRC2 parking lot in the diagram.

    Btw, I should point out, it’s rather amazing the shuttle bus can do this without any active propulsion. It has to sail on the winds of Brownian motion to bring HOTAIR from chromosome 12 to the PRC2 on chromosome 2, bumping into a buzzillion other molecules along the way in a molecular traffic jam. Fenomenal Black Magic (FBM).

    It would not surprise me if the SUZ-12 shuttle bus also has little memory marks on it (aka epiproteomic post-translational modifications) to enable or prevent passengers like HOTAIR from boarding the bus.

  28. John Harshman: They are, you know.

    So you are trying to tell us that fine is synonymous with course

    John Harshman
    Anyway, your predictions about forelimbs are not borne out by evidence. There is no universal move toward bipedalism, which is in fact quite rare in evolution. Frogs are not some kind of incipient bipedalism. Sometimes there’s a move from bipedal to quadrupedal, as happened in, for example, sauropods and many groups of ornithischians.

    And what happened to those sauropods and ornithischians? They certainly aren’t an example of progressive evolution. In fact they regressed to the point of dying out.

    The move towards bipedalism is rare in amphibians, less rare in reptiles, ubiquitous in birds and extremely common in primates. How many individual primates exist on the planet? What percentage of those primates are fully bipedal? Can you see the trend?

    John Harshman
    The human arm is not the archetype of a forelimb, and is specialized in many ways. The primitive phalangeal formula for amniotes is for example 2-3-4-5-4, but the primitive formula for mammals is 2-3-3-3-3, a major modification. And of course the opposable thumb is a fairly recent development.

    So, based on your own predictions, your theory is falsified.

    You only think it is falsified because of your narrow perspective. Do you agree that humans have evolved from primal tetrapods? We were once reptile-like. Do you know the phalangeal formula for these ancient ancestors of ours?

    You may or may not know that I subscribe to the Blakean mantra, “to see the world in a grain of sand”. The whole is reflected in the parts.

    The human fetus has over 300 bones, the adult just over 200 bones. As we harden into our physical form it is natural for the number of bones to diminish.

    If you are going to talk about the human arm as an archetype then you have to consider it throughout its evolution and its development in the individual. Every single instance of it is an expression of the archetypal form. The Goethean archetype is mobile and dynamic.

  29. HOTAIR not only attaches to PRC2. It attaches to LSD1 (lysine-specific demethylase 1). LSD1 is an eraser, PRC2 is a writer! It attaches to a writer on one end and an eraser on the other!

    http://science.sciencemag.org.mutex.gmu.edu/content/329/5992/689.full
    Long Noncoding RNA as Modular Scaffold of Histone Modification Complexes

    Long intergenic noncoding RNAs (lincRNAs) regulate chromatin states and epigenetic inheritance. Here, we show that the lincRNA HOTAIR serves as a scaffold for at least two distinct histone modification complexes. A 5′ domain of HOTAIR binds polycomb repressive complex 2 (PRC2), whereas a 3′ domain of HOTAIR binds the LSD1/CoREST/REST complex. The ability to tether two distinct complexes enables RNA-mediated assembly of PRC2 and LSD1 and coordinates targeting of PRC2 and LSD1 to chromatin for coupled histone H3 lysine 27 methylation and lysine 4 demethylation. Our results suggest that lincRNAs may serve as scaffolds by providing binding surfaces to assemble select histone modification enzymes, thereby specifying the pattern of histone modifications on target genes.

    A writer and an eraser connected together simultaneously? I think God has a wonderful sense of humor. Reminds me of this device that has a writer on one end and an eraser on the other:

  30. stcordova,

    Thanks for all these details you are providing, Sal. I prefer looking at nature in its directly sensible macro form, but these micro views that we are just beginning to get a grip on are fascinating, and I’ll need to take the time to study your posts in more detail.

  31. CharlieM: So you are trying to tell us that fine is synonymous with course

    No, not even if you spell “coarse” right.

    And what happened to those sauropods and ornithischians? They certainly aren’t an example of progressive evolution. In fact they regressed to the point of dying out.

    Wow. It’s been a while since I saw such a blatant claim that extinct species must be successively inferior to anything that came later. And “regressed”? This sort of thing went out of style with orthogenesis. You should at least progress (and science, unlike life, definitely progresses) to 20th Century biology.

    The move towards bipedalism is rare in amphibians, less rare in reptiles, ubiquitous in birds and extremely common in primates. How many individual primates exist on the planet? What percentage of those primates are fully bipedal? Can you see the trend?

    Wow again. There is no move toward bipedalism in amphibians. There are no extant bipedal reptiles unless you count birds. There is no “ubiquitous” move toward bipedalism in birds, just retention of the ancestral dinosaurian condition that evolved once in that common ancestor. And as for fully bipedal primates, there is one such species at present. And you say there’s a trend? This is a shameless, Procrustean attempt at a scala naturae.

    You only think it is falsified because of your narrow perspective. Do you agree that humans have evolved from primal tetrapods? We were once reptile-like. Do you know the phalangeal formula for these ancient ancestors of ours?

    Yes, I do. As I said, 2-3-4-5-4. If there’s a tetrapod archetype, that would be it. No, it’s your perspective that’s narrow. You see a linear advance from primitive to human, and I see the full diversity of life.

    You may or may not know that I subscribe to the Blakean mantra, “to see the world in a grain of sand”. The whole is reflected in the parts.

    That’s a fine bit of empty phrasing, and there’s certainly much about the world in a grain of sand. But hardly the whole world; sorry, but only part of the whole is reflected in any given part.

    The human fetus has over 300 bones, the adult just over 200 bones. As we harden into our physical form it is natural for the number of bones to diminish.

    If you are going to talk about the human arm as an archetype then you have to consider it throughout its evolution and its development in the individual. Every single instance of it is an expression of the archetypal form. The Goethean archetype is mobile and dynamic.

    No idea what any of that meant.

  32. On one end of the HOTAIR pencil is the writing component called PRC2, which I detailed above. PRC2 is a non-trivial device.

    The other end of the HOTAIR pencil connects to an eraser. The eraser is not trivial either. Here is a diagram of the erasing machine called the LSD1/coREST/REST complex.

    To understand its operation I’ll have to go into some details, but first a diagram of this wonderful Rube Goldbergesque design. Note the bubbles called HDAC1 and HDAC2. I will provide a video of the role of the HDAC part in a subsequent comment, but for now, perhaps a picture is worth a thousand words:

  33. For the LSD1/coREST/REST eraser to work it has to first loosen up the DNA from around the histone. Amazingly there is yet another Rube Goldbergesque design feature on the DNA itself that makes this possible.

    There are 3 major molecular classes in the translation cycle:

    1. DNA polymers
    2. RNA polymers
    3. proteins (amino acid polymers)

    Each class of polymers can have specific locations on them chemically marked and thus act like a memory device that can be written/erased/read.

    In the case of DNA we call the marks epigenetic marks; for RNA, epitranscriptomic marks; and for proteins we might call them epiproteomic marks, but post-translational modification (PTM) is the more common term.

    This video shows how an HDAC parks itself on the memory mark on DNA called a cytosine methylation in order to remove memory marks (acetylations).

    Now in the above LSD1/coREST/REST complex, the goal of the complex is to get a memory mark erased from a lysine on a histone.

    So get this:

    HDAC reads a memory mark on DNA and parks on it so that it can erase an acetyl memory mark on one histone, in order to allow the LSD1/coREST/REST complex to erase yet another memory mark on the histones.

    Oddly, the process should increase the likelihood the chromatin will compact, but here is the HDAC part of the LSD1/coREST/REST complex in action. Note the epigenetic cytosine methylation memory mark on the DNA that makes this possible:

    https://www.youtube.com/watch?v=29doT6Hf2MI

  34. stcordova,

    Allan
    Rumraket
    Thanks for the comments. This will take me a little time to sort through. Very interesting comments, and Sal, thanks for the diagrams and videos.

  35. dazz: No need for faith when there’s evidence

    Science is a faith-based enterprise. What counts as evidence is faith-based.

  36. newton: Wouldn’t any probability calculations have to take into account factors not present in stacks of cards? Chemical bonds etc?

    Do you actually believe there are no chemical bonds in a deck of cards?

  37. I’m going through some painful details to drive home the point that there is too much specificity and orchestration going on for it all to be dismissed as junk.

    This is in stark contrast to claims that the DNA is 90% junk. A lot of this has to be orchestrated carefully; it can’t be willy-nilly and still work. The system is very fragile at certain critical points.

    It may look like a silly Rube Goldbergesque design, but that is exactly the sticking point: does the silliness and round-about way of doing things negate the staggering complexity and specificity required to assemble such an intricate cascade of molecular events? The more extravagant the Rube Goldbergesque design is, the more complex it is.

    I really don’t think random mutation and natural selection would be inclined to create Rube Goldbergesque designs. Darwin instinctively knew this when he saw the Rube Goldbergesque design of a peacock’s tail and was sickened by its implications. In contrast, such Rube Goldbergesque extravagances put me in awe of the Creator’s genius:

  38. Mung,

    By that logic, God exists.

    Wrong wrong. God is not being observed to exist, you simply attribute phenomena to him. Accompanied, or not, by rilly rilly complicated diagrams.

  39. stcordova,

    It would not surprise me if the SUZ-12 shuttle bus also has little memory marks on it (aka epiproteomic post-translational modifications) to enable or prevent passengers like HOTAIR from boarding the bus.

    A reasonable hypothesis given the nuclear pore complex uses a similar technique to “manage” proteins moving in and out of the nucleus.

  40. colewd,

    A reasonable hypothesis given the nuclear pore complex uses a similar technique to “manage” proteins moving in and out of the nucleus.

    This is conflating two very different mechanisms. In general, specificity is a long established fact in biology. This does not make every hypothesis regarding specificity, its mechanism or role, equally valid.

  41. Mung,

    Do you actually believe there are no chemical bonds in a deck of cards?

    Yeah, newton, he’s gotcha there alright!
