Dembski: “Conservation of Information Made Simple”

At Uncommon Descent, Bill Dembski has announced the posting of a new article, Conservation of Information Made Simple, at Evolution News and Views.  Comments are disabled at ENV, and free and open discussion is not permitted at UD. I’ve therefore created this thread to give critics of ID a place to discuss the article free of censorship.

74 thoughts on “Dembski: ‘Conservation of Information Made Simple’”

  1. One really nice feature of the surfing metaphor is it combines change and homeostasis. I’m kind of excited about that.

  2. Yes, kairosfocus’s “island” is the “wave” and it moves.

    In this analogy, our surfer could transit to another “wave” if he got close enough to one.


  3. Or wipe out.

    I offer a possible blog post title from a childhood book. Surfing The Web Of Life.

  4. Another problem with the ‘islands of function’ metaphor is that it encourages people to think in just three dimensions and to apply the intuitions they’ve acquired from a lifetime of operating in three-dimensional space. This can be very misleading.

    In our three-dimensional world an island only has to be surrounded by water in two dimensions. That is, if you move far enough north, south, east or west, or any combination of these directions, you’ll find yourself underwater.

    In an n-dimensional space an ‘island’ has to be surrounded by water in n-1 dimensions. If there is an isthmus along any one of those n-1 dimensions, or any combination of those n-1 dimensions, then you don’t have an island. Thus ‘islands’ become less likely as the dimensionality increases.

    Real fitness landscapes have thousands of dimensions. How likely is it that each of those dimensions, plus every combination of those dimensions, leads underwater?
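    This intuition is easy to check with a toy model (nothing here comes from Dembski’s article; the model and parameters are invented for illustration). Assign independent random fitnesses to the corners of an n-dimensional hypercube; a corner is a strict ‘peak’ only if it beats all n of its single-step neighbours, which by symmetry happens with probability 1/(n+1):

```python
import random

def local_max_fraction(n, trials=20000, seed=1):
    """Fraction of random points in {0,1}^n whose fitness beats all n
    Hamming-1 neighbours, with i.i.d. uniform fitnesses drawn on the fly.
    By symmetry the expected value is 1/(n+1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Only the rank of the focal point among itself and its n
        # neighbours matters, so we can draw fitnesses independently.
        focal = rng.random()
        if all(rng.random() < focal for _ in range(n)):
            hits += 1
    return hits / trials

for n in (2, 10, 100):
    print(n, round(local_max_fraction(n), 3))
```

    So in two dimensions about a third of points are peaks you cannot climb off, while in a hundred dimensions under one percent are. Real landscapes are not i.i.d., but the direction of the effect is the point.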

    The weaknesses raised in this thread have all been brought to KF’s attention at UD, and more than once. Not surprisingly, he’s chosen each time to ignore the challenge instead of confronting it.

  5. I brought up the thousands of dimensions point with gpuccio when he argued a designer could use targeted selection. He just waved it off. I argued that natural selection sees all dimensions simultaneously. To me this is a major argument against design. Unless the Designer is You Know Who.

  6. Joe: “In Dembski’s scenario we can get the final target of a 6 without securing the ‘6’ machine. And we would never know if the 6 we got was from the ‘6’ machine or any of the other 5 machines.”

    Why would Dembski even bring something as mathematically conceptual as this to an argument that concerns itself with physical realities?

    That would be like actually calculating the odds of where in your cup a half cup of water will end up.


  7. Dembski’s description of the problem is completely hosed. It’s interesting Joe would answer on UD the question I asked him on DiEblog. The target is a 6. You never know if you picked the machine rich in 6s, regardless of the outcome. You could, of course, run thousands of trials and figure out how the machines are configured, but that’s not allowed under the rules of this game. If it were allowed, then the search for a search could easily find a rule that would outperform a blind search.
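    A quick Bayesian calculation makes the point concrete (the payout probabilities below are made up, since the article’s setup is only paraphrased in this thread): even after seeing a 6, the odds still favour your having picked an ordinary machine.

```python
from fractions import Fraction

# Hypothetical setup (illustrative numbers, not from Dembski's article):
# 6 machines, one "rich in 6s" paying a 6 half the time, the other five
# paying a 6 with the ordinary 1/6 probability. A machine is picked uniformly.
p_rich = Fraction(1, 2)      # chance the rich machine yields a 6
p_plain = Fraction(1, 6)     # chance an ordinary machine yields a 6
prior_rich = Fraction(1, 6)  # one machine in six is the rich one

# Bayes' rule: P(rich | saw a 6)
evidence = prior_rich * p_rich + (1 - prior_rich) * p_plain
posterior_rich = prior_rich * p_rich / evidence
print(posterior_rich)        # 3/8: the 6 probably came from an ordinary machine
```

    A single success is weak evidence; only repeated trials would identify the machine, and that is exactly what the rules of the game forbid.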

  8. Yes, kairosfocus’s “island” is the “wave” and it moves.

    Indeed. The environment, the real ‘designer’, is dynamic, and modelling attempts that don’t account for the multi-dimensionality of the environment are doomed to failure.

  9. kairosfocus: “And, sooner or later, the dirty objector tactics are going to backfire bigtime.”

    The ID side has taken on the role of objector: evolution has a mechanism, while Dembski won’t even accept that ID has to lower itself to that level of detail.

    Until ID works out a mechanism, they have nothing to teach.

    As for fine-tuning of the universe, I consider that one of ID’s weakest arguments.


  10. petrushka quoth (about an adaptive landscape that changes through time):

    That seriously changes my understanding of punctuated equilibrium.

    Indeed.  This was raised in the debates in the 1980s between advocates of punctuated equilibrium and theoretical population geneticists.  There are rather ordinary population-genetic mechanisms for punctuated patterns of change, so the observation of the pattern does not establish that it requires a new mechanism. There’s a really nice paper on punctuated patterns arising from changes in an adaptive landscape by Mark Kirkpatrick in American Naturalist in 1982.
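    A toy recursion illustrates the idea (this is only a sketch in the spirit of, not a reproduction of, Kirkpatrick’s model; every parameter value is invented). A population mean chasing an optimum that jumps produces long stasis and then rapid change from perfectly ordinary dynamics:

```python
# Toy quantitative-genetic recursion: the population mean z chases an
# optimum under stabilizing selection,
#     z' = z + h2 * s * (opt - z),
# while the landscape's peak sits still for a long stretch, then jumps.
h2, s = 0.4, 0.2          # heritability and selection strength (made up)
z, history = 0.0, []
for gen in range(300):
    opt = 0.0 if gen < 150 else 10.0   # the adaptive peak jumps at gen 150
    z += h2 * s * (opt - z)
    history.append(z)

# Long stasis, then a geometric rush to the new peak: a punctuated
# pattern with no new evolutionary mechanism required.
print(round(history[149], 3), round(history[175], 3), round(history[299], 3))
```

    The trait sits at the old peak for 150 generations, then closes most of the gap to the new one within a few dozen: stasis and punctuation from one unbroken process.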

    (petrushka again):

    One really nice feature of the surfing metaphor is it combines change and homeostasis. I’m kind of excited about that.

    Careful.  I suspect you are referring to C.H. Waddington’s image of “canalization” in development as little balls rolling down surfaces.  But that is in developmental time in the development of one individual, and this is in evolutionary time.  Those are different.

    (petrushka):

    Or wipe out.

    OK, my brain has failed.  How does one collapse into the adaptive landscape and darned near drown? Or get hit on the head by your surfboard?

    For anyone who thinks that adaptive landscapes are fanciful analogies of no use to actual evolutionary biologists — not so.  I just finished last month co-teaching a weeklong workshop course at the National Evolutionary Synthesis Center on Evolutionary Quantitative Genetics, and the notion is central to that work.   You can all listen to audio recordings of our lectures here (and the slides are there as PDF or PPT files too).  And you can run the lab exercises too.

  11. Well, one difference between this site and UD is that we can disagree without being accusatory, and I can accept correction. Wipe out was a fanciful term for extinction. I suppose I’m wrong about stasis, but I was conjecturing that, as long as a body type is adapted, there will be negative selection against the kinds of somatic changes that would show up in fossils.

    I assume that molecular evolution continues during equilibrium periods.

  12. I started listening. Some lectures seem to be unavailable. And I am obviously in over my head on this.

    I just assumed that long stretches with no obvious physical changes indicated some constraint. I guess I had in the back of my mind the possibly erroneous thought that feral dog populations tend to revert to one form, even though they might carry alleles of various breeds. Fossils wouldn’t indicate the hidden potential for variety. My thinking on this is obviously unencumbered by actual knowledge.

    I seem to have read that large populations are slower to evolve and founder populations faster. But a large population seems indicative of success. A significant change in environment or predation might reduce the population.

  13. petrushka wrote (about the online postings of audio files and slides for my NESCENT summer course):

    I started listening. Some lectures seem to be unavailable.

    I am embarrassed.  They are almost all missing!  I just wrote to the person who should have put them there and asked that the proper links be made available there.  I hope that this can happen in a day or two.  A couple of the guest lectures (Mackay and Losos) were not recorded, as I had not asked them for permission to record and they contain unpublished research results.  Also the computational lab sessions were not recorded, as there were no lectures then.  Thanks for pointing this out.

  14. Given that Dembski wisely does not dirty his hands dealing with objectors, perhaps his principal representatives on earth, Joe and Kairosfocus, can address these problems with an ‘islands of function’ approach to the exploration of the phase space of a particular n-length, v-variant polymer (v^n total sequences):

    1) Define ‘function’. Many amateur theoreticians are convinced that the only functional protein, for example, is one with catalytic activity. This is but one role that proteins play in organisms. Many of those functions – including catalysis itself – are not nearly as rigidly sequence-dependent and non-substitutable as design theorists appear to think.

    2) Overall function richness. If you don’t know how the space of ‘well-formed strings’ is structured wrt function (however defined), you can’t simply declare its structure to be islanded. All attempts to analogise (eg from English strings) are completely redundant. Words don’t fold up into tertiary structures with the capacity to affect thermodynamic potential.

    3) ‘Local’ function richness. Evolutionary exploration is from current established ‘bridgeheads’ of viability, not a series of madman’s leaps in the dark. So the structure of the distribution of function in the local region that functional strings in living organisms tend to occupy is more important than the overall structure, except at OoL (and the probably separate OoP – Origin of Protein).

    4) There is a non-static underlying fitness landscape. It is not a simple case of string changes probing points of fixed fitness. The adaptive landscape undulates and ripples continually, due to both external and internal changes to the ‘environment’ which is the ultimate arbiter of whether a change is beneficial, neutral or deleterious (and includes conspecifics). A shifting landscape can ‘capture’ points currently at sea.

    5) The interconnectedness of functional space. A 2- or 3-dimensional space can easily be mentally divided up into islands surrounded by seas – we inhabit such a space in ‘3D-world’. But increasing the dimensionality increases the number of potential pathways from point to point, which affects the availability of directional selective ‘improvement’ or pathways of drift.
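    Point 5 can be illustrated with a small simulation (a toy model; the density, dimensions and seed are arbitrary). Mark points of a binary sequence space as ‘functional’ at a fixed density p and count how many functional points are genuine one-step islands, with no functional single-mutation neighbour. The expected isolated fraction is (1-p)^n, which collapses as dimensionality grows:

```python
import random

def isolated_fraction(n, p=0.3, seed=2):
    """Mark each vertex of {0,1}^n 'functional' with probability p and
    return the fraction of functional vertices with NO functional
    Hamming-1 neighbour, i.e. true one-step 'islands'.
    Expected value: (1 - p) ** n."""
    rng = random.Random(seed)
    functional = {v for v in range(2 ** n) if rng.random() < p}
    isolated = sum(
        1 for v in functional
        if not any((v ^ (1 << bit)) in functional for bit in range(n))
    )
    return isolated / len(functional)

for n in (12, 20):
    print(n, round(isolated_fraction(n), 4))
```

    Under this toy model, at a fixed density of 0.3 roughly 1% of functional points are isolated in 12 dimensions and under 0.1% in 20; at the thousands of dimensions of a real sequence space, isolation at any fixed density becomes vanishingly rare.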

  15. Your point 1 there is one which has been reiterated time and time again, but which has never been acknowledged, let alone refuted, by ID “theoreticians”.

    Even Axe, although he has in the past published at least one paper pointing out just that “substitutability”, seems just to skate over its import for ID.

  16. I have a dumb question about the size of the search being done by evolution. My ignorant understanding sees, let’s say, 30,000 genes, maybe 2000 bases per gene,  four possible values per base. I see that as 120 million possible point mutations (or 90 considering they already have a value).

    So if I’m off by two orders of magnitude, that could be 9000 million possible changes.

    A large number but not astronomical, and trivial compared to populations of bacteria over millions of years.

    How stupid is this?

    How difficult is it to do a blind, exhaustive search?
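    Running those numbers explicitly (taking the 30,000-gene, 2,000-base figures at face value) gives totals of the same order, though somewhat larger than the 120 million quoted above:

```python
# Redoing the back-of-envelope arithmetic above (same input figures,
# which are themselves rough guesses):
genes = 30_000
bases_per_gene = 2_000
sites = genes * bases_per_gene           # 60 million nucleotide positions
point_mutations = sites * 3              # each site can change to 3 other bases
print(f"{sites:,} sites, {point_mutations:,} possible point mutations")
# A couple of hundred million single-step variants -- large, but, as noted
# above, trivial next to bacterial population sizes over geological time.
```

    So the one-mutation neighbourhood of such a genome has on the order of 10^8 members, a number a modest bacterial population samples exhaustively in a very short time.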

  17. Well, there has to be viability of intermediates. Exclusive point mutation might find itself a little stumped by blind alleys. But there are plenty of grosser mechanisms to shake things up – including the ID-er’s favourite, the transposon, whose occasional donation of beneficial sequence they regard as ‘the’ reason over half of our genome is stuffed with their debris.

    Replicating genomes are like a ‘superfluid’. Any viable crack or crevice in ‘search space’ and they can trickle through it. ID insists that function is sealed and watertight. But there is no reason to suppose that it is. And huge leaps in the dark can be made with ease – it does not have to be strictly stepwise across a tottering bridge of “dad to dam to dum to mum” (see – there are answers in Genesis, prog-rock fans!). Most will be detrimental but – as with any other kind of mutation – evolution proceeds via the ones that aren’t.

    It really depends on the structure of the space – strictly, the structure of historic space. Every species on earth may have nowhere left to go, but historic evolution could still have proceeded ‘naturally’ up to this point.

    One shell-game is constructing a 20^n space, because there are 20 amino acids now. But for the organism that first stitched together a peptide bond, there may have been as few as 1. Such a poly-amino-acid could still be ‘functional’, even if it could not catalyse a reaction. Then the first catalytic peptide … what’s the minimum specification for a catalytic peptide? Dembski? KF? Joe? Anyone?  

    Once the protein system is in place, the real turbo charger is recombinant sex. This takes proven viable half-genomes (though really, we are double-genomes), hooks them up for a life then, at the end, produces a whole bunch of gametes from stitched-up amalgams of the parental chromosomes. This gives a much more effective ‘search’, by probing regions that would take many single amendments (which may hit impassable ‘roadblocks’), and by recombining each ‘successful’ gene into the population independently of all the others and likewise dropping the deadweights. Without recombination, a gene’s selective advantage/disadvantage is subsumed within that of the collective it finds itself bound to. 

    Turning crossover on in GAs likewise often speeds up ‘search’ no end.
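    For anyone who wants to see this directly, here is a minimal GA sketch on the toy ‘OneMax’ problem (maximise the number of 1-bits); every name and parameter here is illustrative, not taken from any particular GA library:

```python
import random

def onemax_ga(use_crossover, bits=60, pop_size=40, gens=80, seed=3):
    """Minimal generational GA on OneMax (fitness = number of 1-bits).
    Truncation selection, per-bit mutation, optional single-point
    crossover. Returns the best fitness in the final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sum, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if use_crossover:
                cut = rng.randrange(1, bits)
                child = a[:cut] + b[cut:]       # single-point crossover
            else:
                child = a[:]                    # clone one parent
            # per-bit mutation at rate 1/bits
            child = [g ^ 1 if rng.random() < 1 / bits else g for g in child]
            children.append(child)
        pop = children
    return max(sum(ind) for ind in pop)

print("no crossover:", onemax_ga(False))
print("crossover:   ", onemax_ga(True))
```

    With crossover on, good half-strings from different parents can be combined in a single step rather than rediscovered by mutation; on rugged problems the speed-up is typically far more dramatic than on this deliberately easy one.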

  18. But leaps into the dark are precisely what feeds ID delusions. I’m rather ignorant of details, but it is my understanding that most nonfatal leaps occur in noncoding sequences, or create redundant sequences through duplication, which are then available for stepwise modification.

    I’m looking for some specific range of numbers to counter the ten-to-the-googol search space claimed by ID.

  19. OK, but those are going to sound suspiciously like search for a search. I’m looking for the size of the search space, the local vicinity. Something to counter the 10^500 argument. But it would be interesting to know what percentage of successful mutations are other than point mutations. I was under the impression it is low.

  20. Ummm … it’s complicated!

    http://www.genetics.org/content/148/4/1667.full

    I think there’s a danger of getting sucked into a game played by ‘their’ rules. They assert that the haystack is enormous and the number of needles tiny without doing any work to back that up, and we have to scuttle round to refute that assertion.

    It is also frequently hard to determine if they are talking of DNA space, protein space, a fitness landscape, a single sequence, a bacterium, a human …

    The papers linked elsewhere which found significant function in a tiny corner of the overall space by, essentially, massive ‘leaps in the dark’ certainly call the assertion into question – we are no more computationally equipped to assay the structure of the space than they are to make their assertions about it, but we do have empirical sampling data.

    One consideration is the mechanistic basis to the various transformations. This is not a ‘search for a search’, but simply stuff that happens in an imperfect replicative world (including meiotic recombination, which even without errors can be viewed as a kind of mass-mutation, with generally benign consequences).

    A genome will probe its ‘point-mutation neighbourhood’ readily, will hit a damaging fraction but can uncover any local benefit. But the logic of ‘leaps in the dark’ is the same. Each transformation occurs at a particular (if variable) rate, and a particular (if variable) fraction may prove to be beneficial. That fraction may be zero for really wild leaps, such as trisomies in humans (distinct from broken chromosomes, which are quite common). But we can see that such leaps have actually happened (if, that is, one accepts Common Descent! :D). They can shake genomes out of local fitness maxima, in much the same way that drift does at population level. People do much the same with GAs.

    If you look at DNA space, you can see how much closer regions really are, in terms of the probability that the commonest types of error will get there easily from a given point. Many leaps will fail; evolution proceeds courtesy of the ones that don’t. AGTCAGCTT is a simple step away from – for example – AGTGCAGCTT (1 insertion), AAGCTGACT (reverse complement), and AGTCAAGTCAGCTT (short 5-base duplication). The peptides produced, should the sequence happen to frame-align within a translated region, are Ser-Gln-Leu, Ser-Ala-Ala, Lys-Leu-Thr and Ser-Gln-Val-Ser respectively. These are a significant distance apart in peptide space, but a short step mechanistically in DNA space. Then there are ‘chemical space’ and ‘folding space’ to consider – primary sequence determines the fold, but what ultimately matters are the chemical properties of the folded protein – trying to ‘digitise’ an analogue space is really rather misleading.
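    Those sequence claims can be checked mechanically (a minimal sketch; the codon table holds only the handful of entries these examples actually use):

```python
# Checking the sequence arithmetic above. The tiny codon table covers
# only the codons these particular examples need.
CODONS = {"AGT": "Ser", "CAG": "Gln", "CTT": "Leu", "GCA": "Ala",
          "GCT": "Ala", "AAG": "Lys", "CTG": "Leu", "ACT": "Thr",
          "CAA": "Gln", "GTC": "Val", "AGC": "Ser"}
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(dna):
    """Reverse complement: complement each base, then reverse."""
    return dna.translate(COMP)[::-1]

def peptide(dna):
    # translate whole codons only, ignoring any trailing partial codon
    return "-".join(CODONS[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

start = "AGTCAGCTT"
print(peptide(start))                            # Ser-Gln-Leu
print(peptide("AGTGCAGCTT"))                     # one insertion: Ser-Ala-Ala
print(revcomp(start), peptide(revcomp(start)))   # reverse complement
print(peptide("AGTCAAGTCAGCTT"))                 # 5-base duplication
```

    Each variant is one common replication error away from the starting string, yet the translated peptides differ at most positions: close in mutational space, far apart in peptide space.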

    If one considers a hypothetical early scenario with few amino acids, the space is fairly small. Every time you add an acid, you increase the space exponentially – but if you already had function in the smaller space, you aren’t going to make it RARER simply by adding an acid to your toolkit! Within 5 or 6 broad categories, the acids aren’t all that different from each other, but the variants increase the subtlety available in fine tuning.
