Searching for a search

Dembski seems to be back online again, with a couple of articles at ENV, one in response to a challenge by Joe Felsenstein for which we have a separate thread, and one billed as a “For Dummies” summary of his latest thinking, which I attempted to precis here. He is anxious to ensure that any critic of his theory is up to date with it, suggesting that he considers that his newest thinking is not rebutted by counter-arguments to his older work. He cites two papers (here and here) he has had published, co-authored with Robert Marks, and summarises the new approach thus:

So, what is the difference between the earlier work on conservation of information and the later? The earlier work on conservation of information focused on particular events that matched particular patterns (specifications) and that could be assigned probabilities below certain cutoffs. Conservation of information in this sense was logically equivalent to the design detection apparatus that I had first laid out in my book The Design Inference (Cambridge, 1998).

In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled “Conservation of Information Made Simple” (go here).

 

As far as I can see from his For Dummies version, as well as from his two published articles, he has reformulated his argument for ID thus:

Patterns that are unlikely to be found by a random search may be found by an informed search, but in that case, the information represented by the low probability of finding such a pattern by random search is now transferred to the low probability of finding the informed search strategy. Therefore, while a given search strategy may well be able to find a pattern unlikely to be found by a random search, the kind of search strategy that can find it is itself commensurately improbable, i.e. unlikely to be found by random search.

Therefore, even if we can explain organisms by the existence of a fitness landscape with many smooth ramps to high fitness heights, we are left with the even greater problem of explaining how such a fitness landscape came into being from random processes, and must infer Design.

I’d be grateful if a Dembski advocate could check that I have this right, remotely if you like, but better still, come here and correct me in person!

But if I’m right, and Dembski has changed his argument from saying that organisms must be designed because they cannot be found by blind search to saying that they can be found by evolution, but evolution itself cannot be found by blind search, then I ask those who are currently persuaded by this argument to consider the critique below.

First of all, I think Dembski has managed to mislead himself by boxing himself into the “search” metaphor, without clarifying who, or what, is supposed to be doing the searching.  When I am struggling to understand an argument, and unclear as to whether the flaw lies in my own understanding or in the argument being made, I like to translate the argument into E-prime, and see whether, firstly, it still makes sense, and secondly, whether it leaves out some crucial information (information that has been “smuggled out” of the argument, as it were :)).  In A Search for A Search, Marks and Dembski write:

A search’s difficulty can be measured by its endogenous information defined as

I = −log₂(p)

where p is the probability of a success from a random query. When there is knowledge about the target location or search space structure,

Translated into E-prime (avoiding the verb to be and the passive voice), this becomes:

We can measure the difficulty of a search by its endogenous information, which we define as

I = −log₂(p)

where we represent the probability that the searcher will find the target using a random query as p. When the searcher knows the location of the target, or a way to find the location of the target, the probability of finding it will increase, and we define this increase in probability as the active information [possessed by the searcher].

See what I did there? E-prime forces the writer to specify the hidden doer of each action, and, in this case, reveals that the “active information” is that possessed by the searcher at the start of the search. But who is the searcher?  And how does the information transfer take place?
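To make the bookkeeping concrete, here is a minimal worked example in Python (my own illustration, not code from the papers; the numbers are invented, and active information is taken here to be the log-ratio of the informed and blind success probabilities, which is how the “increase in probability” above is measured):

```python
import math

def endogenous_information(p_blind):
    """I = -log2(p): the difficulty of the search under blind (uniform) sampling."""
    return -math.log2(p_blind)

def active_information(p_blind, p_informed):
    """log2(p_informed / p_blind): how much the searcher's knowledge
    raises the probability of success, in bits."""
    return math.log2(p_informed / p_blind)

# Toy numbers: a target occupying 1 cell of a 2**20-cell search space,
# and a searcher whose knowledge narrows the space to 1024 cells.
p_blind = 1 / 2**20
p_informed = 1 / 1024

print(endogenous_information(p_blind))           # 20.0 bits
print(active_information(p_blind, p_informed))   # 10.0 bits
```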

Now, the searcher doesn’t need to be an Intelligent (i.e. intentional) Agent.  It could be a mechanical algorithm, or physical system, that results in something that is special (if not formally specified) in some way (a cool pattern, a functioning organism, a novel feature), and also something that is unlikely to just turn up by “blind search”.  So in keeping with the mathiness of the paper, we’ll denote the Searcher as S, and the special result (which could be one of many possible results that we’d consider Special) as R (I want to avoid T for Target, which I think is another siren that may lure us to the rocks instead of the deepwater channel).

So what would it actually mean for S to be in possession of Active Information that would make R more likely?  Well, let’s take some concrete examples.  Let’s say S is Lizzie looking for her car keys, and the Search Space is her house.  If Lizzie pats every square inch of surface in her house with her eyes closed, and with no clue as to which surface she might have left her keys on, her probability of finding them on any given pat will be 1 divided by the number of square inches of surface in her house. But if she knows she left them on the kitchen table (i.e. knows the location, or a subset of possible locations that must contain the location), or knows that if she thinks back she can remember what she did when she last came in from the car (knows how to acquire knowledge of the location), her probability of finding them will go up considerably; in other words, the number of places she has to pat will be quite small, so 1 divided by that number will be relatively large.

But let’s say R is not an object, like keys, but some kind of physical pattern or configuration with some rare property, for example, a run of 500 coin tosses in which the product of the runs-of-heads is large (i.e. one of a rare and specified subset of all possible runs of 500 coin tosses), as in my thread here. We can compute (as we do in that thread) just how large I is for patterns of coin tosses with a given magnitude of product-of-runs-of-heads by computing how rare those patterns are when generated by real tosses of a fair coin, and can regard R as any pattern with an I value over some threshold.  So what would we have to do to make R more likely?  Dembski and Marks, quite reasonably, say that anything we do to make R more likely will itself be something less likely than a simple coin-tosser (and coin-tossings are fairly common, therefore fairly likely). Well, we could get a human being to sit down and work out a few runs that had high I values, and manually place them on the table.  In which case, presumably, Dembski and Marks would argue that the human “searcher”, S, would now be in possession of Information commensurate with, or greater than, the Information I.  Which I am happy to accept (whether such an agent is rarer than a coin-tosser, I don’t know, but probably, and in any case in this scenario we are positing something – an intelligent human being – which may itself be much less probable than, say, some simple physical process by which coin-like objects regularly fall off cliffs).
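For readers who like to see the sums, here is a minimal Monte Carlo sketch of the kind of calculation described above (my own reconstruction in Python, not the code from the original thread; the threshold and sample size are arbitrary illustrative choices):

```python
import random
import math

def product_of_runs_of_heads(run):
    """Multiply together the lengths of all maximal runs of heads (1 = heads, 0 = tails)."""
    product, current = 1, 0
    for toss in run:
        if toss:
            current += 1
        elif current:
            product *= current
            current = 0
    return product * current if current else product

def estimate_I(threshold, n_tosses=500, n_samples=20_000):
    """Estimate I = -log2 P(product-of-runs-of-heads >= threshold) for fair-coin runs."""
    hits = sum(
        product_of_runs_of_heads([random.randint(0, 1) for _ in range(n_tosses)]) >= threshold
        for _ in range(n_samples)
    )
    return float('inf') if hits == 0 else -math.log2(hits / n_samples)

# An arbitrary illustrative threshold; the rarer the pattern, the larger I.
print(estimate_I(threshold=10**30))
```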

But let’s now say that instead of Lizzie using her intelligence to work out a good run, and then laying it down manually, Lizzie wants to set up a system that will, all by itself, with high probability, result in – find – a run of coin-tosses with high I.  To do this, she decides (as I did) to write an evolutionary algorithm, in which the starting population consists of runs of 500 coin-tosses generated by a quasi-coin-tossing method (each successive coin toss independent of the previous one, with .5 probability of each being heads), i.e. runs that are the result of “blind search”; but on each iteration, the members of the population of runs “reproduce”, with random mutations, and those runs with the lowest product-of-runs-of-heads are culled, leaving the higher performers in the game for the next round.
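A minimal sketch of this kind of evolutionary algorithm, in Python (my own reconstruction, not the original code; the population size, mutation rate and culling rule are arbitrary illustrative choices):

```python
import random

POP_SIZE, RUN_LENGTH, GENERATIONS, MUTATION_RATE = 100, 500, 200, 0.005

def product_of_runs_of_heads(run):
    """Fitness: product of the lengths of all maximal runs of heads (1 = heads)."""
    product, current = 1, 0
    for toss in run:
        if toss:
            current += 1
        elif current:
            product *= current
            current = 0
    return product * current if current else product

def point_mutate(run):
    """Flip each toss independently with a small probability."""
    return [1 - t if random.random() < MUTATION_RATE else t for t in run]

# Starting population: pure "blind search" -- fair-coin runs.
population = [[random.randint(0, 1) for _ in range(RUN_LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Each run "reproduces" with mutation; the lowest scorers are then culled,
    # so only the top POP_SIZE of parents-plus-offspring survive to the next round.
    offspring = [point_mutate(run) for run in population]
    pool = population + offspring
    pool.sort(key=product_of_runs_of_heads, reverse=True)
    population = pool[:POP_SIZE]

print(product_of_runs_of_heads(population[0]))   # best product found so far
```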

Clearly this is an informed search.  Lizzie has constructed a “fitness landscape” in which runs that have more of the desired feature (high product-of-runs-of-heads) are “fitter” (more likely to breed) than ones with less of it.  So we can picture this “fitness landscape” as a histogram, in which there are a great many short bars, representing runs with smallish products-of-runs-of-heads; a few very short bars, representing runs with extremely small products-of-runs-of-heads; and a range of taller bars, with the tallest bar representing the run with the maximum possible product-of-runs-of-heads.  However, that is not all she has to do.  So far, the fitness landscape has no specified structure. The bars are all jumbled up, with high ones next to low ones, next to medium-sized ones.

This is what the fitness “landscape” would look like if the randomly mutated offspring of each successful run had a product-of-runs-of-heads that was unlikely to resemble that of the parent run. The fitness landscape is “rugged”, and R will remain improbable.

Note that in this example, we have both a genotype – which is the run of coin-tosses itself – and a phenotype – which is the product-of-runs-of-heads.  The fitness criterion applies only to the phenotype, and it turns out that in this system quite similar-looking genotypes can have very different products-of-runs-of-heads, resulting in a very rugged fitness landscape.

This means that the original Searcher, Lizzie, the Intelligent Designer, needs not only to Design a fitness function (a system in which the closer a phenotype is to R, the more likely it is to reproduce), i.e. the fitness histogram, but also something that will arrange the bars of the fitness histogram in such a way that the population of runs-of-coin-tosses can “move” from the lower bars to the higher – make it into a smoother, less rugged, “landscape”. To do this, she must ensure that the ways in which offspring can differ genetically from their parents include ways in which they tend to inherit not just the genotype but the phenotype.

And it turns out that point-mutations are not very good at doing this.  So, being an Intelligent Designer, Lizzie thinks again, and adds a different kind of mutation – she includes adding an extra coin-toss at random positions in the run, and then trimming the end to keep it the same length.  Now, it turns out, offspring are much more likely to resemble their parents phenotypically as well as genotypically, and the fitness landscape histogram has arranged itself so that similar-height bars tend to be adjacent to each other, and the “landscape” is quite smooth.
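One plausible way to code the insert-and-trim mutation just described (an illustrative sketch only; the original implementation may well differ):

```python
import random

def insert_and_trim(run):
    """Insert one random toss (1 = heads, 0 = tails) at a random position,
    then drop the last toss so the run keeps its original length. Everything
    after the insertion point shifts along by one place, which tends to
    preserve the parent's run structure better than flipping single tosses."""
    position = random.randrange(len(run))
    return run[:position] + [random.randint(0, 1)] + run[position:-1]
```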

However, there are still deep valleys between peaks, and populations tend to get “stuck” on these local maxima – they find themselves on a high-ish histogram bar, but the only route to a yet higher bar is across a valley.  In other words, a given genotype may be quite fit, but the only way its descendants can ultimately be fitter is if some of them are less fit, and with the culling of the unfit being fairly ruthless, this is a low-probability event.  In practice, in this example, this is because runs with lots of three-head and four-head segments are quite fit, but converting a genotype with lots of threes and fours into a much fitter one with mostly fours, by point mutation or insertion only, too often involves first breaking up some of the fours into a one and a three, which lowers the fitness.

So she thinks yet again, and now she includes snipping out pairs of segments and swapping their positions; duplicating segments, in which one segment is repeated, over-writing another segment; and deleting a segment entirely and replacing it with random heads or tails.

And lo and behold, this new system tends to produce R much more readily (with higher probability), and not only that, the very highest possible peak is reliably achieved. This set of mutational methods has resulted in a fitness landscape in which there are at least some sets of bars in the histogram that form a series of steadily ascending steps, from the low bars to the very tallest bar of all.  The peaks are high, the landscape is smooth, and the valleys are shallow.
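A sketch of what this richer set of mutation operators might look like in code (again my own guess at how to implement the operators described; the segment length is an arbitrary illustrative choice):

```python
import random

SEGMENT = 10   # arbitrary illustrative segment length

def swap_segments(run):
    """Snip out two non-overlapping segments and swap their positions."""
    run = run[:]
    i, j = sorted(random.sample(range(len(run) - SEGMENT), 2))
    if j - i < SEGMENT:            # segments would overlap; leave the run alone
        return run
    run[i:i + SEGMENT], run[j:j + SEGMENT] = run[j:j + SEGMENT], run[i:i + SEGMENT]
    return run

def duplicate_segment(run):
    """Repeat one segment, over-writing another segment of the same length."""
    run = run[:]
    src = random.randrange(len(run) - SEGMENT)
    dst = random.randrange(len(run) - SEGMENT)
    run[dst:dst + SEGMENT] = run[src:src + SEGMENT]
    return run

def randomise_segment(run):
    """Delete a segment entirely and replace it with random heads or tails."""
    run = run[:]
    start = random.randrange(len(run) - SEGMENT)
    run[start:start + SEGMENT] = [random.randint(0, 1) for _ in range(SEGMENT)]
    return run
```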

So the take-home message for me was: my successful fitness landscape consisted of three Designed elements –

  1. the fitness criterion, by which fitness is defined, which is the same as defining R;
  2. the relationship of genotype to phenotype, which ensures that fit parents tend to have fit offspring, and makes the landscape smooth;
  3. the variance-generation mechanisms, which ensure that the valleys are shallow.

Now, Dembski and Marks would presumably say that my final set-up, with its several variance-generating mechanisms, which reliably produced R (i.e. made R a high-probability result), itself contains at least as much information as was represented by R patterns when R could only be generated by old-fashioned coin-tossing runs.

And we know that that Active Information came from Lizzie, an Intelligent Designer.  I was the original Searcher, possessing Information as to how to find R, and I transferred that Information into my fitness landscape, which in turn became the Searcher, and which reliably led to R.

But is an Intelligent Designer the only possible source of such information?

Let’s imagine that some future OOL researcher, let’s call her Tokstad, discovers a chemical reaction, involving molecules known to be around on the early earth, and conditions also likely to be present on the early earth, that results in a double-chain polymer of some sort that tends to split into two single chains under certain cyclical temperature conditions, whereupon those single chains attract monomers in the soup and become double chains again, so that there are now two identical double-chain polymers where before there was one.  And let’s say, moreover (as we have some clues here), that this soup also contains lipids that form into vesicles that tend to expand, become unstable, then divide, and which, it being soupy and all, enclose some of these self-replicating polymers.  Let’s further suppose that the polymers don’t replicate with absolute fidelity – bits get added, bits get chopped off, shorter chains sometimes join up to form longer chains, etc. – and finally, let’s suppose that certain properties of some of these varied chains (length, constituent monomers) affect osmotic pressure differences between the vesicle and the soup, and/or the permeability of the vesicle to monomers in the ambient soup, affecting the vesicle’s chances of dividing into two, and of its enclosed polymers self-replicating successfully.

It’s quite a big suppose, and possibly impossible, but not beyond the bounds of chemically plausible science fiction.

But here is the point:  IF such a system emerged from a primordial soup, it would be a system in which:

  1. There is a fitness function (some polymer-containing vesicle variants replicate more successfully than others).
  2. There is a link between genotype and phenotype (similar polymers have similar effects on the properties of the vesicle).
  3. There are several different ways in which genetic variance can arise (duplicating, adding, deleting, replacing).

In other words, we would have, potentially, a system in which, according to Dembski, Active Information is located – in the form of a smooth fitness landscape, shallow valleys, and high fitness peaks representing vesicles with high I (unlikely to emerge spontaneously were the chemistry to be something that did not provide these parameters).

So we have an Informed Searcher and an R, but no Lizzie – just the fitness landscape itself, spontaneously arising from primordial chemistry.  So how do we measure how much Information that Search contains?  Well, that depends on how improbable the conditions that generated the components of the fitness landscape – the polymers, the vesicles, the cyclical temperature changes, the chemical properties of the atoms that make up the molecules themselves – actually are.  In other words: in how many of all possible worlds might such conditions exist?

AND WE HAVE NO WAY TO CALCULATE THIS.

We do not know whether they are the result of an extraordinary fluke (or Intelligent Design) by which, out of all possible universes, one in which this could happen was the one that eventuated, or whether this is the only possible universe, or whether an infinite number of universes eventuated, of which only those with properties that give rise to polymer chemistry result in intelligent life capable of asking how intelligent life itself originated.

But that’s not an argument for ID from probability and statistics, it’s an argument for ID from metaphysics.

Alternatively, if Dembski and Marks are relying on Tokstad not discovering conditions from which fitness landscapes can emerge in which ever-fitter self-replicators are the result, then they have backed a perfectly falsifiable horse.

But the important point is that observing effective fitness landscapes in the natural world does not, and cannot, tell us that there is an external source of Information that must have been transferred into the natural world. All it tells us is that the world that we observe has structure. There is no way of knowing whether this structure is probable or massively improbable, and therefore no way, by Dembski’s definition, of knowing whether it contains Information.  It seems to me it does, but that’s because I don’t define Information as something possessed by an event with low probability, and therefore don’t attempt to infer an Intelligent Designer from data I don’t, and can’t, possess.

 

153 thoughts on “Searching for a search”

  1. This is a nice example of the fact that processes in nature have underlying rules.

    If we are to model those processes in nature, Dembski and Marks have no rationale that would forbid us from incorporating those rules in our computer simulations. It is not cheating or “putting in the answer” to simulate on a computer the mechanisms we have learned from the study of nature.

    Furthermore, what possible justification can Dembski and Marks offer for calling a sample space “endogenous information” when we don’t know some rules and have to use a uniform random sampling scheme on it? Why the use of “exogenous information” when we discover some rules that narrow the search? And why is the difference between “endogenous information” and “exogenous information” called “active information” when we employ the rules?

    What does this add to our understanding of a search? If we learn that a 4-digit combination lock is opened by odd numbers, why not simply say the sample space has been cut in half? Why load up the issue with terms like “information?”
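    To put numbers on the lock example (a quick illustrative calculation, assuming a 4-digit combination read as a number from 0000 to 9999):

```python
import math

total = 10**4            # all 4-digit combinations
odd_only = total // 2    # combinations that are odd numbers

endogenous = -math.log2(1 / total)                 # ~13.3 bits: "the lock is hard"
active = math.log2((1 / odd_only) / (1 / total))   # exactly 1 bit: "the space was halved"
print(endogenous, active)
```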

    I would suggest that the slathering on of such terminology obfuscates the problem and jerks it onto a playing field where it doesn’t belong.

    Even further – and let’s use a much simpler example of a limp rope with each end tied to separate suspension points in a gravitational field – why are we not allowed to use a known fact about such systems; namely, that they will take on a configuration that minimizes potential energy?

    According to Dembski and Marks, such “exogenous information” would effectively be cheating if we incorporated the second law of thermodynamics into a computer simulation of the problem. Dembski and Marks would have the computer program randomly regenerating the rope’s configurations over and over and discovering that the program never converged to anything. In other words, they would be constructing a problem in which there was no second law of thermodynamics in the universe.

    Elizabeth’s example is analogous to incorporating something that is known about how the universe behaves into a computer simulation.

    Are we not allowed to study nature and make use of the processes and laws of nature in our computer simulations? That is apparently the lesson we are being taught by Dembski and Marks; or at least we are apparently supposed to believe we are cheating.

    Our knowledge of the processes in nature spans the extremes: from precise mathematical rules, where we can use tools like Newton-Raphson methods to find solutions, to complex stochastic processes with known underlying principles governing interactions, where we have to simulate using random number generators and the rules that pertain to the interactions.

    Knowing those rules is not cheating; it is IMITATING what nature does and watching the consequences fall out. The more precisely our results match those found in nature, the better we understand the underlying rules; and THAT is the goal.

    I would suggest that Dembski and Marks don’t seem to have much of a clue about why we in the science community do simulations on computers.

  2. Mike Elzinga:

    “This is a nice example of the fact that processes in nature have underlying rules.”

    No IDist would deny it.

    Rules are arbitrary. Processes are teleological.

    None of this fits the materialist claims about nature.

  3. Steve:
    Mike Elzinga,
    … then you can lay claim to understanding science better than Marks.

    Hee hee. Robert J Marks II, the christian engineer who thinks that he has a valid explanation of the truth of Genesis if he only assumes the atmosphere was created opaque. Hee hee.

    About the only way he could be more incompetent is if he were following Ann Gauger in claiming that a literal Adam and Eve have not been dis-proven by science. If you make some peculiar assumptions. And if you hold your head and squint exactly so. And if you ignore that it’s not 1977 or 1995 but 2013 and science has continued to shrink the gaps for your god in the observable natural world.

    Should stick to his engineering and keep his christianity out of his so-called science. Fool.

    I thought you IDists were supposedly not confusing your anonymous Designer with the christian god. So why are you so quick to make heroes out of any christian engineering professor who will sit still for it? Why are your heroes all christian? Why default to the christian god?

    Who is the designer, honestly?

    And, to the subject of this thread, who created the information in the fitness landscape (which Dembski now seems to think allows evolutionary processes to search successfully) ? Same Designer as the one who created the universe as a whole, the christian god, right?

    Why do you think the christian god would tolerate working through evolutionary processes over 3 billion years to finally arrive at the one true “first couple” Adam and Eve ? IF not god, who is the Designer?

  4. I’ve moved a few posts to Guano. As always, this was not a moral judgement but an organisational one. Guano is useful stuff, in the right place.

  5. Mung:
    Mike Elzinga:

    “This is a nice example of the fact that processes in nature have underlying rules.”

    No IDist would deny it.

    Rules are arbitrary. Processes are teleological.

    What do you mean by “processes are teleological”? Explanations can be teleological, but how can a process be teleological?

    Do you mean that when we consider something a “rule” we are not attributing purpose, but when we consider something a “process” we are attributing purpose?

    Or that if something can be considered a “process” it must have a purpose?

    If so, that would seem to be a circular claim.

    None of this fits the materialist claims about nature.

    What claims are you talking about? Would you like to comment on my OP?

    So your answer to Dembski’s argument that Darwinists “suppose a can opener” is to suppose a can opener, and then beg the question of where the can opener came from back to a multi-universe, which is just another can opener that you attempt to remove from reasoned debate by saying there’s no way to tell if your supposed can-opener is plausible or not?

    It doesn’t matter where you substitute a blind search for a target-information search, the problem is the same; blind searches are not plausible explanations for this kind of specified target outcome. If universes are being generated that “find” a universe like ours, the universe-generating mechanism is at least as unlikely to generate your self-replicating object as the product of all of its universes as it is unlikely for our one universe to generate it when you take into account the full purchase price of the information in question. The information required for success would still have to be included in the universe-generating mechanism.

    A universe-generating mechanism can as easily be structured so that it cannot produce any universes that have any life at all, even though it could produce countless universes. Also, why is the mechanism producing universes at all? Why doesn’t it just produce boltzmann brains, or toasters, or random splotches of something we have no concept of?

    Suppose a mechanism that just happens to produce universes, and just happens to produce kinds of universes that may be organized in a way that support life.

    Suppose a can opener.

  7. It would be easy to construct a scenario in which your parents never met, or in which the specific egg and sperm that produced you never got together.

    The sequence of events that produced you gets more and more improbable as you add more and more necessary events.

    Why does it seem so compelling to calculate the odds against something after it has happened?

  8. William J. Murray:
    So your answer to Dembski’s argument that Darwinists “suppose a can opener” is to suppose a can opener, and then beg the question of where the can opener came from back to a multi-universe, which is just another can opener that you attempt remove from reasoned debate by saying there’s no way to tell if your supposed can-opener is plausible or not?

    Well, no, that’s not what I’m saying. Let’s try a different approach:

    What I’m saying is that Dembski now appears to be saying that given a fitness landscape with the appropriate parameters (smoothness, high-dimensionedness, etc.), what would have been hard to find with blind search is now easy. The Target is now no longer high in Information (computed as the negative log of the probability of finding it by blind search), but the landscape parameters are; hence Information is Conserved, and the Search for a Target has now become the Search for a Search.

    And he seems to be arguing that that problem is just as intractable – that an appropriate fitness landscape is at least as unlikely to be found by blind search as the original Target was.

    And I am saying that we are not faced with a scenario in which that fitness landscape has to be found by blind search. Sure, the fitness landscape could have been put there by an Intelligent Designer (as I did in my exercise back when), but we can also go a long way towards explaining how a fitness landscape with the requisite properties could emerge from chemistry.

    Now, Dembski could, and does, say – but what is the chance of finding an appropriate chemistry (one from which fitness landscapes emerge, from which complex life emerges)? And that is what I am saying we cannot compute.

    He appears to think we can, and because he thinks the answer is very small, we must infer Design.

    It doesn’t matter where you substitute a blind search for a target-information search, the problem is the same; blind searches are not plausible explanations for this kind of specified target outcome.

    Absolutely agreed.

    If universes are being generated that “find” a universe like ours, the universe-generating mechanism is at least as unlikely to generate your self-replicating object as the product of all of its universes as it is unlikely for our one universe to generate it when you take into account the full purchase price of the information in question.

    Well, that is mere assertion. To compute a probability (in the sense that Dembski uses the term, i.e. in a frequentist sense) you need a distribution – you need a sample of data. That data is not available. Ergo, we cannot compute the probability, so we cannot say whether it is “likely” or “unlikely”.

    So we can neither rule in, nor rule out, a Designer.

    The information required for success would still have to be included in the universe-generating mechanism.

    Yes. But as the information is computed as the negative log of the probability, we don’t know whether that information is 0 bits (ours is the only possible universe) or an arbitrarily large number of bits (ours is the only life-generating universe out of an infinite number of possible universes); nor do we know the number of “trials” (the number of actuated universes).

    A universe-generating mechanism can as easily be structured so that it cannot produce any universes that have any life at all, even though it could produce countless universes.

    Well, we don’t know that, but it could be the case, I agree.

    Also, why is the mechanism producing universes at all? Why doesn’t it just produce boltzmann brains, or toasters, or random splotches of something we have no concept of?

    Dunno. As I said, that’s metaphysics, not probability. I’m happy with an argument for a Prime Designer from metaphysics (well, moderately), it’s the argument from probability (Dembski’s) that I think is fundamentally flawed.

    If I was up against the wall, and had to choose an argument for a Prime Designer, I’d actually go for something like your panpsychism, not Dembski’s.

    Suppose a mechanism that just happens to produce universes, and just happens to produce kinds of universes that may be organized in a way that support life.

    Suppose a can opener.

    Non sequitur. We are close to concluding that ours is a universe that can produce life spontaneously. We don’t know whether it is the only possible universe, or a handpicked one.

    That doesn’t make Dembski’s conclusion wrong, but it does make his inference fallacious (it is of course possible to come to a correct conclusion from fallacious reasoning, but that does not render the reasoning correct).

  9. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks.

    The claim relies on Dembski’s paper “The Search for a Search”. Unfortunately, the model for a search which is used in that paper isn’t applicable for most of his examples (like Easter Egg Hunt, Dawkins’s Weasel, etc.), as his search strategies rely only on the elements of the search space, but not on an input by an oracle, search function, someone shouting “warmer”.

    If you try to create a search strategy for the “Easter Egg Hunt” according to Dembski’s and Marks’s model, you’ll find that such a strategy is independent from being informed that your move brought you closer or not to your goal.

    tl;dr: Dembski’s and Marks’s model of regress doesn’t work for nearly all searches.

  10. DiEb:
    Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks.

    The claim relies on Dembski’s paper “The Search for a Search”. Unfortunately, the model for a search which is used in that paper isn’t applicable for most of his examples (like Easter Egg Hunt, Dawkins’s Weasel, etc.), as his search strategies rely only on the elements of the search space, but not on an input by an oracle, search function, someone shouting “warmer”.

    If you try to create a search strategy for the “Easter Egg Hunt” according to Dembski’s and Marks’s model, you’ll find that such a strategy is independent from being informed that your move brought you closer or not to your goal.

    tl;dr: Dembski’s and Marks’s model of regress doesn’t work for nearly all searches.

    Could you explain the term “oracle”? In fact, could you translate your whole post out of geek? I’d be terribly grateful!

  11. Biology is such that every move must be from one egg to another. There is no in-between space.

    There are moves to non-egg spaces, but they do not survive or reproduce.

    Any modelling of biology must assume that the egg space supports such moves. That’s what research is about.

    Most ID advocates, particularly Axe, insist that one must move continually to better eggs, but that isn’t what is observed.

    In any case, one does not get warmer or colder.

  12. Elizabeth,

    The choice is between deliberate fine tuning by an intelligent agency, of organism and/or landscape, or chance tuning. To our current knowledge, chance tuning is incapable of accounting for such fine-tuned functional biological targets – this being the known search space of chemistry as we know it, and the known search space of biological elements (proteins) as we know them.

    Dembski shows that even if the landscape provides such information, that only begs the question.

    To overcome this, you (and others) are willing to reach outside of the known universe, into the complete unknown, and say that “the unknown” could account for why the search for a search is “less costly” than what we know about the cost of such searches in this universe.

    Even though we know of an existent causal agency in our universe that can do this very thing, you and others would apparently rather point to the utterly unknown and say that because Dembski cannot account for the complete unknown, our inference to best explanation (a cause known to be adequate – intelligence) is unfounded?

    Sorry, but your bias is showing. When it comes to (1) the pure chance of an unknown (and perhaps unknowable) commodity VS (2) a known causal agency that routinely generates and inputs this very kind of search information, there really is only one logical conclusion – unless one is serving some other ideological master.

    For now, the best conditional explanation for such features is intelligent design.

  13. I won’t try to anticipate what DiEB will say, but many of us have written Weasel-like GAs, and it is obvious that every generation in a population must be viable and must reproduce.

    Dembski’s warmer and colder metaphors are distortions. They assume there is some agency that knows the location of a target.

    Actual biology moves directly from target to target. In fact there is evidence that most point mutations on protein coding sequences are neutral. And higher level mutations don’t disturb the established and proven sequences; they merely move them around on the larger sequence.

    I still think that Braille is a good metaphor for evolution. Exploration rather than search. Finding the surfaces of the landscape rather than searching for a target. And the exploration metaphor fits easily into a multi-dimensional landscape.

  14. Mike Elzinga: This is a nice example of the fact that processes in nature have underlying rules.

    One of the most important underlying rules is proximity. The universe is ordered such that like things tend to clump together; in space, such as matter to form planets, water in oceans, photons streaming from a star; or the structures of proteins in sequence-space.

  15. IOW, Elizabeth, if per your argument Dembski cannot suppose that a can opener explanation doesn’t exist outside of our known universe, then you certainly cannot suppose that it does. Dembski is the one that is using the known to reach a best explanation inference; you are appealing to the unknown.

    IOW, in this case, you are appealing to magic, and Dembski is making an inference to best currently known adequate explanation.

  16. The concept of a “mechanism” that produced our universe is a misconception about what is meant by the expression “multiverse.”

    There was no “mechanism” in that scenario; our universe emerged out of a multitude of possibilities, many of which would not have produced sentient life, some of which could have other “periodic tables” that resulted in “sentient life” not as we would know it – and one of which was our universe.

    We happen to be in our universe, so we tend to think we are the target of a search. In the multiverse scenario, we weren’t; other sentient beings in other universes might also think they were the target of a search, but neither would they have been such a target.

    Elizabeth’s example has also provided a specific scenario for why fitness landscapes tend to be smooth; and it is a subset of a much broader principle one finds in the processes of sampling and in nature.

    The field of signal and image processing has a nice mathematical description that explains the process of smoothing. It comes under the heading of “dithering” or, equivalently, under the concept of “convolution.” Look at convolution first.

    If we are sampling a signal feature in either time or space, the width of the sampling window will give us a limit on how much detail we will see in the feature we are sampling. If the window is narrow – either in time or in spatial extent – we will see sharp features with sharp, distinct edges. If the window is wide, the features will have rounded and smoothed edges.

    From the “dithering” perspective, we are looking at the features of the signal in the Fourier transform domain; and the idea that is involved here is what electrical engineers and those in the signal and image processing field call the “shift theorem.”

    If a signal S(x) – let’s use a spatially distributed signal – has a Fourier transform F{S(x)}, then a spatially shifted signal S(x – a) has a Fourier transform e^{ika}F{S(x)}, where k is a spatial frequency (e.g., lines per mm).

    In other words, the Fourier transform of the spatially shifted signal is multiplied by a phase factor; and here is the trick: jiggling the sampling back and forth over the signal will produce larger phase shifts for the higher spatial frequencies for a given shift a. This “washes out” (they tend to phase-cancel) the higher spatial frequencies, leaving only the lower frequencies. When we inverse Fourier transform the dithered result, we get an image with smoothed edges and all the fine detail gone.
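    A quick numerical check of the shift theorem (a minimal Python sketch; the sign of the phase factor follows numpy’s DFT convention, and the shift here is an integer circular shift):

```python
import numpy as np

N = 256
n = np.arange(N)
signal = np.exp(-0.5 * ((n - 60) / 4.0) ** 2)    # a sharp, narrow feature
a = 17                                           # shift by 17 samples

shifted = np.roll(signal, a)                     # S(x - a), circularly shifted
k = np.arange(N)
phase = np.exp(-2j * np.pi * k * a / N)          # the predicted phase factor

# The DFT of the shifted signal equals the phase factor times the DFT of the original.
print(np.allclose(np.fft.fft(shifted), phase * np.fft.fft(signal)))   # True
```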

    So both perspectives give the same result.

    What does this have to do with Elizabeth’s example and with nature in general?

    Elizabeth provided a specific example of a set of processes in which more and more “dithering” was going on beneath the surface of a “search.” If we wanted to land on some specific phenotype with a random search, we would be sampling with an extremely narrow aperture, and the “image,” which is the landscape in her example, would have extremely sharp features.

    However, there is “dithering” going on at the “genome” level that smears out the sampling process giving us a wider “aperture” that is scanning the fitness landscape; therefore the landscape is smoothed out.

    Variation and selection can be looked at as either a dithering process or as a wide aperture scanning over an image with fine details; these pictures are Fourier transforms of each other.

    These kinds of processes take place at all levels of complexity.

  17. William J. Murray:
    Elizabeth,

    The choice is between deliberate fine tuning by an intelligent agency, of organism and/or landscape, or chance tuning.

    OK, depending on what you mean by “chance”.

    To our current knowledge, chance tuning is incapable of accounting for such fine-tuned functional biological targets – this being the known search space of chemistry as we know it, and the known search space of biological elements (proteins) as we know them.

    I’m not sure what you are saying here. I think you’d better explain what you mean by “chance tuning”.

    Dembski shows that even if the landscape provides such information, that only begs the question.

    What question does it beg? As I read him, Dembski is saying: OK, sure, a smooth and high-dimensioned fitness function will facilitate the emergence of “targets” that would not be found by blind search, but a smooth and high-dimensioned fitness function is itself unlikely to be found by blind search.

    My point is that chemistry itself has properties that make the finding of such a fitness landscape perfectly plausible, and vastly more probable than that such a fitness landscape would emerge from, say, grey goo.

    So sure, that sets back the question to: why chemistry and not grey goo? And my point is that the answer to that lies in metaphysics, not probabilities, because we can’t actually estimate the probabilities. Do you see what I am getting at?

    To overcome this, you (and others) are willing to reach outside of the known universe, into the complete unknown, and say that “the unknown” could account for why the search for a search is “less costly” than what we know about the cost of such searches in this universe.

    You are extrapolating way beyond anything I have actually said. All I have said is that we cannot know how costly the search is for a non-grey-goo universe, because we are only in possession of one exemplar.

    Even though we know of an existent causal agency in our universe that can do this very thing, you and others would apparently rather point to the utterly unknown and say that because Dembski cannot account for the complete unknown, our inference to best explanation (a cause known to be adequate – intelligence) is unfounded?

    “Intelligence” is only a “causal agent” if you specify your level of analysis. Intelligence is a property of an organism. So if my cat knocks my glasses off my bedside table to wake me up, I could say: the cat’s intelligence did this (she knows I don’t like her doing it, and will wake up and listen to her demands for breakfast); I could also say, at a different level of analysis: my cat did this. I could also say that my cat’s paw did this. Or I could say that evolution did this. Or that the Big Bang did this. They aren’t alternatives, they are simply causal explanations at more proximal or more distal levels.

    In my view. I realise your view is different – you think that intelligence, or mind, is some kind of force that can move matter. I don’t, so we will disagree on this. But that’s an assumption on your part, and not one I share. I think that entities exist that have the property of intelligence, and that property enables them to do certain kinds of things (design things; intend things). I don’t see any reason to postulate bodiless intelligence, or to infer that it exists. And to assume it exists, and then to use it to explain observations which we then use to infer that it exists, would seem to be assuming the consequent.

    Sorry, but your bias is showing. When it comes to (1) the pure chance of an unknown (and perhaps unknowable) commodity VS (2) a known causal agency that routinely generates and inputs this very kind of search information, there really is only one logical conclusion – unless one is serving some other ideological master.

    What do you mean by “pure chance”, William?

    For now, the best conditional explanation for such features is intelligent design.

    Well, no.

  18. I’ll put this brief note in as a separate clarification for those not familiar with sampling windows.

    There is a common misconception about wide sampling windows that confuses students encountering these notions for the first time. They think of a wide slit lying on top of an image with fine features and think that the wide slit allows them to see the fine detail.

    But a sampling window is a capture of everything within that frame width; it is all averaged together. So a steep descending line, for example, would be sampled as the average of all of its levels. Move the aperture a little bit and sample again, and we get a slightly different average, and so on.

    So the sharp descent of the line is lost in the sampling by a wide aperture; i.e., it is smoothed out.
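    A toy numerical illustration of the same point (the window widths are arbitrary illustrative choices):

```python
import numpy as np

signal = np.concatenate([np.ones(50), np.zeros(50)])    # a sharp descending edge

narrow = np.convolve(signal, np.ones(3) / 3, mode='same')     # narrow aperture
wide = np.convolve(signal, np.ones(21) / 21, mode='same')     # wide aperture

# The narrow window keeps the edge steep; the wide window averages everything
# inside its frame, turning the step into a gentle ramp.
print(np.round(narrow[45:55], 2))
print(np.round(wide[45:55], 2))
```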

  19. Humor me a bit. Is it not possible to interpolate finer detail from multiple images that overlap? Is this not what NASA does with multiple low resolution images?

  20. Is this not what NASA does with multiple low resolution images?

    It’s called “synthetic aperture processing.” In this case the image is sharpened by summing the features of overlapping images in the way a lens would do it.

    There are several techniques that are used, depending on how the images are captured and on the wavelengths and medium in which these waves travel.

    Side-looking radar and sonar build up large apertures synthetically by shifting and summing returning echoes as a lens would do. Ultrasonic imaging sharpens images using the same techniques.

    In the satellite images, features from separate images are lined up and summed. This has a couple of advantages; it removes random noise from the images and it enhances the edges of the signal to sharpen the image.

    I worked in this area for a number of years. Some of my ultrasonic algorithms are out there in use.

  21. William J. Murray:
    IOW, Elizabeth, if per your argument Dembski cannot suppose that a can opener explanation doesn’t exist outside of our known universe, then you certainly cannot suppose that it does. Dembski is the one that is using the known to reach a best explanation inference; you are appealing to the unknown.

    I do think you are fundamentally missing my point, William. Can I make it absolutely clear that I don’t see anything wrong with the idea that the structured universe we observe was the deliberate and intended choice of a creator being? It would entail the assumption that intentional action is possible to a non-embodied entity, but that’s neither verifiable nor falsifiable, and possibly reasonable.

    What I am saying is that Dembski’s argument is faulty, not that his conclusion is incorrect. And it’s faulty for a very simple reason: ultimately, he has moved back the probability of a search for a search for a search to the origin of the universe itself, and at that point, we have no way of calculating the probability of such a universe under the assumption of blind search.

    So we cannot estimate the information content of our observed structured universe by Dembski’s method. So we cannot use that metric to infer a Designer.

    IOW, in this case, you are appealing to magic, and Dembski is making an inference to best currently known adequate explanation.

    No, I am appealing to nothing. I am simply pointing out the fallacy in Dembski’s argument.

  22. petrushka:
    It would be easy to construct a scenario in which your parents never met, or in which the specific egg and sperm that produced you never got together.

    The sequence of events that produced you gets more and more improbable as you add more and more necessary events.

    Why does it seem so compelling to calculate the odds against something after it has happened?

    Heh. What are the odds that a recent meteor crater and volcanic craters of similar size and age both appear in the current era near Flagstaff, Arizona?

  23. William J. Murray:
    Elizabeth,

    The choice is between deliberate fine tuning by an intelligent agency, of organism and/or landscape, or chance tuning.

    That’s not the choice at all. Why did you leave out iterative tuning via variation and selection?

    Unless you are categorizing evolutionary processes as “intelligent agency”…

  24. petrushka:
    I won’t try to anticipate what DiEB will say, but many of us have written Weasel-like GAs, and it is obvious that every generation in a population must be viable and must reproduce.

    Dembski’s warmer and colder metaphors are distortions. The assume there is some agency that knows the location of a target.

    Yes. That’s why I tried to reveal the agent hidden in the weasel passive.

    Actual biology moves directly from target to target. In fact there is evidence that most point mutations on protein coding sequences are neutral. And higher level mutations don’t disturb the established and proven sequences; they merely move them around on the larger sequence.

    Yes.

    I still think that Braille is a good metaphor for evolution. Exploration rather than search. Finding the surfaces of the landscape rather than searching for a target. And the exploration metaphor fits easily into a multi-dimensional landscape.

    Yes. Although sometimes it’s worth getting rid of the metaphors altogether, and just describing it as it is. I do think that a lot of ID errors arise from conflating different levels of analysis, and forgetting that what we are talking about here is essentially simple. The universe, for some reason, started lumpy, i.e. with a non-uniform distribution of stuff. Because it is lumpy, things happen that would be improbable had stuff been uniform – physics, chemistry, life, brains.

    So why did the universe start lumpy? Maybe lumpiness is more probable than uniformity, or maybe an Intelligent Designer foresaw the awesome consequences of lumps. Either way, we can’t make an inference from a probability distribution we don’t have.

  25. To be fair on Dembski I don’t think he is making the Texas Sharp-shooter fallacy, and in fact he goes to great lengths to show that he isn’t.

    I think it is perfectly valid to say that if some weird pattern turns up that has some kind of special property, even if you don’t specify the property in advance, and you can show that patterns with that special property are a tiny subset of all possible patterns, then something other than a blind draw from a uniform pdf is at work. I think “compressibility” is a poor criterion for specialness, myself, but “functional” could work perfectly well, as Hazen et al. showed.

    His real problem was that for years he seemed to be saying that evolutionary scenarios were no better at turning up such patterns than blind search.

    Now at least, he’s realised this is wrong, so he’s saying, yes, certain evolutionary scenarios will turn up such patterns reliably, but they themselves are unlikely to be drawn blind from a population of pattern-generating scenarios.

    But that’s not true, because chemistry is going to bias scenarios towards ones that lead to interesting stuff.

    So he has to say, but chemistry itself is unlikely to turn up.

    But he doesn’t have the data for that.

  26. I have discussed SFS in a post I just made at Panda’s Thumb replying to Dembski.

    As I say there, for the arguments about CSI and Design, the part that matters is simply that Dembski and Marks admit the possibility that natural selection could put CSI into the genome. Their SFS argument is about whether we need a Designer to bring about a fitness surface that allows mutation and natural selection to do the job. I agree with Lizzie on this (I point also to the weakness of long-range interactions in physics).

    I addressed these issues in two posts (here and here) at PT in 2009, shortly after the Dembski/Marks papers came out. (Dembski seems to think I have been unaware of their papers).

  27. ‘Natural’ processes can’t find something in the vastest of vast possibility spaces, but intelligent entities can? How?

  28. I’ve been asking this for several years. The answer appears to be that they aren’t actually talking about intelligence. They are talking about omniscience.

  29. It’s always struck me how uninterested ID proponents seem to be in the nature of intelligence (William J Murray is an exception, I think).

    Intelligence is fascinating. Reducing it to a placeholder seems such a shame.

    If I am remembering correctly, Dembski et al. assume a uniform pdf as a default; and they seem to do so almost always.

    But when we get back to the early universe, a uniform pdf over what?

    We are still in the process of trying to figure out what Dark Matter is. We know it has gravitational effects; but what else? These are frontier research questions. Dembski is simply making arbitrary assumptions about interactions that are still not understood.

    And is he now back to “front loading?” It’s hard to tell where he is drifting; but I’ll place my bets on the research going on in physics and cosmology.

  31. Lizzie,
    So why did the universe start lumpy?

    Quantum mechanics. Or by ‘why lumpy’ do you mean ‘why do we have the physical laws we do have’?

  32. Unless you have a way by which “iterative tuning via variation and selection” and a viable fitness landscape came about by chance, all you are doing is begging the question to avoid paying the price for your search.

  33. And it’s faulty for a very simple reason: ultimately, he has moved back the probability of a search for a search for a search to the origin of the universe itself, and at that point, we have no way of calculating the probability of such a universe under the assumption of blind search.

    No, he hasn’t. He has successfully exhausted the universe of the possibility of gaining a better search without paying the price. He’s content to leave the best inference completely within the confines of what we know about this universe; the best – no, the **only** – sufficient cause for such searches that we know of is intelligence.

    You are the one looking “outside the known universe” into the “multiverse” – into the stark unknown, or “magic” – in order to avoid the only rational conclusion currently available in our known universe.

  34. Unless you have a way by which “iterative tuning via variation and selection” and a viable fitness landscape came about by chance, all you are doing is begging the question to avoid paying the price for your search.

    This is where a little understanding of basic chemistry and physics pays huge dividends.

    “Rigid things” are tightly bound. Atoms and molecules sit in deep potential wells; and high energies are required to move them around. They don’t vary much unless “melted.”

    “Liquid and gaseous things” are too loosely bound to produce any long-term coherent structure. Little can be built on them.

    But “soft matter” is matter in which the binding energies and kinetic energies are roughly comparable in magnitude. Structure occurs; it is flexible, but it is at the edge of coming apart. All sorts of variation are possible.

    We can observe it without expensive scientific equipment; it occurs all around us, even as we sit at computers and type. It slowly “explores” and conforms to changes in its environment. It swaps atoms and molecules to make different versions of itself, different allotropes, different compounds; all of this because it exists in a heat bath that keeps it on the edge of coming apart.

    Soft matter is very interesting stuff.

  35. William J. Murray:
    Unless you have a way by which “iterative tuning via variation and selection” and a viable fitness landscape came about by chance, all you are doing is begging the question to avoid paying the price for your search.

    Before going into why “search for a search” is not a good model, are you recognizing, as apparently Dembski now does, that given the chemistry and physics of this universe and given the fitness landscape we observe, the theory of evolution is the best explanation for the current diversity of life? That is, are you claiming that ID is necessary to explain the context in which evolution takes place but is not necessary to explain evolutionary mechanisms themselves?

    Patrick: Before going into why “search for a search” is not a good model, are you recognizing, as apparently Dembski now does, that given the chemistry and physics of this universe and given the fitness landscape we observe, the theory of evolution is the best explanation for the current diversity of life? That is, are you claiming that ID is necessary to explain the context in which evolution takes place but is not necessary to explain evolutionary mechanisms themselves?

    Dembski isn’t positively conceding that, but he does seem to be conceding that his earlier argument that evolution couldn’t account for the current diversity of life (the Explanatory Filter) doesn’t hold water. Dembski isn’t a graceful conceder, but berating his critics for not having addressed his later work when they point out flaws in his earlier work does seem to be Dembski-speak for “right, my earlier argument had problems”.

    But I too would be interested in seeing whether William’s position is that there must be an Intelligent Designer behind the universe’s life-permitting laws, or that there must be an Intelligent Designer because the universe’s laws alone are not life-permitting.

  37. With a side-order of omnipotence. ‘Knowing’, by unspecified means, where the functional stuff resides would be one thing (and itself beyond mere ‘intelligence’), but being able to generate the physics that can access it is quite another.

    This ‘inference to best explanation’ is simply hollow – use known intelligence as a start point and then discover that – gee! – it can do absolutely anything!

  38. Patrick: Before going into why “search for a search” is not a good model, are you recognizing, as apparently Dembski now does, that given the chemistry and physics of this universe and given the fitness landscape we observe, the theory of evolution is the best explanation for the current diversity of life? That is, are you claiming that ID is necessary to explain the context in which evolution takes place but is not necessary to explain evolutionary mechanisms themselves?

    As usual, you are misunderstanding Dembski. Dembski makes no such concession. Dembski spends most of his time in those articles making his case from the hypothetical position of the materialist/Darwinist. IOW, it is the materialist/Darwinist who argues that chemistry and the landscape are sufficient to provide a path towards novel biological machinery; they claim that this is where sufficient new information comes from.

    Dembski is pointing out that even if we suppose that such information is in chemistry and the landscape, the materialist/Darwinist has not accounted for the origin of the very information that they were trying to explain.

    If you print out a text from your computer, and say “the computer generated the text”, and I ask “what generated the information”, and you say “a program in the computer”, you have only begged the question back a notch.

    Landscape, iteration, modification – these are searches. The landscape could be an infinite number of ways that does not aid, and even contradicts, the potential for the development of novel biological features. The materialist/Darwinist is, in the first place, supposing that the fitness landscape is a can opener that can open biological diversity without intelligence paying the search debt along the way. When called out on this massive, groundless ideological assumption (which she says “may be impossible”), Elizabeth and others suppose a magic materialist commodity somewhere outside of the universe to pay the debt for our universe having so convenient a fitness landscape.

    Then, when called out on how she has appealed to an extra-universe magic commodity to solve the debt, she attempts to shift the burden to Dembski and claims that he has gone outside of the universe – when he never did. The point is that this is what materialists/Darwinists – like Hawking – must do to comfort themselves about how their magic can opener fitness landscape happens to exist in the first place.

    ID doesn’t assume the fitness landscape is convenient to the targets – in fact, it currently argues otherwise – because ID has intelligence, a known payer of such search debts.

    But, for all we know, the law of conservation of information applies outside the universe, and one must account for why there is a “multiverse generator” that has search information that is capable of producing a universe with such a fitness landscape.

    Dembski’s point is that no matter what you appeal to in order to solve the search problem, there is no known commodity that can pay the price – except intelligence – whether that search information is applied to organisms by teleologically manipulating them to isolated islands of function, or by generating a smooth, accessible fitness landscape and/or a reproduction program cooperatively tuned to produce biological novelty without further injections of search information – otherwise known as “front loading”.

  39. I don’t hold that there “must be” intelligence behind the universe’s life-permitting laws – I hold that intelligence is the best (actually, the only) known cause of such fine-tuning.

    I hold that the universe’s laws are necessary for life, but not currently understood to be anywhere near sufficient.

  40. William J. Murray: Dembski is pointing out that even if we suppose that such information is in chemistry and the landscape, the materialist/Darwinist has not accounted for the origin of the very information that they were trying to explain.

    Well, glad that’s settled then. Given the universe as we know it, of planets and stars, carbon and water, evolution works quite well.

  41. Zachriel: Well, glad that’s settled then. Given the universe as we know it, of planets and stars, carbon and water, evolution works quite well.

    If by “evolution” you mean “evolution unguided by intelligence”, no, that has never been shown to be possible. It has only been assumed to be the case.

  42. William J. Murray: As usual, you are misunderstanding Dembski. Dembski makes no such concession. Dembski spends most of his time in those articles making his case from the hypothetical position of the materialist/Darwinist. IOW, it is the materialist/Darwinist who argues that chemistry and the landscape are sufficient to provide a path towards novel biological machinery; they claim that this is where sufficient new information comes from.

    Well, no. We argue (or I do) that there isn’t a “new information” problem in the first place. But we do indeed argue that there is no need to posit anything other than the laws of the universe to account for what we observe – we don’t see that special tinkering by a disembodied Intelligence is likely to be required.

    So I’m not at all convinced you’ve understood Dembski. If you have, then Dembski is making a very poor fist of trying to understand our point of view!

    Dembski is pointing out that even if we suppose that such information is in chemistry and the landscape, the materialist/Darwinist has not accounted for the origin of the very information that they were trying to explain.

    Are you saying that Dembski is saying, OK, well the information is in the landscape, but how do you explain the landscape? And are you saying that Intelligence would still be needed to provide the chemistry and landscape?

    Why should Intelligence be needed to provide the chemistry and the landscape (fitness or actual)?

    If you print out a text from your computer, and say “the computer generated the text”, and I ask “what generated the information”, and you say “a program in the computer”, you have only begged the question back a notch.

    Sure. Doesn’t mean it can’t be answered in a way that dispenses with Intelligence. Let’s say we see a rock at the bottom of a cliff. We ask: why is the rock here? Ah, it fell from the cliff. Ah, but why did it fall? Well, it was eroded by rainfall. Why did the rain fall? Well because of heat from the sun evaporating the sea. Why is the sun hot? Because it is a nuclear fusion furnace. Why is the sun a nuclear fusion furnace? Because it contains hot hydrogen. Why does it contain hot hydrogen? Because of Big Bang.

    What has Intelligence got to do with any of this? Even if you stick “Intelligence” somewhere in there, we just get “why is there Intelligence?” Invoking Intelligence doesn’t get you out of your infinite regress, any more than chemistry or Big Bang does. And it has the huge disadvantage that we have no evidence that disembodied Intelligence even exists, nor, if it did, any account of how it could move stuff around. Do you think of it like a forcefield?

    Landscape, iteration, modification – these are searches. The landscape could be an infinite number of ways that does not aid, and even contradicts, the potential for the development of novel biological features.

    But how do you know that the landscape “could be an infinite number of ways”? Under what conditions could it be a different way? How do you know how frequently those conditions occur, if they occur at all?

    This is what neither you nor Dembski addresses, and yet Dembski’s argument depends on it.
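
    As a concrete (and purely invented) illustration of why the answer matters: the toy sketch below runs the same greedy hill-climber on a smooth single-peaked landscape and on a randomly shuffled copy of the same fitness values. The landscape, neighbourhood rule and trial counts are all made up for illustration; the point is only that identical “variation and selection” machinery succeeds or fails depending on how the landscape happens to be arranged.

```python
import random

# Toy illustration: the same hill-climber on a smooth landscape versus
# a randomly shuffled ("rugged") version of the same values.
# Everything here (landscape, neighbourhood, trial counts) is invented
# purely to illustrate why landscape structure matters to a search.

N = 1000
smooth = [-(x - 700) ** 2 for x in range(N)]      # single smooth peak at x = 700
rugged = smooth[:]
random.shuffle(rugged)                            # same values, structure destroyed

def hill_climb(fitness, start, steps=5000):
    """Greedy one-step hill-climbing; returns True if the global peak is reached."""
    best = max(fitness)
    x = start
    for _ in range(steps):
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n < len(fitness)]
        nxt = max(neighbours, key=lambda n: fitness[n])
        if fitness[nxt] <= fitness[x]:
            break                                 # stuck on a local optimum
        x = nxt
    return fitness[x] == best

def success_rate(fitness, trials=200):
    return sum(hill_climb(fitness, random.randrange(N)) for _ in range(trials)) / trials

print("smooth landscape success rate:", success_rate(smooth))
print("shuffled landscape success rate:", success_rate(rugged))
```

    On the smooth version essentially every start reaches the peak; on the shuffled version almost none do – which is why the frequency of “helpful” arrangements among possible landscapes, and not just their bare existence, is what the search-for-a-search argument has to quantify.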

    The materialist/Darwinist is, in the first place, supposing that the fitness landscape is a can opener that can open biological diversity without intelligence paying the search debt along the way. When called out on this massive, groundless ideological assumption (which she says “may be impossible”), Elizabeth and others suppose a magic materialist commodity somewhere outside of the universe to pay the debt for our universe having so convenient a fitness landscape.

    William, first you have to make the case that there’s a debt at all. Then, if you do, I’ll try and pay it. But until then, I’m keeping my hands in my pockets.

    Then, when called out on how she has appealed to an extra-universe magic commodity to solve the debt,

    But I didn’t.

    she attempts to shift the burden to Dembski and claims that he has gone outside of the universe – when he never did.

    I don’t know whether he has or not. He seems to have got stuck. He doesn’t seem to know whether the Information had to be Present at Big Bang (he says it must have been, because of his Law of Conservation of Information), in which case he is going to have to appeal to a source outside the universe, or whether it can’t have been, in which case his Law of Conservation of Information doesn’t hold.

    Do you think he is saying: look, there’s a Law of Conservation of Information, but it isn’t observed, so there must be an Intelligent Designer around somewhere? But in that case his argument would be circular – it’s his Law. If he thinks it is not universal, then it isn’t a Law. And if he thinks it is universal, then how does he infer it’s been broken? How can he tell, in other words, whether his Law isn’t a Law or his Law is a Law, but is broken by a Designer?

    I think he thinks it’s a Law. But it might be worth asking him.

    The point is that this is what materialists/Darwinists – like Hawking – must do to comfort themselves about how their magic can opener fitness landscape happens to exist in the first place.

    No. That’s wishful thinking, William. Nobody needs to “comfort themselves” here. We have a very elegant model, and it doesn’t even rule out an Intelligent Prime Mover, or even an occasional Miraculous Tinkerer, which is why most theists are perfectly comfortable with consensus science, and why many scientists are devout theists.

    ID doesn’t assume the fitness landscape is convenient to the targets – in fact, it currently argues otherwise – because ID has intelligence, a known payer of such search debts.

    I can’t parse this.

    But, for all we know, the law of conservation of information applies outside the universe, and one must account for why there is a “multiverse generator” that has search information that is capable of producing a universe with such a fitness landscape.

    Well, if we don’t know whether it does or doesn’t, then we don’t have to account for any of its consequences, do we?

    Dembski’s point is that no matter what you appeal to in order to solve the search problem, there is no known commodity that can pay the price – except intelligence – whether that search information is applied to organisms by teleologically manipulating them to isolated islands of function, or by generating a smooth, accessible fitness landscape and/or a reproduction program cooperatively tuned to produce biological novelty without further injections of search information – otherwise known as “front loading”.

    I think you need to step away from the metaphors, William, and tell us what you are actually talking about here – mathematically, and physically. Because this makes no sense as it stands.
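
    For what the maths actually is: the Dembski–Marks papers cited in the OP define (roughly) the following bookkeeping. With p the probability that blind (uniform) search hits the target and q the probability that the assisted search does, -log2 p is the endogenous information, -log2 q the exogenous information, and the difference log2(q/p) the “active information” the assisted search must have been given – the quantity the “search debt” talk refers to. The numbers in the sketch below are invented purely to show the arithmetic.

```python
import math

# Sketch of the Dembski-Marks bookkeeping (definitions from their
# "Conservation of Information in Search" papers; the numeric values
# of p and q below are invented purely for illustration).

def active_information(p, q):
    """Active information I+ = log2(q/p).

    p: probability that blind (uniform) search hits the target
    q: probability that the assisted search hits the target
    """
    endogenous = -math.log2(p)   # I_Omega: difficulty of the problem for blind search
    exogenous = -math.log2(q)    # I_S: residual difficulty for the assisted search
    return endogenous - exogenous

# Hypothetical numbers: a target blind search finds with probability 1e-9,
# which an assisted search (e.g. hill-climbing on a helpful landscape) finds
# with probability 0.5.
print(f"I+ = {active_information(1e-9, 0.5):.1f} bits")
```

    Their conservation-of-information claim is then that locating a search with that many bits of active information, from among possible searches, is at least as improbable as the original blind search – which is precisely the premise this thread is disputing when it asks whether physical fitness landscapes should be modelled as draws from such a space at all.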

  43. William J. Murray: If by “evolution” you mean “evolution unguided by intelligence”, no, that has never been shown to be possible. It has only been assumed to be the case.

    What about the Galapagos finches? Or the nylon-eating bacteria? Or the peppered moths?

    Or are you assuming that was Intelligently Guided? In which case, aren’t you assuming your consequent?

  44. Sorry for the barrage of responses, William, and let me add my thanks to you for coming over here, as so many of us can’t go to the other place 🙂

    I appreciate it.

    ETA: I’ve restored your posting rights (post-hack) – would you like to write a post on the Information Debt?

  45. William J. Murray:
    I don’t hold that there “must be” intelligence behind the universe’s life-permitting laws – I hold that intelligence is the best (actually, the only) known cause of such fine-tuning.

    wait-a-minute….

    Are you accepting that the universe has “life-permitting laws”? And that these must have been Designed? How would that be done by an Intelligence INSIDE the universe? Or do you think they were established once the universe got started? And where did that Intelligence come from?

    I hold that the universe’s laws are necessary for life, but not currently understood to be anywhere near sufficient.

    So you are proposing an Intelligence that, first of all, designed the universe in such a way that it could, with a bit of additional on-line tinkering, produce life?

  46. Lizzie: What about the Galapagos finches? Or the nylon-eating bacteria? Or the peppered moths?

    Or are you assuming that was Intelligently Guided? In which case, aren’t you assuming your consequent?

    What about those things? The only rational conclusion I can reach is that you think that such examples are examples of evolution without intelligence-provided information; where has that been demonstrated? If it hasn’t been demonstrated, then it cannot be claimed. Without a metric that describes what non-intelligent evolutionary processes are capable of, and not capable of, one certainly cannot claim that any evolutionary process is plausible without intelligence being part of the sufficient cause.

    I don’t assume anything about the process – that it requires intelligence to occur, or that it does not. If one is going to claim that it does not, it is on them to support such a claim.
