Gpuccio’s Theory of Intelligent Design

Gpuccio has made a series of comments at Uncommon Descent and I thought they could form the basis of an opening post. The following comments were copied and pasted from Gpuccio’s comments, starting here.

 

To onlooker and to all those who have followed this discussion:

I will try to express again the procedure to evaluate dFSCI and infer design, referring specifically to Lizzie’s “experiment”. While I do that, I will also try to clarify some side aspects that are probably not obvious to all.

Moreover, I will do that a step at a time, in as many posts as necessary.

So, let’s start with Lizzie’s “experiment”:

Creating CSI with NS
Posted on March 14, 2012 by Elizabeth
Imagine a coin-tossing game. On each turn, players toss a fair coin 500 times. As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads. The person with the highest product wins.

In addition, there is a House jackpot. Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible runs of coin-tosses. However, I’m not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. However, if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI. My ballpark estimate says it has.

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.

I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MatLab, and will post the script below. Sorry I don’t speak anything more geek-friendly than MatLab (well, a little Java, but MatLab is way easier for this)
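
For concreteness, here is a minimal, unoptimized Python sketch of the game as described above; it is an independent reimplementation, not Lizzie’s MatLab script. The population of 100, the culling of the 50 lowest products, and the single point mutation per offspring follow her description; the generation cap and the one-offspring-per-survivor rule are assumptions, and with such a small population a run may need a very large number of generations.

    import random

    N_BITS = 500        # length of each coin-toss series (1 = heads, 0 = tails)
    POP_SIZE = 100      # size of the randomly generated starting population
    KEEP = 50           # survivors per generation; the 50 lowest products are culled
    JACKPOT = 10 ** 60  # the House jackpot threshold

    def product_of_runs(bits):
        # Product of the lengths of all runs of heads; e.g. H T T H H H T H T T H H H H T T T
        # has runs 1, 3, 1, 4 and product 12. An empty product counts as 1.
        product, run = 1, 0
        for b in bits:
            if b:
                run += 1
            elif run:
                product *= run
                run = 0
        return product * run if run else product

    def mutate(parent):
        # Copy the parent and flip one randomly chosen bit (a single point mutation).
        child = parent[:]
        child[random.randrange(N_BITS)] ^= 1
        return child

    population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]

    best = 0
    for generation in range(1_000_000):
        population.sort(key=product_of_runs, reverse=True)
        best = product_of_runs(population[0])
        if best >= JACKPOT:
            print(f"Jackpot reached after {generation} generations")
            break
        survivors = population[:KEEP]                         # cull the 50 lowest products
        population = survivors + [mutate(s) for s in survivors]
    else:
        print(f"Best product after 1,000,000 generations: {best:.3e}")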

 

Now, some premises:

a) dFSI is a very clear concept, but it can be expressed in two different ways: as a numeric value (the ratio between target space and search space, expressed in bits a la Shannon); let’s call that simply dFSI; or as a categorical value (present or absent), derived by comparing the value obtained that way with some predefined threshold; let’s call that simply dFSCI. I will be especially careful to use the correct acronyms in the following discussion, to avoid confusion.

b) To be able to discuss Lizzie’s example, let’s suppose that we know the ratio of the target space to the search space in this case, and let’s say that the ratio is 2^-180, and therefore the functional complexity for the string as it is would be 180 bits.
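
A minimal sketch of the two quantities defined in premise a), using the hypothetical figures from premise b); the function names are illustrative, not gpuccio’s:

    import math

    def dfsi_bits(target_space_size, search_space_size):
        # Numeric value: -log2 of the target space / search space ratio, in bits.
        return -math.log2(target_space_size / search_space_size)

    def dfsci(dfsi, threshold_bits):
        # Categorical value: present (True) if the numeric dFSI exceeds the threshold.
        return dfsi > threshold_bits

    # Premise b): a ratio of 2^-180, i.e. a target space of 2^320 inside a search space of 2^500.
    bits = dfsi_bits(2 ** 320, 2 ** 500)   # 180.0
    print(bits, dfsci(bits, 150))          # 180.0 True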

c) Let’s say that an algorithm exists that can compute a string whose product exceeds 10^60 in a reasonable time.

If these premises are clear, we can go on.

Now, a very important point. To go on with a realistic process of design inference based on the concept of functionally specified information, we need a few things clearly defined in any particular example:

1) The System

This is very important. We must clearly define the system for which we are making the evaluation. There are different kinds of systems: the whole universe, our planet, a lab flask. They are different, and we must tailor our reasoning to the system we are considering.

For Lizzie’s experiment, I propose to define the system as a computer or informational system of any kind that can produce random 500-bit strings at a certain rate. For the experiment to be valid as a test of a design inference, some further properties are needed:

1a) The starting system must be completely “blind” to the specific experiment we will make. IOWs, we must be sure that no added information is present in the system in relation to the specific experiment. That is easily realized by having the system assembled by someone who does not know what kind of experiment we are going to make. IOWs, the programmer of the informational system just needs to know that we need random 500-bit strings, but he must be completely blind to why we need them. So, we are sure that the system generates truly random outputs.

1b) Obviously, an operator must be able to interact with the system, and must be able to do two different things:

– To input his personal solution, derived from his personal intelligent computations, so that it appears to us observers exactly like any other string randomly generated by the system.

– To input in the system any string that works as an executable program, whose existence will not be known to us observers.

OK?

2) The Time Span:

That is very important too. There are different Time Spans in different contexts. The whole life of the universe. The life of our planet. The years in Lenski’s experiment.

I will define the Time Span very simply, as the time from Time 0, which is when the System comes into existence, to Time X, which is the time at which we observe for the first time the candidate designed object.

For Lizzie’s experiment, it is the time from Time 0 when the specific informational system is assembled, or started, to time X, when it outputs a valid solution. Let’s say, for instance, that it is 10 days.

OK?

3) The specified function

That is easy. It can be any function objectively defined, and objectively assessable, in a digital string. For Lizzie’s experiment, the specified function will be:

Any string of 500 bits where the product calculated as described exceeds 10^60

OK?

4) The target space / search space ratio, expressed in bits a la Shannon. Here, the search space is 500 bits. I have no idea how big the target space is, and apparently neither does Elizabeth. But we both have faith that a good mathematician can compute it. In the meantime, I am assuming, just for discussion, that the target space is 320 bits big, so that the ratio is 180 bits, as proposed in the premises.

Be careful: this is not yet the final dFSI for the observed string, but it is a first evaluation of its upper bound. Indeed, a purely random System can generate such a specified string with a probability of 1:2^180. Other considerations can certainly lower that value, but not increase it. IOWs, a string with that specification cannot have more than 180 bits of functional complexity.

OK?

5) The Observed Object, candidate for a design inference

We must observe, in the System, an Object at time X that was not present, at least in its present arrangement, at time 0.

The Observed Object must comply with the Specified Function. In our experiment, it will be a string with the defined property, that is outputted by the System at time X.

Therefore, we have already assessed that the Observed Object is specified for the function we defined.

OK?

6) The Appropriate Threshold

That is necessary to transform our numeric measure of dFSI into a categorical value (present / absent) of dFSCI.

In what sense does the threshold have to be “appropriate”? That will be clear if we consider the purpose of dFSCI, which is to reject the null hypothesis of a random generation of the Observed Object in the System.

As a preliminary, we have to evaluate the Probabilistic Resources of the system, which can be easily defined as the number of random states generated by the System in the Time Span. So, if our System generates 10^20 random strings per day, in 10 days it will generate 10^21 random strings, that is about 70 bits.

The Threshold, to be appropriate, must be many orders of magnitude higher than the probabilistic resources of the System, so that the null hypothesis may be safely rejected. In this particular case, let’s go on with a threshold of 150 bits, certainly larger than necessary, just to be on the safe side.
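
The arithmetic behind the 70-bit figure, as a quick check (the 10^20-strings-per-day rate is the illustrative number above, not a measurement):

    import math

    strings_per_day = 10 ** 20  # gpuccio's illustrative generation rate
    time_span_days = 10
    random_states = strings_per_day * time_span_days  # 10^21 random strings in the Time Span
    print(math.log2(random_states))                   # ~69.8, i.e. about 70 bits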

7) The evaluation of known deterministic explanations

That is where most people (on the other side, at TSZ) seem to become “confused”.

First of all, let’s clarify that we have the duty to evaluate any possible deterministic mechanism that is known or proposed.

As a first hypothesis, let’s consider the case in which the mechanism is part of the System, from the start. IOWs the mechanism must be in the System at time 0. If it comes into existence after that time because of the deterministic evolution of the system itself, then we can treat the whole process as a deterministic mechanism present in the System at time 0, and nothing changes.

I will treat separately the case where the mechanism appears in the system as a random result in the System itself.

Now, first of all, have we any reason here to think that a deterministic explanation of the Observed Object can exist? Yes, we have indeed, because the very nature of the specified function is mathematical and algorithmic (the product of the sequences of heads must exceed 10^60). That is exactly the kind of result that can usually be obtained by a deterministic computation.

But, as we said, our System at time 0 was completely blind to the specific problem and definition posed by Lizzie. Therefore, we can be safely certain that the system in itself contains no special algorithm to compute that specific solution. Arguing that the solution could be generated by the basic laws of physics is not a valid alternative (I know, some Darwinists at TSZ will probably argue exactly that, but out of respect for my intelligence I will not discuss that possibility).

So, we can more than reasonably exclude a deterministic explanation of that kind for our Observed Object in our System.

7) The evaluation of known deterministic explanations (part two)

But there is another possibility that we have the duty to evaluate. What if a very simple algorithm arose in the System by random variation? What if that very simple algorithm can output the correct solution deterministically?

That is a possibility, although a very unlikely one. So, let’s consider it.

First of all, let’s find some real algorithm that can compute a solution in reasonable time (let’s say less than the Time Span).

I don’t know if such an algorithm exists. In my premise c) at post #682 I assumed that it exists. Therefore, let’s imagine that we have the algorithm, and that we have done our best to ensure that it is the simplest algorithm that can do the job (it is not important to prove that mathematically: it’s enough that it is the best result of the work of all our mathematician friends or enemies; IOWs, the best empirically known algorithm at present).

Now we have the algorithm, and the algorithm must obviously be in the form of a string of bits that, if present in the System, will compute the solution. IOWs, it must be the string corresponding to an executable program appropriate for the System, and that does the job.

We can obviously compute the dFSI for that string. Why do we do that?

It’s simple. We have now two different scenarios where the Observed Object could have been generated by RV:

7a) The Observed Object was generated by the random variation in the System directly.

7b) The Observed Object was computed deterministically by the algorithm, which was generated by the random variation in the System.

We have no idea of which of the two is true, just as we have no idea if the string was designed. But we can compute probabilities.

So, we compute the dFSI of the algorithm string. Now there are two possibilities:

– The dFSI for the algorithm string is higher than the tentative dFSI we already computed for the solution string (higher than 180 bits). That is by far the most likely scenario, probably the only possible one. In this case, the tentative value of dFSI for the solution string, 180 bits, is also the final dFSI for it. As our threshold is 150 bits, we infer design for the string.

– The dFSI for the algorithm string is lower than the tentative dFSI we already computed for the solution string (lower than 180 bits). There are again two possibilities. If, however, it is higher than 150 bits, we infer design just the same. If it is lower than 150 bits, we state that it is not possible to infer design for the solution string.

Why? Because a purely random pathway exists (through the random generation of the algorithm) that leads deterministically to the generation of the solution string, with a total probability for the whole process that is higher than our threshold allows (IOWs, a complexity lower than 150 bits).

OK?
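
Putting points 4) to 7) together, the decision rule described above reduces to something like the following sketch, a paraphrase in code that assumes both dFSI values have already been estimated:

    def infer_design(solution_dfsi, algorithm_dfsi, threshold=150):
        # The relevant complexity is the cheapest purely random route to the Observed
        # Object: generate the solution string directly (7a), or generate the algorithm
        # string and let it compute the solution deterministically (7b).
        effective_dfsi = min(solution_dfsi, algorithm_dfsi)
        return effective_dfsi > threshold  # True = infer design, False = no inference

    print(infer_design(180, 300))  # algorithm more complex than the solution -> True
    print(infer_design(180, 160))  # algorithm between 150 and 180 bits       -> True
    print(infer_design(180, 120))  # algorithm below the 150-bit threshold    -> False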

8) Final considerations

So, some simple answers to possible questions:

8a) Was the string designed?

A: We infer design for it, or we do not. In science, we never know the final truth.

8b) What if the operator inputted the string directly?

A: Then the string is designed by definition (a conscious intelligent being produced it). If we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative.

8c) What if the operator inputted the algorithm string, and not the solution string?

A: Nothing changes. The string is still designed, because it is the result of the input of a conscious intelligent operator, although an indirect input. Again, if we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative. IOWs, our inference is completely independent of how the designer designed the string (directly or indirectly).

8d) What if we do not realize that an algorithm exists, and the algorithm exists and is less complex than the string, and less complex than the threshold?

A: As already said, we would infer design, at least until we are made aware of the existence of such an algorithm. If the string really originated randomly through a random emergence of the algorithm, that would be a false positive.

But, for that to really happen, many things must become true, and not only “possible”:

a) We must not recognize the obvious algorithmic nature of that particular specified function.

b) An algorithm must really exist that computes the solution and that, when expressed as an executable program for the System, has a complexity lower than 150 bits.

I am absolutely confident that such a scenario can never be real, and so I believe that our empirical specificity of 100% will always be confirmed.

Anyway, the moment that anyone shows the algorithm with those properties, the design inference for that Object is falsified, and we have to assert that we cannot infer design for it. This new assertion can be either a false negative or a true negative, depending on whether the solution string was really designed (directly or indirectly) or not (randomly generated).

That’s all, for the moment.

AF adds “This was done in haste. Any comments regarding errors and omissions will be appreciated.”

 

 

 

263 thoughts on “Gpuccio’s Theory of Intelligent Design”

  1. Does anyone else have difficulty following this?

    Wouldn’t it be simpler to define this in a bit of pseudocode?

    Actually, rereading this I see that Elizabeth already did that, which leads me to ask what GP is talking about.

    However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.

    This is a fairly clear operational definition which any competent programmer could take as a system specification. Why can’t GP simply address it and say whether it qualifies or doesn’t qualify?

  2. I’ll take a stab at summarizing gpuccio’s argument, as I did for Upright Biped’s. Gpuccio seems to be more willing than Upright to offer feedback.

    I’ll post the summary later today. 

  3. Back in about the 1980s – during a time when some major corporations still had real research labs with real chemists and physicists doing research – one of those companies had hired a disastrous series of computer programmers who couldn’t seem to do anything right.

    The reason finally became clear; the programmers could code up a storm and write efficient code and compilers, but they had no clue about how to write a program that actually incorporated physical laws to model physical phenomena. The research director finally concluded that programmers who majored in computer science were not suitable for research labs that were developing programs to be run on large computers and supercomputers to model physical phenomena. The science backgrounds of these programmers were simply too weak or effectively nonexistent.

    The director finally called a meeting of the entire research division and laid out what the new hiring practices for programmers would be. The solution turned out to be to hire engineers, physicists, and chemists who were familiar with writing programs even though their coding may not start out compact and elegant. Those issues of making coding efficient and fast could come later, after the physics and chemistry were properly incorporated into those programs.

  4. 1a) The starting system must be completely “blind” to the specific experiment we will make. IOWs, we must be sure that no added information is present in the system in relation to the specific experiment. That is easily realized by having the system assembled by someone who does not know what kind of experiment we are going to make. IOWs, the programmer of the informational system just needs to know that we need random 500-bit strings, but he must be completely blind to why we need them. So, we are sure that the system generates truly random outputs.

    The system is operationally defined by Elizabeth in my previous post. The initial 500 bit string is random, but child strings will be mutated from the starting string, and subsequent child strings will be iteratively mutated from their parents.

    The culling of the children is unambiguously described. I fail to see why this system requires a lot of discussion.

    The selection “oracle” has no information about the target string. Only a way of “measuring” a length parameter and passing on a subset of children.

  5. Joe: “And YES, THAT means that all you sorry losers have to do is get off of your butts and demonstrate that blind and undirected processes can produce dFSCI. “

    But evolution is a “non-conscious directed process” whose “direction” is provided by feedback from a successful population.

    Why would gpuccio even describe something akin to “pure randomness” as being a required element of evolution?

    If an OP amp represented a “population” and a pair of resistors represented the “environment”, kairosfocus could put together a circuit with feedback that would demonstrate that the “output” and thus “input” of the system would behave in a mutually beneficial fashion within a very small “config space”.

    The OP amp, (population), would adapt to variations in feedback loop ratios and even power supply changes but in no way would it be purely random.

    Ask KF how it’s done.

    He can show you in five minutes how powerful feedback is in a system such as evolution.

     

  6. It seems unlikely that this example (Elizabeth’s genetic algorithm) is going to show that dFCSI can be put into the genome by natural selection. When I tried to assert that in the previous discussion with gpuccio, he argued that his definition of dFCSI only applies after one takes into account the effect of natural selection, which he views as a deterministic force.  Here is my comment that includes his:

    (My comment on September 26 at TSZ): gpuccio has responded (in comment #596 here) at some length. Basically, gpuccio’s argument is that dFCSI tests whether random variation can produce an adaptation — that it is only computed when it has been ruled out that natural selection can produce such an adaptation:

    “[gpuccio]: the judgement about dFSCI implies a careful evaluation of available deterministic explanations, as said many times.”

    I think if you look at gpuccio’s comment 596 you’ll see gpuccio saying that. Of course William Dembski doesn’t need to do that. Although Dembski does have some wording about excluding simple deterministic natural explanations, he means simple things like moss growing always on the north side of trees owing to the shade. He does not have to work out whether natural selection can do the job because his Law of Conservation of Complex Specified Information is supposed to rule that out.

    gpuccio’s dFCSI is different. I suspect that after a while gpuccio will say that Elizabeth’s GA does not produce dFCSI because the information you get in the bit string has to have the part that natural selection puts there subtracted before you start to compute dFCSI.

     

  7. So he wins by asserting that RM+NS can’t do the job, when the point of GAs is to demonstrate it can?

  8. I think it works out to saying that NS could do the job if it has the requisite mutational input. But in the cases where gpuccio wants to apply the argument, RM is not capable of coming up with enough mutational change at once. These are Behe-like arguments and I think that the effect of the dFCSI is to say that too much mutational change at once is needed.

    If individual mutants can be individually selected either one after another, or simultaneously and recombined into the string, I think gpuccio would say that we take that into account and then dFCSI is small. 

  9. Saltation was abandoned 70 years ago in the original synthesis. Perhaps GP could provide an observed example, if he thinks it happens. Perhaps a before and after snapshot of a new protein domain.

  10. The company Mike is talking about adopted a “solution” that is far from optimal. Even the best programmers (maybe especially the best) are not going to be experts in every subject area on which a program might be helpful. My first programming job out of computer science school was to write accounting programs – general ledger, payroll, accounts receivable, etc. – about which I knew absolutely nothing. So was it dumb to hire me rather than an accountant who might, with effort, produce incompetent code?

    You do NOT start with terrible code that sort of does what’s needed, in the hopes of “cleaning it up” later. What you do is pair a competent programmer with someone competent in the subject matter, and they work as a team, neither of them more important than the other.

    Then the subject matter expert draws up a detailed specification – here is what the program should do, here are the metrics to ensure this, here are the testing methods and project milestones, etc. In this phase, the programmer might suggest various ways to accomplish the same goals more efficiently, making better use of resources, reducing costs, increasing speed, and so on.

    In the next phase, the programmer does most of the work, generating first testable models, which the subject expert evaluates. Gradually, this is scaled up to the target product, ensuring that at no time does the project wander afield of the initial specification.

    What Mike has provided here is an example of bad management, both in the expectation that programmers will be good chemists or whatever, and then in the expectation that good chemists will be capable programmers. This bad management did not find a “solution”, they found a woefully suboptimal (and very wrong) approach that was slightly better than a hopelessly stupid approach.

    And yes, the teamwork, combined-expertise approach was well known in the 1980s. I was there. Many books about it had already been written.         

  11. Flint notes: What Mike has provided here is an example of bad management, both in the expectation that programmers will be good chemists or whatever, and then in the expectation that good chemists will be capable programmers. This bad management did not find a “solution”, they found a woefully suboptimal (and very wrong) approach that was slightly better than a hopelessly stupid approach.

    Flint hits the nail right on the head. The management of that company was so bad that they were referred to among industry analysts as a bunch of cowboys who couldn’t shoot straight; they were constantly shooting themselves in the foot, and they finally spastically shot themselves in the head. In a period of about 7 years, they went from having a 4 billion per year research war chest to being 13.5 billion in debt.

    When they decided to lay off their entire research staff, they put a major corporation onto an inevitable death spiral right into bankruptcy, despite the fact that they held many of the key patents in technology that is ubiquitous in our society today. They now have no part in any of it; they had to sell those patents to pay massive debts incurred as a result of all that mismanagement. The amount of damage that intelligent people would have had to work hard to do deliberately, those managers accomplished just by sleepwalking and ignoring expert advice from their research staff and outside consultants.

    Indeed, I was there also.

  12. There are three key errors in the design argument and they are reflected in Gpuccio’s posts. I don’t suppose anyone has a problem with calculating the ratio of all possible strings to target strings. If he wants to call that dFSI then that’s OK. The problems arise in saying that if that ratio is sufficiently large we can deduce design.

     

    The problems are:

     

    1) Assumption of uniform probability for each string using Bernoulli’s principle of indifference. As Keynes has shown, this principle does not give a unique result. For example, instead of assuming each bit is equally likely to be heads or tails, why don’t we assume all ratios of 1s to 0s in the string are equally likely? An assumption has been sneaked in about how the strings were generated, because that matches our usual experience. Lizzie’s example makes that explicit, but the calculation of dFSI does not. (See the sketch at the end of this comment.)

     

    2) Assumption that the target was set independent of the deterministic process. In evolution it is the deterministic process – replication plus survival – that determines the target: those things that survive.  This is not sneaking in information because the target did not exist prior to the process.  The process created the target.

     

    3) Giving design as a solution a privileged position. Why do we not attempt to calculate the probability that a design solution meets the target? Gpuccio says:

    we have the duty to evaluate any possible deterministic mechanism that is known or proposed.

    Why do we not get to review possible designed solutions?  It is in this sense that design is just the default solution.
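
    To make the point in 1) concrete, here is a rough sketch: the same outcome, “all 500 tosses are heads”, is assigned wildly different probabilities by two equally “indifferent” priors.

        from math import comb

        N = 500
        k = N  # the outcome "all 500 tosses are heads"

        # Prior 1: each toss independently and equally likely to be heads or tails.
        p_uniform_bits = 0.5 ** N  # about 3e-151

        # Prior 2: every head-count from 0 to 500 is equally likely, and strings
        # with the same count are equally likely among themselves.
        p_uniform_counts = (1 / (N + 1)) / comb(N, k)  # 1/501, about 2e-3

        print(p_uniform_bits, p_uniform_counts)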

     

     

     

     

  13. I would be happy to see a coherent method for excluding evolution as the source for a particular sequence. So far the best I’ve seen is the argument that if there are no living cousin sequences, poof.

    That seems vaguely related to Axe’s argument that function is isolated. These seem to be the best if not the only arguments for ID, and they are basically god of the gaps arguments.

    If this is GP’s argument, couldn’t we skip over the rhetoric and discuss this?

    Perhaps we could begin the debate by paraphrasing Darwin: What good is half a protein domain coding sequence?

  14. Petrushka:

    Does anyone else have difficulty following this?

    Yes – another tl;dr theory of ID! 

    Beware the theorist that tries to tell you what is, and what isn’t, scientific: 

    GP, to me:

    Finally, even if those intermediates existed and were eliminated, someone should be able to produce some functional, naturally selectable intermediate in the lab. After all, if thousands of those intermediates were found by RV, how is it that you cannot find even one of them?

    No, your reasoning is not credible and not scientific at all.

    GP is essentially telling me that, if we do not encounter a ‘new protein domain’ with the probabilistic resources available in laboratories (a few litres of culture? 20 years? An extreme population bottleneck every morning as the spotty lab tech pipettes a new culture? Clonal competition?) in a modern organism that, one can be fairly sure, already has sufficient ‘protein domain’ modules to generate most likely biochemical functions, we are justified in ruling out RV + NS as a source of such domains on a planet over the course of some hundreds of millions of years.

    Even ignoring the scaling by ‘test-tube’ and population size, in (say) a hundred million years, 5 million times as many generations are produced as in 20 years. Failure to observe an event with a probability of less than 1 in n after n trials does not justify the assertion that the probability is most likely zero. And it is never clear how the designer is supposed to know where the good stuff lies in ‘protein space’, to shortcut RV+NS. We certainly can’t tell just by thinking about it, with no reference to existing structures, so intelligence per se isn’t the answer.
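
    As a rough check on that last point: even when an event has probability exactly 1 in n, the chance of it never occurring in n independent trials is about 37% (roughly 1/e), so a null result in a small sample says little about a much larger one.

        # Chance of zero occurrences in n independent trials of an event whose
        # per-trial probability is exactly 1 in n; it converges to 1/e ~ 0.37.
        for n in (10 ** 3, 10 ** 6, 10 ** 9):
            print(n, (1 - 1 / n) ** n)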

    Me: That unavoidable obscuration does not justify the assertion “RV and NS cannot do it.”

    GP: The correct scientific assertion is: ““RV and NS is not a credible scientific explanation for it”. And it is perfectly justified.

    The difference between ‘scientific’ assertions and other kinds is lost on me, I’m afraid. “RV and NS cannot do it” were GP’s own words.

    Protein domains are modules – but they are themselves modular. They are composed of repeats. Look at the proteins on this page (they’re prions, not enzymes, but it does not matter). The red helixes are modules. Then look at the ‘flattened’ structures. See how many different sequences form themselves ‘naturally’ into helixes. Including some very short ones – Pro-Asp-Ala, or Pro-Lys-Arg-Ala-Arg. The essence is a particular mix of hydrophobic and hydrophilic acids, which cause the peptide to curl up in that distinctive helical manner. Other motifs give sheets. You really think “RV” can’t find these sequences, and glue them together to make longer stretches with all the appearance of ‘dFSCI’?

    In a world with just five amino acids (Pro, Asp, Ala, Lys and Arg), how hard would it be to encounter one of those sequences – even if they were the only possible helix-forming sequences in the entire space, which is not very likely – and glue them into longer stretches by within-gene duplication and migration of fragments between genes? The processes that cause mutation have no idea where genes (or even codons) start and end, and certainly aren’t restricted to point mutation.

    The modern world has 20 acids, and peptides thousands of acids in length. Does that massive amplification of ‘search space’ make it easier or harder to find the ‘functional percentage’ within it, from currently viable bridgeheads?

  15. gpuccio has pointed to the UD thread’s comments numbers 680-696 as clarifying the issue of how gpuccio treats possible NS explanations in gpuccio’s argument.   The critical ones seem to be numbers 693 and 694.  Being a “bear of little brain” (to quote Winnie-the-Pooh) I am having some trouble understanding what they mean.

    Fortunately gpuccio has now responded to my guesses as to what the argument does with NS, in comment 760. gpuccio sort-of agrees with my most recent guesses.  However there are still some qualifications that puzzle me.  So here I include gpuccio’s entire comment 760 so that people here can help figure things out for me.  

    The only comment I will make right now is that I am “at TSZ” not at “TSA”. In the U.S. the acronym TSA is notorious (Transportation Security Administration).  That is where I will be Sunday morning when I have to travel on a plane.

    Anyway, any advice as to what “explicit, convincing, and testable” means in the context of Elizabeth’s GA example would be of interest.

    —- gpuccio’s comment 760 follows: —-

    To Joe Felsenstein (at TSA):

    I think it works out to saying that NS could do the job if it has the requisite mutational input.

    If you mean “a sufficient number of functional selectable intermediates for each basic protein domain”, then I agree.

    But in the cases where gpuccio wants to apply the argument, RM is not capable of coming up with enough mutational change at once.

    As clarified many times, it need not be “at once”. It can happen in any reasonable time (within the Time Span), and with any number of mutational events. The only requirement is that, in the end, the necessary mutational change must be present at the same time, so that it may be selected.

    These are Behe-like arguments and I think that the effect of the dFCSI is to say that too much mutational change at once is needed.

    They are in part similar to what Behe says, but not completely. My arguments, however, are perfectly compatible with what Behe says.

    If individual mutants can be individually selected either one after another, or simultaneously and recombined into the string, I think gpuccio would say that we take that into account and then dFCSI is small.

    Yes, I agree, provided that the “selection procedure” is explicit, convincing and testable.

    — end of gpuccio’s comment 760 —

  16. 3) Giving design as a solution a privileged position. Why do we not attempt to calculate the probability that a design solution meets the target? Gpuccio says:

    we have the duty to evaluate any possible deterministic mechanism that is known or proposed.

    Why do we not get to review possible designed solutions?  It is in this sense that design is just the default solution.

    This is exactly why gpuccio’s dFSCI is a measure of ignorance, not of design.  The English translation of dFSCI is “This is complex and I don’t know what deterministic mechanism could have created it.”  Concluding “Therefore it is designed.” is not valid, particularly in the absence of any known or proposed designer.

     

  17. gpuccio,
    Thanks for laying out your position in more detail. Before getting into the questions I have, I’d like to reciprocate by adding some data from the Creating CSI with NS thread.

    With respect to your analysis of the time span available, I found four GA implementations mentioned in that thread, including Lizzie’s. Ido used a population size of 100,000 and got a solution in 10,000 generations. At most this means his GA considered 10^11 strings.

    R0b used a population size of 500 and got a solution in 1 to 3 million generations, for a maximum of 1.5x10^9 strings.

    Patrick got a solution in 700 generations using a population of 10,000, considering a maximum of 7x10^7 strings.

    Lizzie’s solution apparently tested more strings.

    All but Lizzie’s solution finished in minutes, rather than the 10 days you use as a limit. Your 10^21 strings is many, many decimal orders of magnitude over that required in practice.

    If by “the probabilistic resources of the System” you mean the number of possible strings, that is 2^500. 10^11 is about 2^36, so it seems that the number of possible strings is at least 464 binary orders of magnitude greater than the number that needed to be considered.

  18. gpuccio,

    The second area I’d like to provide data for is your estimate of the solution space. olegt provided a nice analysis here, here, here, and here. He arrives at a value of 6.48x10^31 for the number of strings with a value over 10^60. This is roughly 2^106, which I believe means that the upper limit of functional complexity in a solution string is (500 - 106) or 394 bits.

    Do I understand your definitions correctly?
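
    A quick check of those figures, taking olegt’s count at face value rather than re-deriving it:

        import math

        target_count = 6.48e31                 # olegt's count of strings with product > 10^60
        target_bits = math.log2(target_count)  # ~105.7, i.e. roughly 106 bits
        print(500 - target_bits)               # ~394 bits of functional complexity, at most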

     

  19. gpuccio,

    1b) Obviously, an operator must be able to interact with the system, and must be able to do two different things:

    – To input his personal solution, derived from his personal intelligent computations, so that it appears to us observers exactly like any other string randomly generated by the system.

    – To input in the system any string that works as an executable program, whose existence will not be known to us observers.

    So what you are calling “the system” is simply a source of bit strings that may be generated either by a human or by an algorithm.

    What you seem to be ignoring here and throughout your summary is that the algorithm being used is a model of a small number of known, observed mechanisms that are part of the modern synthesis. This is not an algorithm invented by a human with knowledge of the problem domain. Any conclusions you draw about the algorithm are therefore applicable to biological evolution.

    If you agree that the strings generated by the algorithm have sufficient functional complexity to warrant the dFSCI boolean if they were generated by a human, you are recognizing that mechanisms observed in real biological systems also have the capability to generate that much functional complexity. Given that, there is no reason to assume dFSCI exists in biological systems simply because their functional complexity is beyond some threshold.

     

  20. Being a bear with an even smaller brain, I’m still struggling with this. The way I read it, if biologists are correct about the history of life, then genomes contain no dFSCI at all, since all genomes are the result of accumulated small mutations that have passed through the sieve of natural selection. Even neutral mutations are sieved.

    GP’s argument rests not on any calculation, but on the principle voiced by Darwin in “Origin”: that all features must be the cumulative result of small variations.

  21. Gpuccio

    I see you responded on UD.  (I wish you would come over here.  It is ridiculously difficult trying to keep up with two threads simultaneously and most of us are banned from UD.)

    Anyhow thanks for taking the time. Unfortunately I haven’t got time to pursue all three points. So I will concentrate on the last one as it is the easiest to explain. You write:

    For the same reason that we do not calculate the probability of a deterministic solution. Both deterministic mechanisms and design are not random systems. There is no probability to consider, because they are events that are not described probabilistically.

    This isn’t true. All solutions, designed and non-designed, can and should be described probabilistically.

    A deterministic solution is just a non-designed solution that has a probability of 1 of producing the target.  In this sense it can be described probabilistically. But anyway there are no such solutions in real life.  All real solutions may fail to produce the target for some reason or another (and therefore have a probability of succeeding <1) and if we are assessing whether they are good explanations we should take into account the probability that they produce the target and also the probability that the solution exists in the first place.

    In the same way, all designed solutions have a probability that the designer exists and a probability that the designer achieves the target (which is <1 unless you are assuming an omnipotent designer!). E.g. if we propose life was designed by a bunch of extraterrestrials from another galaxy, we should properly assess the probability of them accessing earth, the probability of them wanting to create life, and the probability of them succeeding given the obstacles in their way. This is how the much quoted forensic scientists and archaeologists infer design.

    A proper “inference to best explanation” would compare the probability of design solutions existing and achieving the target to other solutions, including other designed solutions and other non-designed solutions. Instead you dismiss known non-designed solutions because you believe there is a low probability they will achieve the target, and then assume that a designed solution has a higher probability. The non-designed solutions have higher priority in the sense that they are considered first. But the designed solution is the default, because once the others are dismissed it is taken to be the solution without attempting to examine its plausibility.

  22. gpuccio has now explained at UD that in regards to “explicit, convincing, and testable”:

    I don’t see great application of those principles in the case of Lizzie’s example, because:

    a) There is no deterministic mechanism in the System at start.

    b) The only deterministic mechanism implied is the algorithm, which is designed by us. So, we must only check if the algorithm works, and in that case it is certainly explicit (we know the string of the program), convincing (we can see it work) and testable (we can verify that it gives a correct solution). So, that is no problem. The only problem, as exhaustively explained, is that the algorithm was introduced in the system by a designer, or it was generated in it randomly. So, we make a design inference for it, as explained (please, don’t make me say the same things a lot of times).

    The algorithm in Elizabeth’s example was designed by Elizabeth specifically so as to model a simple case of random mutation and natural selection. It is not an algorithm that contains within it the solution. It simply contains a fitness function that computes the product of lengths of runs of heads for any proposed string and alters the reproduction of that string accordingly. Of course it does not simply transfer information that it contains that specifies which strings are good solutions, because it does not have any such information.

    onlooker (below) has supplied a good summary of the results of computer simulations of the algorithm by three people (three different implementations) in Elizabeth’s original thread. The estimates of the sizes of the original space and the target seem to be compatible with dFCSI being present.

    So the issue now is, does gpuccio’s application of dFCSI say that it is present in this case?

    I can see three possibilities:

    (a) gpuccio argues that the designedness of Elizabeth’s algorithm means that dFCSI is present but comes from that. If that is gpuccio’s position then it implies that all adaptations in life are said to possess dFCSI because the original life form is said to have it. In that case even if RM+NS had produced all adaptations after the Origin Of Life, they would still all be declared to be the result of design.

    (b) gpuccio acknowledges that the solutions found do exhibit dFCSI, and that therefore the observation of dFCSI does not indicate that the solution was arrived at by a Designer.

    (c) gpuccio argues that dFCSI is only present if RM+NS cannot arrive at the solution. In which case, since Elizabeth’s algorithm is modelling RM+NS, it does not show that dFCSI can arise by RM+NS (by definition).

    But perhaps I misunderstand. 

  23. This is why I have asked repeatedly how GP interprets the first 20,000 generations in the Lenski experiment, which resulted in no observable somatic change, but which produced neutral mutations that enabled the first of a series of adaptive mutations.

    This appears to be a laboratory confirmation of a principle long conjectured to account for multi-step adaptations and inventions. It really is a small scale irreducible invention (in Behe’s original sense of the word).

    I liken Lenski’s experiment to Newton’s use of cannonball trajectories to extrapolate planetary orbits. Once you have observed a regular mechanism, it becomes the default explanation for that class of phenomena.

  24. Joe: “However just looking at a computer bus there is no way to predict what will come next, not even if you knew what program was running. “

    kairosfocus and I disagree.

    If you know that a program running on an MCU from internal RAM and ROM is programmed to make consecutive *external* fetches, you can see, with an ICE monitoring the bus, that after a fetch from location X there will be a fetch from (X+1), just as predicted.

     

  25. Mark Frank: “A proper “inference to best explanation” would compare the probability of design solutions existing and achieving the target  to other solutions including other designed solutions and other non-designed solutions. Instead you dismiss known non-designed solutions because you believe there is a low probability they will achieve the target and then assume that a designed solution has a higher probability. “

    Yes, and this is exactly where the IDists pull back from the debate.

    While they will criticize “design without a designer” capability, they refuse to do the same for ID capability.

     

  26. My reading of the fitness landscape is a hidden Markov model, which has no perfect solution. There really is no alternative process other than evolution for design.

  27. gpuccio,

    So what you are calling “the system” is simply a source of bit strings that may be generated either by a human or by an algorithm.

    Not exactly.

    [snip]

    The System is essentially the source of strings generated by pure RV.

    [snip]

    At the same time, it must allow the intervention of a designer, either in the form of the input of a final solution, or in the form of the input of an algorithm that can compute a solution.

    You are contradicting yourself. Either the “System” is the source of strings generated by pure random variation or it also allows the intervention of the designer. Hence my summary is correct: What you are calling “the system” is simply a source of bit strings that may be generated either by a human or by an algorithm.

     

  28. gpuccio,

    What you seem to be ignoring here and throughout your summary is that the algorithm being used is a model of a small number of known, observed mechanisms that are part of the modern synthesis. This is not an algorithm invented by a human with knowledge of the problem domain. Any conclusions you draw about the algorithm are therefore applicable to biological evolution.

    You are completely wrong here. The algorithm is a designed algorithm. You and your friends may believe that it is a “model of the modern synthesis”, or of part of it. Unfortunately, that is a completely unwarranted statement.

    GAs are designed, and they are never a model of NS, which is the only relevant mechanism in the modern synthesis. Your algorithms for Lizzie’s example are no exception. They are based on controlled random variation and intelligent selection. They are forms of intelligent design.

    Let’s break down a typical simple GA such as those used in Lizzie’s experiment. We have:

    1) An initial set of strings. This models the initial population in a biological context.

    2) A means of generating new strings from existing strings. This models observed evolutionary mechanisms, including point mutations and crossover (sexual reproduction). I don’t believe any of the simulations used to solve Lizzie’s problem used more complex mechanisms.

    3) A fitness function. This models the environment in a biological context. Strings with greater fitness will have a greater likelihood of reproducing. There are two characteristics to note about this function. First, it is obviously much, much simpler than real world survival and reproduction criteria. That’s typical of any model. Second, except in toy GAs like Dawkins’ weasel program that is so admired at UD, this function does not encode a solution. The actual solution isn’t known; the fitness function merely ranks two or more strings relative to each other.

    So in one sense you are correct, GAs are designed. They are designed as models of naturally occurring mechanisms that have been repeatedly observed. Whatever label you apply to the type of selection taking place is immaterial to the fact that it is a model, however simple, of the real world.

    If this model can generate significant functional complexity, as it can by your own definition, there is every reason to expect that the same mechanisms operating in real biological systems can do the same. That means that your assumption that an intelligent designer is required has no empirical support.

  29. Mark Frank:

    A proper “inference to best explanation” would compare the probability of design solutions existing and achieving the target  to other solutions including other designed solutions and other non-designed solutions. 

    Yes!   It seems that the concept of Bayesian hypothesis testing, which is essential to scientific thinking, is beyond gpuccio’s frame of reference.

  30. I see you responded on UD.  (I wish you would come over here.  It is ridiculously difficult trying to keep up with two threads simultaneously and most of us are banned from UD.)

    Hear, hear.  You are welcome to post here, as are all but one participant at UD.  Almost none of us are able to post on your thread.  I think it is safe to say that none of your posts are likely to be deleted here, either.

     

     

  31. Assumption is very much at the heart of this. Protein space is assumed to be so function-poor that evolution cannot traverse it incrementally and somehow the designer is also assumed to have insider info on where it resides.

    But as this paper shows (linked previously), protein space appears to be astonishingly function-rich. As the authors note, the 100-residue peptides they picked form a space that would take a mole of universes (6 x 10^23) to hold one molecule of each. Yet a ‘random’ library of 1.5 million sequences, which would take up less than a tenth of a single E. coli cell on the same scale, possessed FOUR sequences that were able to rescue function in nutritional mutants, out of 27 different strains tried.

    Now, it is fully acknowledged that the researchers weighted the game by choosing an algorithmic mode of protein composition that mixed hydrophilic and hydrophobic residues in a way that is known to favour protein folding – and the word ‘Design’ even appears in the title! They aren’t truly ‘random’. Nonetheless, they had no connection between the folding algorithm and the actual ‘fitness function’ by which successes were measured. They could hardly be accused of smuggling something in, simply by eliminating proteins that are unlikely to fold. The size of the modular ‘repeat’ units that ensure folding was 3-4 acids in length – and the specific acid did not matter nearly so much as its hydrophilic/hydrophobic character.

    This is effectively a ‘saltational’ scenario. It splatters about its local region of space by quite wide steps, were they generated by ‘natural’ RV … and yet in this unimaginably tiny corner of the space-of-all-100-acid peptides, this functionally untargeted search finds several functional analogues of modern proteins, of very different length and composition.

  32. From your link:

    It has been found that rabbits have a helix capping motif that dramatically lowers the propensity of prion formation

    Apparently the Designer loved rabbits more than the creatures made in her image.

  33. onlooker wrote (in response to gpuccio’s dismissal of genetic algorithms):

    So in one sense you are correct, GAs are designed. They are designed as models of naturally occurring mechanisms that have been repeatedly observed. Whatever label you apply to the type of selection taking place is immaterial to the fact that it is a model, however simple, of the real world.

    I hope I may provide a little perspective, as one who wrote his first GA 49 years ago.  Genetic simulations were first done by Nils Aall Barricelli in 1954 (on a very early computer) and were more noticeably developed by the Australian quantitative geneticist Alex Fraser in 1957.  Over the next 5 years they became widely used.  They have provided a lot of information about the interaction of multiple evolutionary forces.   gpuccio may say that “GAs are designed, and they are never a model of NS, which is the only relevant mechanism in the modern synthesis”, but gpuccio is quite wrong about that.  They are a fundamental tool in modeling all evolutionary forces, including natural selection.  An arbitrary dismissal by gpuccio is not going to stop evolutionary biologists making good use of GAs.

  34. One can see very clearly the fundamental misconceptions gpuccio and the others at UD are working with. They want the searches in any GA program to start with a completely randomly generated string (or whatever the system is) every time a new generation loop in the program is executed. In other words, they want to remove all interactions with the environment and all self-interactions within a system.

    Missed the first time? OK, generate another completely randomized string and start over again. Generated a string that met some part of the “target” criterion? OK, generate another completely randomized string and start all over again. Generated a system that met nearly all the criteria? OK, generate another completely randomized system and start all over again.

    The assertion that all sample spaces must be repeatedly sampled with a uniform, random sampling distribution is analogous to asserting that a crystalline solid must be generated within a plasma state. It is the kind of assertion that betrays those fundamental ID/creationist misconceptions about how the universe actually behaves.

    Modeling the effects of natural selection in GA’s requires that the program simulates the natural processes actually involved in molding successive generations of a system to fit the environment in which it is immersed. All GA programs incorporate a “law of nature,” whether an actual law that exists in nature or an “artificial law” in some alternate conception of a universe.

    The bottom line in every ID/creationist argument against genetic algorithms is that they are telling scientists that the laws of nature are irrelevant and scientists are not allowed to simulate the effects that these laws produce. ID/creationists are effectively telling the scientific community that they are cheating if they understand the physics, chemistry, or biological processes that work on matter.

    Thus, programs run on supercomputers that use the known forces of gravity, electromagnetism, strong force, and weak force to simulate galaxy formation are “illegal.” According to ID/creationist thinking, every time increment of such a program must start all over again at time zero with a completely randomized distribution of matter and energy. If galaxies don’t poof out of that, then that proves that galaxies have to be designed.

    This is the kind of argumentation we are seeing buried implicitly in gpuccio’s and all other ID caricatures of the physical world. They refuse to allow physical laws to be incorporated into computer simulations of the natural world. Everything must remain in a plasma state; and if complex systems of condensed matter don’t poof out of that plasma state, then the only alternative conclusion is that such systems can come into existence only by design.

    If scientists include some kind of physical law or laws into their genetic algorithms, then ID/creationists assert that scientists are really cheating by “designing” the systems that fall out of these programs.

  35. Missed the first time? OK, generate another completely randomized string and start over again. Generated a string that met some part of the “target” criterion? OK, generate another completely randomized string and start all over again.

    That was my reading of GP’s thesis, but I was afraid to say it because if I were wrong it would seem insulting.

    I have for the last couple of years thought GP doesn’t understand the physical basis of evolution or the nature of dynamic systems that are modified by feedback, but this seems insulting. So I have tried to limit my comments to questions regarding his understanding.

    I am unaware that any of these questions have been answered. Mostly they have been brushed aside.

  36. Not one of them understands the concept and importance of feedback in a “working system” as can be seen every time someone brings it up.

    Look at how badly William J Murray understood it.

    What I don’t understand is why kairosfocus with his electronics background, has not done any better with it.

     

  37. What I don’t understand is why kairosfocus with his electronics background, has not done any better with it.

    Isolated islands. KF is smart enough to realize that if you can bridge from function to function, evolution will work.

    That’s why duplication is so important. Nonessential sequences are free to mutate and explore nearby spaces.

    What neither KF nor GP seem to realize is that once freed from the burden of necessity, duplicate sequences can explore in many dimensions of functionality; they are not limited to refining existing function.

    It also seems likely to me that shifting evolution from inventing new proteins to regulation of existing functions has opened up many new dimensions of functionality, none of which can be anticipated from the sequence itself.
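
    As a hedged illustration of the duplication point, here is a toy Python sketch. “Function” is modeled crudely as staying within a couple of mutations of an arbitrary reference sequence, and every parameter is made up; real proteins are nothing this simple. The constrained copy can only accept mutations that preserve its function, while the redundant duplicate drifts freely.

```python
import random

ALPHABET = "ACGT"
L = 60
GENERATIONS = 5000

def mutate(seq):
    # One random point substitution (possibly silent).
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

def distance(a, b):
    # Hamming distance: a crude proxy for how far a copy has wandered.
    return sum(x != y for x, y in zip(a, b))

original = "".join(random.choice(ALPHABET) for _ in range(L))

constrained = original   # copy still carrying the original "function"
duplicate = original     # redundant copy, freed from that constraint

for _ in range(GENERATIONS):
    # Constrained copy: accept a mutation only if the sequence stays very
    # close to the original (a stand-in for purifying selection).
    candidate = mutate(constrained)
    if distance(candidate, original) <= 2:
        constrained = candidate
    # Duplicate copy: every mutation is accepted (neutral drift).
    duplicate = mutate(duplicate)

print("constrained copy drifted:", distance(constrained, original), "positions")
print("duplicate copy drifted:  ", distance(duplicate, original), "positions")
```

    The duplicate ends up far from the original sequence, which is the sense in which it is “free to explore”; whether the territory it explores contains new functions is precisely the empirical question the “isolated islands” dispute is about.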

  38. Joe: “So write a GA in which there isn’t any replication nor looping and see how far you get. “

    Did not have to wait long! 🙂

  39. Petrushka said:

    That was my reading of GP’s thesis, but I was afraid to say it because if I were wrong it would seem insulting.

    I have for the last couple of years thought GP doesn’t understand the physical basis of evolution or the nature of dynamic systems that are modified by feedback, but this seems insulting. So I have tried to limit my comments to questions regarding his understanding.

    My own assessment of these endless “discussions” with the UD crowd is that the people here who have been over there have been effectively intimidated into being too polite with them (I say that even though I understand the value of trying to get them to talk about their “ideas”). You can see that in the way that the UD crowd will accuse people of being some kind of horrible demon for just pointing out the grotesque misconceptions or misrepresentations that ID/creationist advocates are using. You can see it in the constant insults coming from professional insulters like Joe G over there. They have no ideas, so they try to tie the hands and feet of their opponents so their opponents can’t hit back at ID/creationist insults.

    Going all the way back to the debates with Morris and Gish, the ID/creationist tactic has been to take gratuitous umbrage at directness on the part of their “enemies” in debates. Gish would insult and taunt repeatedly with his caricatures of science, but if his opponent didn’t walk on eggshells in a reply to the insults, Gish and all ID/creationist advocates would accuse their opponents of being nasty and personal. Just pointing out that the caricatures were caricatures would generate such a response. ID/creationists make frequent use of the persecution complex they all have.

  40. gpuccio writes:

    You are completely wrong here. The algorithm is a designed algorithm. You and your friends may believe that it is a “model of the modern synthesis”, or of part of it. Unfortunately, that is a completely unwarranted statement.

    GAs are designed, and they are never a model of NS, which is the only relevant mechanism in the modern synthesis. Your algorithms for Lizzie’s example are no exception. They are based on controlled random variation and intelligent selection. They are forms of intelligent design.

    Gpuccio suffers from the same cognitive blind spot that bedevils most of his fellow IDers: a chronic inability to distinguish the model from the thing being modeled.

    Gil Dodgen memorably (and comically) demonstrated this blind spot with his assertion that:

    If the blind-watchmaker thesis is correct for biological evolution, all of these artificial constraints must be eliminated. Every aspect of the simulation, both hardware and software, must be subject to random errors. [Link, Link]

    The funniest part is that if gpuccio were actually right, then it would automatically invalidate all of the work done at the Evolutionary Informatics Lab. After all, Dembski and Marks are using designed models to illustrate the limitations of unguided evolution. According to gpuccio, those models are not, and cannot be, valid.

    It makes me wonder how much of the confusion among ID supporters is caused by a simple inability to think abstractly.

  41. I don’t feel hobbled by the necessity of being overly polite. I think it’s a discipline, like rhyming in poetry, that adds value to one’s argument.

    When honorable people are banned from the “premier” Dembski-founded ID discussion site, and the JoeGs are given free rein, it reflects on those who argue that side of the debate.

  42. I entirely agree with that, and I have also learned that directness is necessary to bring out the intellectual laziness of people who always want to be respected and soothed for sloppy thinking while pretending to be able to understand advanced scientific notions when they can’t even get high-school-level science right.

  43. I will go a step further and say I consider the “isolated island” conjecture to be scientific in the sense that it can be disconfirmed by evidence.

    Where I part company with GP and KF and Douglas Axe is in the interpretation of the evidence. I think the literature rather handily supports the notion that you can get here from there. I think the approaches of Thornton and Lenski are models for exploring this problem. There are the feathered dinosaurs and Tiktaaliks of genomics; they demonstrate that when you look for gap fillers, you find them.

    So I think the isolated island conjecture is without merit, even though it is structurally scientific.

  44. Gpuccio, to me:

    But I do like respect. Essentially, for me respect means trying to really understand what other people are saying, if only it is possible, and expressing frankly what we think of that. As much as human limits allow, obviously…

    If you have followed what I say about you behind your back, you will know that I respect your intelligence and I think your argument is worthy of discussion. It’s really the only line of ID reasoning that I consider interesting.

    You will have to forgive me if I think your conclusions are wrong and your attitude toward researchers like Lenski dismissive. I find your argument to be the molecular biology equivalent of the “no intermediate fossil” argument.

    Interesting for a while, but seriously eroded by the accumulation of evidence. Your line of argument has parallels in other branches of science, and I cannot think of a single instance in the history of science where the conjecture of interfering disembodied entities was fruitful.

  45. Joe: “However just looking at a computer bus there is no way to predict what will come next, not even if you knew what program was running. “

    Toronto: “kairosfocus and I disagree.”

    This statement is directed at Joe and states that “kairosfocus and I” disagree with Joe.

    kairosfocus: “[BTW, I never said that one cannot predict or observe what is going on on a computer bus, or the like.

    Exactly, kairosfocus agrees with me that one can predict what is going to happen next on the bus, and disagrees with Joe, who believes one can’t.

     

  46. For varying degrees of knowing. Knowing the program that is using the bus makes successful prediction more likely, but not certain.

    I’ve sometimes described evolution and learning as methods of dealing with a partially predictable future.

  47. Joe: “I have a strong background in electronics and understand feedback. “

    Sadly, gpuccio and UPB don’t understand the concept at all.

    What I don’t understand is why you haven’t put it in biological terms for them since you do understand feedback is a vital part of the operation of systems.

    Very simply, a drop in “population” is negative feedback while a rise in “population” is positive feedback for “evolution”.

    How can you, someone who understands the importance of this element, ignore it when we see it in biology in such a glaring way?

    How could you have missed it Joe?
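
    For readers who have not met the feedback framing before, the standard logistic growth model is the simplest place to see both kinds of feedback at once. The sketch below is generic Python with purely illustrative numbers, not a model of any case discussed in this thread: reproduction is the positive feedback, and crowding near the carrying capacity is the negative feedback.

```python
# Logistic population growth: N(t+1) = N(t) + r * N * (1 - N / K)
# The r * N factor is positive feedback (more individuals, more births);
# the (1 - N / K) factor is negative feedback (crowding slows growth).

r = 0.1      # intrinsic growth rate per step (illustrative)
K = 1000.0   # carrying capacity (illustrative)
N = 10.0     # starting population

for step in range(121):
    if step % 20 == 0:
        print(f"step {step:3d}: population ~ {N:.0f}")
    N += r * N * (1 - N / K)
```

    The population settles near K because any deviation is fed back into the growth rate, which is the basic loop the comment above is gesturing at.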

  48. The next bus operation is not entirely predictable, because there are sometimes race conditions, and in complex systems bus arbitration is not entirely deterministic, especially in systems with many buses, where a signal on one bus ultimately results in a signal on another and the outcome depends on indeterminate factors.

    Think of a cop directing traffic at a busy intersection. Even though you might be assured of no collisions, you cannot predict the order of the vehicles on the “other side” of the cop. 
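
    The cop-at-the-intersection analogy maps neatly onto a toy arbitration sketch. In the Python below (all names invented for illustration), two “bus masters” run as threads and a lock plays the role of the cop: no two transactions collide, but the interleaving differs from run to run.

```python
import random
import threading
import time

bus_log = []                    # the order in which transactions win the bus
bus_lock = threading.Lock()     # the "cop": one transaction at a time

def master(name, count):
    for i in range(count):
        time.sleep(random.uniform(0, 0.001))   # variable request timing
        with bus_lock:                          # arbitration point
            bus_log.append(f"{name}:{i}")

threads = [threading.Thread(target=master, args=(n, 5)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(bus_log)   # no collisions, but the interleaving varies between runs
```

    Each master’s own sequence is perfectly predictable; the combined order on the shared bus is not, which is the limited sense of “unpredictable” being discussed here.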

  49. petrushka: “Knowing the program that is using the bus makes successful prediction more likely, but not certain. “

    Joe’s point was about randomness, that even with a known program, the bus activity is random and unpredictable.

    If an MCU is running with internal resources and I make sequential reads to an external data device, then only accesses to that memory segment will appear on the bus, since the executable code and other processes are within the MCU and not visible.

    If there is an on-board cache big enough to hold X bytes of fetched data, I can predict a cache miss or be able to proclaim, “There will be no bus activity at all!”, and I’d be right! 🙂
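
    Toronto’s cache point can also be sketched. The Python below models a made-up direct-mapped cache in front of an external bus (the geometry and access pattern are invented, not any particular MCU): once the geometry and the access pattern are known, the external bus traffic is largely predictable, and a buffer that fits in cache generates no bus activity at all on a second pass.

```python
# Toy direct-mapped cache in front of an external bus.
# All parameters are illustrative, not taken from any real part.

LINE_SIZE = 16                  # bytes per cache line
NUM_LINES = 64                  # 64 * 16 = 1024 bytes of cache

cache_tags = [None] * NUM_LINES
bus_transactions = 0

def read(addr):
    """Read one byte; an external bus transaction happens only on a miss."""
    global bus_transactions
    line = (addr // LINE_SIZE) % NUM_LINES
    tag = addr // (LINE_SIZE * NUM_LINES)
    if cache_tags[line] != tag:     # miss: fetch the line over the bus
        cache_tags[line] = tag
        bus_transactions += 1

# First pass over a 1 KiB buffer that fits entirely in the cache.
for addr in range(1024):
    read(addr)
print("first pass, bus transactions: ", bus_transactions)    # 64 line fills

# Second pass over the same buffer: every access hits, no bus activity.
before = bus_transactions
for addr in range(1024):
    read(addr)
print("second pass, bus transactions:", bus_transactions - before)   # 0
```

    Petrushka’s caveat still applies: with several bus masters, arbitration and race conditions can reorder the traffic, so what is predictable is which transactions appear, not always exactly when.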

     

  50. Gpuccio

    Do you ask what the probability is for an object subject to the gravitational field of the earth to fall down if it is not sustained by anything? Are you denying deterministic reasoning in science?

    I don’t ask because the probability is so very high – but it is not actually 1, as I am sure you know – classical mechanics is a very good approximation to reality – but it is not reality.

     

    In the same way all designed solutions have a probability that the designer exists and a probability that the designer achieves the target (which is <1 unless you are assuming an omnipotent designer!)

     

    I don’t agree. If a designer has the intention and power to implement a plan, and knows exactly how to implement it, there is no probability there: he will implement it, even if he is not “omnipotent”. Appropriate “potency” is more than enough to ensure a result.

    Are you really claiming that all designers can carry out everything they wish to with certainty? Come on. This is only possible if they are omnipotent. Otherwise there will always be the unanticipated problem.


     

    The existence of a designer is not something that can be evaluated probabilistically. For instance, the existence of a non physical designer is certainly the object for philosophical and cognitive application, but there is no way you can describe it in terms of probability. It is a judgement that implies intuition, cognition, reason, feeling, and probably many other things.

    But intuition, cognition, reason, feeling and many other things do not result in an assessment of how likely (i.e. probable) it is that that designer exists. What else could they result in?

    It is interesting that you use “plausibility”, and not “probability”, in this last statement. Perhaps there is hope, after all.

    Not really – “plausible” was shorthand for posterior probability based on prior probability and likelihood.

     

    I certainly try to examine the plausibility of the design solution, and find it completely plausible. But a judgement about plausibility is a blending of many things, and relies heavily on one’s general worldview, that is essentially a philosophical choice.

     

    So, I will not state that everybody must say that a design solution is plausible. For me, and many others, it is perfectly plausible. I only require that you guys admit that the only reason why it seems so implausible to you is that you are already committed to a materialistic, reductionist view of reality.

    Of course you believe in a specific designer that has omnipotent powers and, given that belief, it is the best explanation of everything. We could have a discussion about the plausibility of that specific design hypothesis. But I thought that ID was meant to be independent of any such belief? Surely the argument to design should work even for an atheist who does not believe there is a supernatural designer? Or are you saying ID depends on believing that such a designer exists?
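
    A short aside on the “posterior probability based on prior probability and likelihood” shorthand used in the exchange above. Written out, with D standing for the design hypothesis and E for the observed evidence (my symbols, not either commenter’s), it is just Bayes’ rule:

    $$P(D \mid E) = \frac{P(E \mid D)\,P(D)}{P(E \mid D)\,P(D) + P(E \mid \neg D)\,P(\neg D)}$$

    Whether meaningful priors can be assigned to a design hypothesis at all is, of course, exactly what the two commenters are disputing.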

     
