Gpuccio’s Theory of Intelligent Design

Gpuccio has made a series of comments at Uncommon Descent, and I thought they could form the basis of an opening post. The comments that follow were copied and pasted from Gpuccio’s comments, starting here.

 

To onlooker and to all those who have followed this discussion:

I will try to express again the procedure to evaluate dFSCI and infer design, referring specifically to Lizzie’s “experiment”. I will also try to clarify, while I do that, some side aspects that are probably not obvious to all.

Moreover, I will do that a step at a time, in as many posts as necessary.

So, let’s start with Lizzie’s “experiment”:

Creating CSI with NS
Posted on March 14, 2012 by Elizabeth
Imagine a coin-tossing game. On each turn, players toss a fair coin 500 times. As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

At the end of each round, each player computes the product of their runs-of-heads. The person with the highest product wins.

In addition, there is a House jackpot. Any person whose product exceeds 10^60 wins the House jackpot.

There are 2^500 possible runs of coin-tosses. However, I’m not sure exactly how many of that vast number of possible series would give a product exceeding 10^60. However, if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI. My ballpark estimate says it has.

That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.

However, starting with a randomly generated population of, say 100 series, I propose to subject them to random point mutations and natural selection, whereby I will cull the 50 series with the lowest products, and produce “offspring”, with random point mutations from each of the survivors, and repeat this over many generations.

I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.

However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?

I’ve done it in MatLab, and will post the script below. Sorry I don’t speak anything more geek-friendly than MatLab (well, a little Java, but MatLab is way easier for this)
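
Her MatLab script is not reproduced here, so what follows is only a minimal sketch, in Python, of the procedure she describes: a population of 100 random 500-toss series, fitness equal to the product of the runs of heads, the 50 lowest-scoring series culled each generation and replaced with mutated copies of the survivors. The single-bit-per-offspring mutation and the generation cap are assumptions of mine, not details from her post, and, like her own runs, it can stall at a local maximum short of 10^60.

```python
# Illustrative sketch of the coin-toss game described above (not Elizabeth's
# MatLab script). The single-bit mutation per offspring and the generation
# cap are assumptions; everything else follows her description.
import random

TARGET = 10**60
LENGTH = 500       # coin tosses per series
POP_SIZE = 100     # series per generation

def runs_product(series):
    """Product of the lengths of all runs of heads (1s) in a series."""
    product, run = 1, 0
    for toss in series:
        if toss:
            run += 1
        else:
            if run:
                product *= run
            run = 0
    if run:
        product *= run
    return product

def mutate(parent):
    """Copy the parent and flip one randomly chosen toss."""
    child = parent[:]
    child[random.randrange(LENGTH)] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(1, 200_001):
    population.sort(key=runs_product, reverse=True)
    best = runs_product(population[0])
    if best > TARGET:
        print(f"Jackpot at generation {generation}: product = {best:.3e}")
        break
    if generation % 10_000 == 0:
        print(f"generation {generation}: best product = {best:.3e}")
    # Cull the 50 lowest-scoring series; refill with mutated copies of survivors.
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE // 2)]
```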

 

Now, some premises:

a) dFSI is a very clear concept, but it can be expressed in two different ways: as a numeric value (the ratio between target space and search space, expressed in bits a la Shannon); let’s call that simply dFSI. Or as a categorical value (present or absent), derived by comparing the value obtained that way with some predefined threshold; let’s call that simply dFSCI. I will be especially careful to use the correct acronyms in the following discussion, to avoid confusion. (A small sketch of this calculation follows premise c.)

b) To be able to discuss Lizzie’s example, let’s suppose that we know the ratio of the target space to the search space in this case, and let’s say that the ratio is 2^-180, and therefore the functional complexity for the string as it is would be 180 bits.

c) Let’s say that an algorithm exists that can compute a string whose product exceeds 10^60 in a reasonable time.
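
For concreteness, here is a minimal sketch (mine, not gpuccio’s) of how premises a) and b) translate into numbers: dFSI as the negative log2 of the target/search ratio, and dFSCI as a yes/no call against a predefined threshold. The 2^320 target space is the illustrative figure used later at point 4), and the 150-bit threshold is the one chosen at point 6).

```python
# Minimal sketch of the dFSI / dFSCI calculation on the assumed figures.
import math

def dfsi_bits(target_space, search_space):
    """Functional information in bits: -log2 of the target/search ratio."""
    return -math.log2(target_space / search_space)

def dfsci(bits, threshold_bits):
    """Categorical value: present (True) only if the bits exceed the threshold."""
    return bits > threshold_bits

search = 2.0 ** 500   # all possible 500-toss series
target = 2.0 ** 320   # assumed size of the target space (illustrative)

bits = dfsi_bits(target, search)
print(bits)                 # 180.0
print(dfsci(bits, 150))     # True
```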

If these premises are clear, we can go on.

Now, a very important point. To go on with a realistic process of design inference based on the concept of functionally specified information, we need a few things clearly defined in any particular example:

1) The System

This is very important. We must clearly define the system for which we are making the evaluation. There are different kinds of systems. The whole universe. Our planet. A lab flask. They are different, and we must tailor our reasoning to the system we are considering.

For Lizzie’s experiment, I propose to define the system as a computer or informational system of any kind that can produce random 500-bit strings at a certain rate. For the experiment to be a valid test of a design inference, some further properties are needed:

1a) The starting system must be completely “blind” to the specific experiment we will make. IOWs, we must be sure that no added information is present in the system in relation to the specific experiment. That is easily realized by having the system assembled by someone who does not know what kind of experiment we are going to make. IOWs, the programmer of the informational system just needs to know that we need random 500-bit strings, but he must be completely blind as to why we need them. So, we are sure that the system generates truly random outputs.

1b) Obviously, an operator must be able to interact with the system, and must be able to do two different things:

– To input his personal solution, derived from his personal intelligent computations, so that it appears to us observers exactly like any other string randomly generated by the system.

– To input in the system any string that works as an executable program, whose existence will not be known to us observers.

OK?

2) The Time Span:

That is very important too. There are different Time Spans in different contexts. The whole life of the universe. The life of our planet. The years in Lenski’s experiment.

I will define the Time Span very simply, as the time from Time 0, which is when the System comes into existence, to Time X, which is the time at which we observe for the first time the candidate designed object.

For Lizzie’s experiment, it is the time from Time 0 when the specific informational system is assembled, or started, to time X, when it outputs a valid solution. Let’s say, for instance, that it is 10 days.

OK?

3) The specified function

That is easy. It can be any function objectively defined, and objectively assessable in a digital string. For Lizzie’s experiment, the specified function will be:

Any string of 500 bits where the product calculated as described exceeds 10^60

OK?

4) The target space / search space ratio, expressed in bits a la Shannon. Here, the search space is 500 bits. I have no idea how big the target space is, and apparently neither does Elizabeth. But we both have faith that a good mathematician can compute it. In the meantime, I am assuming, just for discussion, that the target space is 320 bits big, so that the ratio is 180 bits, as proposed in the premises.

Be careful: this is not yet the final dFSI for the observed string, but it is a first evaluation of its upper bound. Indeed, a purely random System can generate such a specified string with a probability of 1:2^180. Other considerations can certainly lower that value, but not increase it. IOWs, a string with that specification cannot have more than 180 bits of functional complexity.

OK?

5) The Observed Object, candidate for a design inference

We must observe, in the System, an Object at time X that was not present, at least in its present arrangement, at time 0.

The Observed Object must comply with the Specified Function. In our experiment, it will be a string with the defined property, that is outputted by the System at time X.

Therefore, we have already assessed that the Observed Object is specified for the function we defined.

OK?

6) The Appropriate Threshold

That is necessary to transform our numeric measure of dFSI into a categorical value (present / absent) of dFSCI.

In what sense does the threshold have to be “appropriate”? That will be clear if we consider the purpose of dFSCI, which is to reject the null hypothesis of a random generation of the Observed Object in the System.

As a preliminary, we have to evaluate the Probabilistic Resources of the system, which can be easily defined as the number of random states generated by the System in the Time Span. So, if our System generates 10^20 random strings per day, in 10 days it will generate 10^21 random strings, that is about 70 bits.

The Threshold, to be appropriate, must be many orders of magnitude higher than the probabilistic resources of the System, so that the null hypothesis may be safely rejected. In this particular case, let’s go on with a threshold of 150 bits, certainly more generous than needed, just to be on the safe side.
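
The arithmetic behind this point, as a small sketch (the 10^20 strings per day and the 10-day Time Span are the figures assumed above):

```python
# Probabilistic resources of the System, in bits, versus the chosen threshold.
import math

strings_per_day = 1e20
time_span_days = 10

random_states = strings_per_day * time_span_days   # 10^21 random strings
resources_bits = math.log2(random_states)          # about 69.8 bits

threshold_bits = 150   # chosen to sit far above the probabilistic resources
print(round(resources_bits, 1))          # 69.8
print(threshold_bits > resources_bits)   # True
```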

7) The evaluation of known deterministic explanations

That is where most people (on the other side, at TSZ) seem to become “confused”.

First of all, let’s clarify that we have the duty to evaluate any possible deterministic mechanism that is known or proposed.

As a first hypothesis, let’s consider the case in which the mechanism is part of the System, from the start. IOWs the mechanism must be in the System at time 0. If it comes into existence after that time because of the deterministic evolution of the system itself, then we can treat the whole process as a deterministic mechanism present in the System at time 0, and nothing changes.

I will treat separately the case where the mechanism appears in the system as a random result in the System itself.

Now, first of all, have we any reason here to think that a deterministic explanation of the Observed Object can exist? Yes, we have indeed, because the very nature of the specified function is mathematical and algorithmic (the product of the runs of heads must exceed 10^60). That is exactly the kind of result that can usually be obtained by a deterministic computation.

But, as we said, our System at time 0 was completely blind to the specific problem and definition posed by Lizzie. Therefore, we can be safely certain that the system in itself contains no special algorithm to compute that specific solution. Arguing that the solution could be generated by the basic laws of physics is not a valid alternative (I know, some Darwinist at TSZ will probably argue exactly that, but out of respect for my intelligence I will not discuss that possibility).

So, we can more than reasonably exclude a deterministic explanation of that kind for our Observed Object in our System.

7) The evaluation of known deterministic explanations (part two)

But there is another possibility that we have the duty to evaluate. What if a very simple algorithm arose in the System by random variation? What if that very simple algorithm can output the correct solution deterministically?

That is a possibility, although a very unlikely one. So, let’s consider it.

First of all, let’s find some real algorithm that can compute a solution in reasonable time (let’s say less than the Time Span).

I don’t know if such an algorithm exists. In my premise c) at post #682 I assumed that it exists. Therefore, let’s imagine that we have the algorithm, and that we have done our best to ensure that it is the simplest algorithm that can do the job (it is not important to prove that mathematically: it’s enough that it is the best result of the work of all our mathematician friends or enemies; IOWs, the best empirically known algorithm at present).

Now we have the algorithm, and the algorithm must obviously be in the form of a string of bits that, if present in the System, will compute the solution. IOWs, it must be the string corresponding to an executable program appropriate for the System, and that does the job.

We can obviously compute the dFSI for that string. Why do we do that?

It’s simple. We have now two different scenarios where the Observed Object could have been generated by RV:

7a) The Observed Object was generated by the random variation in the System directly.

7b) The Observed Object was computed deterministically by the algorithm, which was generated by the random variation in the System.

We have no idea of which of the two is true, just as we have no idea if the string was designed. But we can compute probabilities.

So, we compute the dFSI of the algorithm string. Now there are two possibilities:

– The dFSI for the algorithm string is higher than the tentative dFSI we already computed for the solution string (higher than 180 bits). That is by far the most likely scenario, probably the only possible one. In this case, the tentative value of dFSI for the solution string, 180 bits, is also the final dFSI for it. As our threshold is 150 bits, we infer design for the string.

– The dFSI for the algorithm string is lower than the tentative dFSI we already computed for the solution string (lower than 180 bits). There are again two possibilities. If it is nonetheless higher than 150 bits, we infer design just the same. If it is lower than 150 bits, we state that it is not possible to infer design for the solution string.

Why? Because a purely random pathway exists (through the random generation of the algorithm) that will lead deterministically to the generation of the solution string, with a total probability for the whole process that is higher than the probability implied by our threshold (IOWs, a complexity lower than 150 bits).

OK?
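
Pulling points 4) to 7) together, the decision rule gpuccio describes can be sketched as below. The 180-bit solution figure and the 150-bit threshold are the illustrative values from the premises; the dFSI of the best known algorithm is unknown and is simply passed in as a parameter.

```python
# Sketch of the decision rule: the dFSI attributed to the solution string is
# capped by the dFSI of the simplest known algorithm that can compute it, and
# design is inferred only if the capped value still exceeds the threshold.

def infer_design(solution_dfsi, algorithm_dfsi, threshold=150):
    """Return True if design is inferred for the solution string."""
    final_dfsi = min(solution_dfsi, algorithm_dfsi)
    return final_dfsi > threshold

print(infer_design(180, 400))   # algorithm more complex than the string -> True
print(infer_design(180, 160))   # algorithm at 160 bits, still above threshold -> True
print(infer_design(180, 120))   # a 120-bit algorithm exists -> False, no inference
```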

8) Final considerations

So, some simple answers to possible questions:

8a) Was the string designed?

A: We either infer design for it or we do not. In science, we never know the final truth.

8b) What if the operator inputted the string directly?

A: Then the string is designed by definition (a conscious intelligent being produced it). If we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative.

8c) What if the operator inputted the algorithm string, and not the solution string?

A: Nothing changes. The string is still designed, because it is the result of the input of a conscious intelligent operator, although an indirect input. Again, if we inferred design, our inference is a true positive. If we did not infer design, our inference is a false negative. IOWs, our inference is completely independent of how the designer designed the string (directly or indirectly).

8d) What if we do not realize that an algorithm exists, and the algorithm exists and is less complex than the string, and less complex than the threshold?

A: As already said, we would infer design, at least until we are made aware of the existence of such an algorithm. If the string really originated randomly through a random emergence of the algorithm, that would be a false positive.

But, for that to really happen, many things must become true, and not only “possible”:

a) We must not recognize the obvious algorithmic nature of that particular specified function.

b) An algorithm must really exist that computes the solution and that, when expressed as an executable program for the System, has a complexity lower than 150 bits.

I am absolutely confident that such a scenario can never be real, and so I believe that our empirical specificity of 100% will always be confirmed.

Anyway, the moment that anyone shows an algorithm with those properties, the design inference for that Object is falsified, and we have to assert that we cannot infer design for it. This new assertion can be either a false negative or a true negative, depending on whether the solution string was really designed (directly or indirectly) or not (randomly generated).

That’s all, for the moment.

AF adds “This was done in haste. Any comments regarding errors and omissions will be appreciated.”

263 thoughts on “Gpuccio’s Theory of Intelligent Design”

  1. I doubt that it is possible, especially in the case of Kolmogorov complexity. How would an algorithm figure out the shortest possible description of a string? Kolmogorov complexity is a mathematical abstraction rather than a practical tool.

    For Shannon information, one would have to treat the string as a stream of bits and determine their predictability by using a finite window. I suppose one could use some standard compression algorithm as a proxy and use the size of the compressed file as a fitness function (see the sketch below).

    Either way, it would be easy to get strings that have very high fitness by using a good generator of random numbers. 
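
    A minimal sketch of that proxy, assuming zlib as the “standard compression algorithm”: the length of the compressed data stands in for descriptive complexity, and random bytes already score near the maximum.

    ```python
    # Compressed size as a rough complexity proxy; random data is essentially
    # incompressible, so a good random generator already maximises this "fitness".
    import os
    import zlib

    def compressed_size(data: bytes) -> int:
        """Length in bytes of the zlib-compressed data."""
        return len(zlib.compress(data, 9))

    random_bytes = os.urandom(64)        # 512 random bits
    ordered_bytes = b"\x00" * 64         # 512 highly ordered bits

    print(compressed_size(random_bytes))   # near (or slightly above) 64
    print(compressed_size(ordered_bytes))  # far smaller
    ```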

  2. Petrushka,
    It’s an implementation of the coin toss game, aka Lizzie’s Experiment, as detailed in the original post.

    Imagine a coin-tossing game. On each turn, players toss a fair coin 500 times. As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.

    What you see on the screen are 100 runs of 500 coin tosses and the product of those scores; each minute another round is played, where the bottom 50% are overwritten by mutated versions of the top 50%.

  3. I wasn’t sure whose program was being emulated. I’m a bit concerned about the pace. Lizzie’s program ran for millions of generations, I believe.

    Also, the graphics prevent me from viewing the output on a tablet.

    I believe the justification for calling it CSI is that the set of strings satisfying the halting criterion is sufficiently sparse as to satisfy Dembski’s probability bound.
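
    Direct sampling illustrates why that sparsity can’t simply be measured empirically: a feasible number of random 500-toss series will essentially never clear 10^60, which only bounds the frequency from above (roughly 1 in the number of samples) and is nowhere near resolving Dembski-scale probabilities. A minimal sketch, reusing the product-of-runs scoring from the sketch near the top of the post:

    ```python
    # Monte Carlo probe of how sparse the >10^60 target is among random series.
    import random

    TARGET = 10**60

    def runs_product(series):
        """Product of the lengths of all runs of heads (1s) in a series."""
        product, run = 1, 0
        for toss in series:
            if toss:
                run += 1
            else:
                if run:
                    product *= run
                run = 0
        if run:
            product *= run
        return product

    samples = 10_000
    hits = sum(runs_product([random.randint(0, 1) for _ in range(500)]) > TARGET
               for _ in range(samples))
    print(hits, "hits out of", samples)   # expect 0: typical products fall far short
    ```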

  4. I’ll make a reduced version that’ll work better on, well, almost everything. I can, and will, speed it up, but it was intended just as a visual aid really 🙂 I’ll patch it up as commenters note issues on the linked atbc thread.

  5. Mung:

    There seems to be some misunderstanding about my program. Its fundamental purpose is to show how easy it is to generate CSI using a simple algorithm. Isn’t that what Lizzie’s program is designed to show as well?

    The purpose of Lizzie’s program is to show that a Darwinian process can generate CSI without using the fitness function to “smuggle in” information about the form of a solution.

    The fundamental question that needs to be asked and answered is, why does her program generate CSI while mine does not?

    I don’t think you’ve told us explicitly what the target of your program is. If olegt’s surmise is correct and your target is a sequence of all 1’s, then your program does generate CSI: it finds a single specified target pattern out of a search space containing 2^500 patterns.

    The problem is not that your program doesn’t generate CSI. It’s that an ID proponent might try to argue that because your fitness function rewards long runs of 1’s, it is in effect telling the program “make the runs longer.” In other words, the IDer might argue that your fitness function is delivering too much information and telling the program how to generate a solution.

    I don’t think that it is, but Lizzie skirts that problem entirely by using a fitness function that clearly does not tell the program anything about the form of the solutions. A human can look at your fitness function and instantly see that longer run lengths will increase fitness. It’s obvious. The same is not true of Lizzie’s fitness function. If you increase a particular run length in Lizzie’s program, you may end up decreasing another.  Will the fitness increase?  It depends.  It’s not immediately obvious to a human.  The mutation engine certainly doesn’t figure it out and take advantage of the knowledge.

    Also, your complaint about the runtime of Lizzie’s program is easily addressed. I wrote a C program that uses her fitness function with a parameterized population size and mutation rate. It achieves a solution in less than a minute.

  6. Yeah, I guess I didn’t think that through!

    It does raise the question why people would think any informatic measure – including CSI – is appropriate to treatment of digital biological strings in isolation. The unaddressed first-level layer for proteins is folding – a property deriving from all the atoms and bonds in the string, and their constraint upon it adopting a particular lowest-energy configuration in a ‘reasonable’ time ‘sufficiently’ often.

    Obsessing over primary sequence emphasises the peptide bond, as the preserver of that primary sequence, but ignores all other interactions. Proteins are made sequentially, but they certainly don’t fold sequentially (even though they begin folding as they are being extruded, a whole peptide can be stretched out and will then ‘twang’ back into its favoured configuration).

    Holding key atoms in space is not easily deducible from primary sequence, and many equivalent ‘functional’ neighbours can derive from very distant sequence-neighbours, likewise close sequence-neighbours can flip between very different functional regions. Assignment of the label CSI, and dismissing the ability of ‘evolutionary’ search (selectively neutral and non-neutral reproduction + RV) to find it needs to take this into account.

  7.

    gpuccio:

    So, Szostak could easily engineer a protein with a strong binding to ATP (however useless in any biological context) because he knew what he wanted (an ATP binding protein), he measured and selected that function at very trivial levels in random sequences, he amplified, mutated, and intelligently selected the resulting sequences for that function. Good design, and very bad interpretation of the results, still echoed by yourself for bad reasoning.

    So he caused the mutation?

    But if the organism existed in the wild, and changing environmental conditions favoured a stronger binding to ATP, then either the mutations could never happen, or if they did then they would not result in a survival advantage – even if conditions favoured those mutations?

    algorithms are not conscious. They have no experience of purpose. Therefore, they cannot recognize function, unless in their code something has already been defined as “functional”.

    Why do they need to be – A GA that rewards the efficient progress of a robot across a landscape does not need to recognise the purpose and function of a neural oscillator in generating leg motion, or understand how the distribution of mass in a limb affects gait efficiency. More efficient walkers have more offspring – that is all the reproductive element of such a GA does, and that is the extent of its ‘knowledge’.

    So, Lizzie’s algorithm can compute answers to the question that is already embedded in it: it can do nothing else. My pi computing algorithm can compute pi: it can do nothing else.

    Is the physical morphology and neural architecture of the robot embedded within a GA whose fitness function consists of the sum of two variables – Distance walked and the inverse of energy consumed?

    Bear in mind that the GA doesn’t know what these two variables represent – It doesn’t even know that there are two variables – it just gets a number.

    In some algorithms, the function can be defined more generically, so that they will be more flexible in their performance. But a new function, that is not covered by the definitions embedded in the algorithm, will never be recognized by the algorithm, and therefore no dFSCI related to that new function will ever be computed by the algorithm, because the algorithm cannot recognize that function.

    The most generic fitness function for a GA would be “number of offspring produced”. For the walking robot it is “number of offspring produced is proportional to efficient locomotion”. I would quibble about calling this ‘function’ as you do – what is specified here is a behavioural outcome. The functions that produce those outcomes are not defined.

    Of course the robot will never fly – or will it? – If the simulated environment does not explicitly forbid flight, and flight offers a reproductive advantage, then you might well get flight. – In fact you get some surprises; I’ve seen walking robots that evolved to fly in simulations when the GA found and exploited an unknown flaw in the simulation.

    Try taking a good look at Karl Sims’ work here: http://www.karlsims.com/evolved-virtual-creatures.html

    The physical form and neural controllers of these ‘creatures’ are generated at random; the fitness function is, in most cases, just “distance travelled in its lifetime”. The functional parts of the creatures that evolved effective locomotion are not specified in the fitness function – it only specifies the behavioural outcome.

  8. gpuccio,

    I see that you’re back after a brief hiatus.  Could you please address my comment about Tierra?  I believe it meets the criteria you set forth for modeling natural selection and I’d like to understand how you would measure functional complexity in that environment.

    Thanks.

     

  9. gpuccio,

    I was thinking about your claims in your post 910 of the original UD thread. In my previous response I was so focused on the details that I didn’t recognize that your saying that a more realistic example would have a cutoff of 10^58 completely misses the point and, inadvertently I’m sure, is an attempt to move the goalposts.

    In fact, you are tacitly admitting that a deterministic mechanism can generate significant functional complexity. Your new question is whether selectable intermediates exist in real-world fitness landscapes. That’s interesting, but still a different issue.

    Ultimately the question is whether or not observed mechanisms of evolution can generate functional complexity that meets the threshold for dFSCI, whether or not you call it dFSCI.  When you abstract away the implementation details, you find that strings specifying a product computed according to a particular algorithm, a path for a traveling salesman, behaviors of virtual vehicles, or the program of a digital organism all have significant functional complexity and all came about through mechanisms we observe operating in the real world.  This directly contradicts your claim that we only see such complexity from humans.

  10. gpuccio,

    To Zachriel and onlooker (at TSZ):

    I realize only now that by mistake I have conflated in my answer #341 to Zachriel comments made by Zachriel and comments made by onlooker. I humbly apologize to both for that.

    While you may owe Zachriel an apology, you certainly don’t owe me one. I am pleased that my writing can even be accidentally compared to his, particularly since I lack the panache to use the royal we.

  11. gpuccio,

    In response to your 362 at UD — I’m catching up from a couple of days away from the discussion.

    Heh. You couldn’t have stated the God of the Gaps more explicitly. Per your own statements, there are some sequences with “functional complexity”, and some of these sequences have known causes! But you still conclude that those that don’t must be designed.

    Complete nonsense.

    I don’t understand your reference to known causes. Either you misunderstand, or you don’t even read with a minimal attention what I write.

    The “known causes” have nothing to do with the assessment of dFSCI.

    Yes, they do. According to your own definition, they are essential to the determination of dFSCI. You show this, and contradict yourself, in your very next statements:

    The requisites to assess dFSCI are two (as I have said millions of times):

    a) High functional information in the string (excludes RV as an explanation)

    b) No known necessity mechanism that can explain the string (excludes necessity explanation)

    See that word “known” in b)? dFSCI is, by definition, a claim about knowledge (or, conversely, ignorance).

    The “known causes” enter the scene only when we want to test the procedure against real examples. So, someone takes n strings of sufficient length whose origin he knows because he was responsible for their collection. Let’s say that 5 strings are taken from books, of which we know the author. 5 strings are generated by a random generator.

    Then another person, who does not know the origin of the 10 strings, evaluates dFSCI in them. He will correctly attribute dFSCI to the first 5, and infer design. Take for example the paragraph about Shannon’s biography from Wikipedia. The questions are:

    a) Is the dFSI of the string high? Answer: Yes.

    b) Do we know a necessity mechanism that can output that paragraph? Answer: No.

    So, we infer that the piece was written by a designer. And we are right. The first person, who collected the strings, knows that it was written by someone, and can confirm that the inference is correct.

    For the 5 randomly generated strings, I will not be able to recognize any function (meaning) in them, and I will not infer design. Correctly. The first person will confirm that they were generated randomly, without any intelligent design.

    A more interesting test would be the strings used as solutions to Lizzie’s problem. Let’s say you are given two strings, both of which represent a solution to the problem. One was generated by a human who thought about the problem for a bit and wrote down his best solution. The other is the output of one of the GAs mentioned in that thread. Since both are solutions to the problem, as previously agreed both have high functional complexity.

    Do you consider the string created by human thought to have dFSCI and the one generated by the GA to not have dFSCI? Please explain your reasoning.
