A modest proposal for detecting design

I’d like to thank everyone who participated in the recent Max’s demon thread. It might be helpful to revisit that OP for context before continuing on to what follows here: http://theskepticalzone.com/wp/maxs-demon-a-design-detection-riddle/

As promised, and for what it’s worth, I’d now like to submit my proposal for a method of detecting design in situations like Max’s demon, where instead of looking at a single isolated artifact or event we are evaluating a happening that is extended spatiotemporally in some way.

I believe that looking at these sorts of phenomena allows us to sidestep the contentious probability discussions that have plagued questions of design inference in the recent past. Usually those discussions involve a single highly unlikely object or event that is categorized as design or coincidence only in retrospect. My hope is that in some cases we can move to determinations based on the correspondence of ongoing observations to predictions and expectations.

Before we begin I’d like to once again clarify a few terms. For our purposes, design will be defined as the observable effects of “personal choice,” and “personal choice” is simply the inverse of natural selection as understood in Darwinian evolution. That is, we can say that something is the result of personal choice (i.e., designed) when it is ultimately prescribed by something other than its immediate local environment.

With that caveat in mind, I will detail how my method would work with Max’s demon.

My method

The first step is to build the best model (M) that we can given the information we have right now. It could be a physical copy of Max’s original container (O), or a computer simulation, or perhaps just a simplified mental description that includes all the known relevant details. The goal is for the model (M) to represent what we know about (O) as it pertains to the specific phenomenon (P) we are looking at, namely a persistent temperature difference between the two chambers of (O). It’s very important to specify ahead of time what P is, so as to narrow our focus.

Next I would look to remove P in some way. This could be done by subjecting both my model (M) and the container (O) to temperatures cold enough to completely remove the observable differences between the two chambers in each. Absolute zero for an extended time should do the trick.

At this point I would allow (M) and (O) to thaw for a specified period of time and record any temperature difference that arises between the two chambers in each of (M) and (O). The difference between these two numbers, (P of O) - (P of M), gives us a rough approximation of any relevant information present in (O) that is missing in (M) at this particular moment in time.

Now we can repeat the process again and again to look for any variation in P. If we record the results of these trials next to each other, we get a number sequence showing the information difference in P for each consecutive trial. Below are some examples of what such a sequence might look like.

0,0,0,0,0,0,0…
7,7,7,7,7,7,7…
7,8,9,10,11,12…
1,4,1,4,2,1,3…
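
For readers who prefer code to prose, here is a minimal Python sketch of the trial loop. Everything in it is an illustrative stand-in of my own: the systems are bare dictionaries and the measurement function is a noisy stub, since the real procedure is whatever cooling, thawing, and measuring suits (O) and (M).

    import random

    def measure_p(system):
        # Hypothetical stand-in for measuring P (the temperature difference
        # between the two chambers) after the thaw. The "offset" field and
        # the gaussian noise are purely illustrative assumptions.
        return system["offset"] + random.gauss(0.0, 0.1)

    def run_trials(original, model, n_trials):
        # Record the rounded difference (P of O) - (P of M), one entry per
        # independent cool/thaw/measure cycle.
        differences = []
        for _ in range(n_trials):
            differences.append(round(measure_p(original) - measure_p(model)))
        return differences

    # A container with a persistent 7-degree difference that the model fails
    # to reproduce yields the second example sequence:
    print(run_trials({"offset": 7.0}, {"offset": 0.0}, n_trials=7))
    # [7, 7, 7, 7, 7, 7, 7]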

In the first sequence we see no measurable difference at all between (P of O) and (P of M). Therefore there appears to be no compelling reason to infer design for P in a process that yields this sequence. Since, as far as we know, (M) has no ongoing personal choice involved, there is no compelling reason to infer that (O) does either, at least in the context of the singular P we are evaluating. Of course that does not imply that there is no observable design influence in another aspect that we are not at present evaluating, or that design is not involved in the system as a whole.

Moving on to the second sequence, it’s interesting to realize that we don’t have to know specifically what is causing the 7-degree difference between (P of M) and (P of O) in order to make a determination. All we have to do is add 7 to each instantiation of (M), and we see the same repeating zeros we saw in number one. And just as with number one, we can discount the design inference as superfluous with regard to P.

The same goes for the third sequence: we simply add one each time we repeat the trial, and we are again left with the first sequence. We can make similar simple modifications to (M) to cause most sequences to morph into repeating zeros, demonstrating that a design inference is not warranted.
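
To make these permissible tweaks concrete, here is a minimal sketch of both adjustments (again, the names and numbers are my own illustrations). The key point is that each is a fixed rule coded into the model at the outset; in particular, the incrementing version never needs to be told the rank of the current trial, it only carries its own state forward from run to run.

    def tweak_constant(model_reading):
        # Sequence two: add a constant 7 to every instantiation of (M).
        return model_reading + 7

    def make_tweak_increment(start=6):
        # Sequence three: the model adds one more on each run. It keeps its
        # own running state and is never handed the rank of the trial.
        state = {"t": start}
        def tweak(model_reading):
            state["t"] += 1
            return model_reading + state["t"]
        return tweak

    print([obs - tweak_constant(0) for obs in [7, 7, 7, 7, 7, 7]])
    # [0, 0, 0, 0, 0, 0] -- the second sequence collapses to the first

    tweak = make_tweak_increment()
    print([obs - tweak(0) for obs in [7, 8, 9, 10, 11, 12]])
    # [0, 0, 0, 0, 0, 0] -- and so does the third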

However, when we come to sequences like number four, this is not as easy, and therefore personal choice becomes a live option for explaining P in the particular spatiotemporal dimension reflected in the sequence.

There is no reason to expect that this sequence will terminate on its own, and there is no obvious way that we can modify (M) to make it terminate. It is of course possible that the difference we see is nothing but random noise, so we need to look for some sort of recognizable pattern in the sequence before we can say we are justified in inferring design.

*A recognizable pattern is just one that allows us to predict the next digit in the sequence before running the trial.

In the case of the fourth sequence, it turns out that the elements are precisely the digits of the decimal expansion of the square root of two, an irrational number.

There is no clear way to modify (M) to produce this sequence in full that does not involve input from something outside the local environment of the container. At a minimum we must assume (O) was frontloaded to react in a certain prescribed but unexpected way to a trial that was only conceived after it was constructed. That sort of input must have come from something that transcends the immediate local environment of the container. Therefore, as long as the pattern persists, whatever is causing P tentatively meets our criteria for being the result of design.
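
For the curious, here is a small sketch of how one might check the observed differences against the square root of two, digit by digit. The observed values are just the fourth sequence above, and the check is of course only as good as the digits observed so far.

    from decimal import Decimal, getcontext

    def sqrt2_digits(n):
        # First n digits of the decimal expansion of sqrt(2): 1, 4, 1, 4, 2, ...
        getcontext().prec = n + 5  # a few guard digits
        return [int(d) for d in str(Decimal(2).sqrt()).replace(".", "")[:n]]

    observed = [1, 4, 1, 4, 2, 1, 3]  # the fourth example sequence
    prediction = sqrt2_digits(len(observed) + 1)
    # A recognizable pattern lets us name the next digit before the trial runs:
    print(prediction[:-1] == observed, "next digit should be", prediction[-1])
    # True next digit should be 5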

In short: if a sequence produced by our comparison of (O) with (M) yields a persistent recognizable pattern that can’t be duplicated convincingly by making an adjustment to (M), then it’s my contention that we can infer design for P in more than a purely subjective way. That is it in a nutshell.

Few sequences are as cut and dried as the square root of two. In our everyday experience, the more sure we are that there can be no satisfactory modification to our model to eliminate the recognizable pattern we see, the more confident we are that the phenomenon we are evaluating is the result of design.

For example, a sequence like the one below

0,0,1,0,0,2,0,0,3… seems to imply design, in that every third trial yields an increasing difference. But it is still open to debate, since we don’t know for sure whether the sequence reflects an irrational number or will eventually repeat or terminate. The longer a recognizable pattern continues without repeating, the more confident we can be in our design inference, but only when we know a particular sequence is irrational can we be certain.
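
A finite run of trials can never settle that question on its own; all we can do is check whether a cycle has shown up yet. Here is a rough sketch of that kind of check, purely illustrative:

    def repeating_period(seq):
        # Smallest period p such that the tail of the observed sequence looks
        # like two back-to-back copies of a length-p block, or None if no
        # cycle is visible yet. This is only a consistency check on the
        # prefix; it can never prove the full sequence repeats or doesn't.
        for p in range(1, len(seq) // 2 + 1):
            tail = seq[-2 * p:]
            if tail[:p] == tail[p:]:
                return p
        return None

    print(repeating_period([7, 7, 7, 7, 7, 7, 7]))        # 1: a visible cycle
    print(repeating_period([0, 0, 1, 0, 0, 2, 0, 0, 3]))  # None, so far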

Well, there you have it.

I don’t think that my method is anything new or revolutionary; it’s simply an attempt to make more explicit and structured the informal common-sense approach that we use all the time when inferring design in these types of situations. Also I want to point out that I don’t need to call my method scientific; it’s just important to me that it be reasonable, useful, and repeatable.

Most of all, I want to emphasize that my method is not meant to be some sort of argument for the existence of God. His existence is self-evident and unavoidably obvious, and not to be proved by some puny little human argument. By their very nature such arguments inevitably lead only to foolish human hubris and arrogance instead of any kind of genuine knowledge or wisdom.

We can discuss places other than Max’s demon where this method might prove useful, and we can get into some possible implications in areas like evolution, cosmology, and artificial intelligence in the comments section, if you like.

As usual I apologize ahead of time for spelling and grammar mistakes and welcome any constructive criticism as to clarity or content.

Peace

267 thoughts on “A modest proposal for detecting design”

  1. Alan Fox:
    fifthmonarchyman,

    I’ve already told you, there’s no difference. They’re the same process.

    Actually I am not so sure. I like FMM’s idea of relating design to choice. In artificial selection, choices are made. In natural selection, not so much. See it as an active filter versus a passive one.

  2. fifthmonarchyman: Here is where you are wrong. The choice is to exhibit that pattern rather than another or none at all.

    If the choice can be limited to the demon’s initial decision to manipulate the apparatus so that the temperature differences follow the decimal expansion of the square root of 2, you ought to be perfectly fine with the concept of theistic evolution where the deity arranges the initial conditions and then lets nature take care of the rest.

    Are you?

    Again, you can’t do that without importing something beyond the local environment into the model, namely the rank of the next trial in the sequence.

    It’s precisely this importation of information beyond the local environment that we are looking for when we infer design.

    You still haven’t defined what you mean by the ‘local environment’. Why is the person who builds the model not part of that? Models don’t fall from the sky ready-formed; they have to be constructed. Your thought experiment would fall flat on its face if there weren’t a modeler involved from the outset, so arbitrarily excluding the modeler once the process is up and running seems like cheating to me. Who is going to make the predictions if not the modeler?

  3. Alan Fox: Apologies if this is inconvenient for your proposal.

    It’s not really inconvenient for my proposal if a minority of folks think that there is absolutely no difference between natural selection and personal choice. Like I said, I’m not addressing those people.

    I suppose I could try to show how your position makes Darwinian evolution incomprehensible as a concept, but I’m just not interested in wasting my time on such an idiosyncratic position.

    My method is really for the rest of humanity.

    peace

  4. faded_Glory: you ought to be perfectly fine with the concept of theistic evolution where the deity arranges the initial conditions and then lets nature take care of the rest.

    I have no problem with evolution at all, theistic or otherwise. Neither does ID as far as I can tell.

    My problem is with mindless evolution. It just does not seem to be a reflection of what I see in the sweep of animal life on this planet.

    Besides, my method is not really about evolution per se, and I have only limited suggestions on how to utilize it for looking at evolution. It would have to be in things like convergence, where the same phenomenon is arrived at repeatedly in different lineages.

    peace

  5. faded_Glory: You still haven’t defined what you mean by the ‘local environment’. Why is the person who builds the model not part of that?

    Because the model is meant to replicate a container which supposedly has no input from a person outside itself.

    By local environment I mean only the things that are in present physical contact with the phenomena we are evaluating. Spooky action at a distance is not allowed.

    I’m not sure if it’s helpful when thinking about this, but for me it’s important that each trial you conduct be independent of the others. When trial 4 influences the parameters of trial 3, we are violating the premise of the method. I can elaborate if that does not make sense.

    faded_Glory: Who is going to make the predictions if not the modeler?

    The model makes the predictions; inputs map to outputs directly.

    If you must go beyond the model to make your predictions then you are in the realm of design as we are defining it.

    peace

  6. fifthmonarchyman: I have no problem with evolution at all, theistic or otherwise. Neither does ID as far as I can tell.

    My problem is with mindless evolution. It just does not seem to be a reflection of what I see in the sweep of animal life on this planet.

    Ok, but your example of letting the demon use the expansion of the square root of 2 is a really poor example of introducing choice in the process. Like I said, this is a wholly determined string of digits and there is no choice involved. The only choice is at the outset when he opts for this particular way of manipulating the device.

    So we can go even further than theistic evolution and declare you a Darwinist with front loading, because your example quite nicely reflects the final words of Darwin in the Origin of Species:

    “There is grandeur in this [natural selection] view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.”

  7. fifthmonarchyman,

    I still see a muddle in your concept of the ‘local environment’. In your third example, the one where the temperature difference grows by 1 at each step, the model also needs to be tweaked by someone or something ‘outside the local environment’ to make it predict accurately. Arguably also in the second example of a constant difference: how can the model itself add or subtract a specific value unless something or someone instructs it to do so?

    You either must allow the model to be tweaked, or you don’t. It is unclear why some tweaks are permissible and others are not.

  8. faded_Glory: The only choice is at the outset when he opts for this particular way of manipulating the device.

    I think that you are confused as to choice; it’s not about quantity. You either have choice or you don’t. It’s not as if three choices are more choice-full than two. You either choose or you don’t.

    Often radically different outcomes can be produced by minor changes chosen at the outset of a process.

    faded_Glory: the one where the temperature difference grows by 1 at each step, the model also needs to be tweaked by someone or something ‘outside the local environment’ to make it predict accurately.

    No, you simply add each time you run the model; you don’t need to know where you are in the sequence. You could be at the first trial or the seventy-first. If you predict just one digit you will predict them all in this example.

    faded_Glory: how can the model itself add or subtract a specific value unless something or someone instructs it to do so?

    Simple addition, all coded at the outset:

    T(n) = T(n-1) + 1

    peace

  9. faded_Glory: So we can go even further than theistic evolution and declare you a Darwinist with front loading, because your example quite nicely reflects the final words of Darwin in the Origin of Species:

    Again, ID is not anti-evolution. Most ID folks I know are perfectly happy with having the design we see front-loaded into the system. I have no problem with Darwin’s insight seen as a complement to design, and I don’t think most IDers would.

    The problem is when his interesting insight is seen as the cause of the design we see in biology.

    peace

  10. faded_Glory: Actually I am not so sure. I like FMM’s idea of relating design to choice. In artificial selection, choices are made. In natural selection, not so much. See it as an active filter versus a passive one.

    I’ll grant you that where human plant and animal breeders are involved, the process is faster at filtering out alleles. It’s still the same process.

    No matter, let’s see where FMM ends up.

  11. fifthmonarchyman: Most ID folks I know are perfectly happy with having the design we see front-loaded into the system

    If it had any basis in scientific reality, perhaps. Where DNA sequences are not under purifying selective pressure, they degrade due to the effect of genetic drift. Front-loading is a vain hope.

  12. Alan Fox: the process is faster at filtering out alleles. It’s still the same process.

    “the process is faster but it’s still the same process.”

    Do you not even see the bare contradiction in your statement?

    Alan Fox: No matter, let’s see where FMM ends up.

    I thought I sensed a little fear of being trapped in your contention that there was no difference between natural and artificial selection. That is why I did not take it too seriously.

    Trust me I’m not trying to trap anyone I’m just working through an idea.

    peace

  13. Alan Fox: Where DNA sequences are not under purifying selective pressure, they degrade due to the effect of genetic drift.

    When did anyone say that it was DNA sequences that were frontloaded?

    It’s quite possible that DNA was not even involved at the beginning of life. I’m much more apt to think that if frontloading was involved, it was in the natural laws or founding conditions of the universe.

    peace

  14. fifthmonarchyman: It’s quite possible that DNA was not even involved at the beginning.

    Most likely in my view. Doesn’t alter the point that selective pressure keeps genomes from deteriorating.

  15. fifthmonarchyman: Trust me I’m not trying to trap anyone I’m just working through an idea.

    Just thought I’d save you some wasted effort. Please feel free to carry on.

  16. Alan Fox: There isn’t one.

    A process can’t be “faster” and “the same” at the same time and in the same respect.

    Your claiming otherwise is pretty much a textbook example of a contradiction.

    peace

  17. fifthmonarchyman: A process can’t be “faster” and “the same” at the same time and in the same respect.

    Your claiming otherwise is pretty much a textbook example of a contradiction.

    peace

    You can run fast. You can run slow. Same process.

  18. Alan Fox: Doesn’t alter the point that selective pressure keeps genomes from deteriorating.

    Agreed, but still we have ultra-conserved elements that don’t appear to be under any selective pressure at all.

    peace

  19. fifthmonarchyman:

    No, you simply add each time you run the model; you don’t need to know where you are in the sequence. You could be at the first trial or the seventy-first. If you predict just one digit you will predict them all in this example.

    Simple addition, all coded at the outset:

    T(n) = T(n-1) + 1

    You can’t do this at the outset because you don’t know the pattern of the differences before you have run the experiment for some time. To make a valid prediction you need to analyse the historical pattern first and then modify the model accordingly to make it predict the future differences. This holds true regardless of what pattern the differences demonstrate.

    Again, why allow some tweaks and not others? You are not explaining this at all.

    Frankly, I think you would be better off to abandon this particular example and try to come up with something more robust.

  20. faded_Glory: To make a valid prediction you need to analyse the historical pattern first and then modify the model accordingly to make it predict the future differences.

    Right, and that is why I said that you could make any tweaks you needed to, except those that input information from outside the local environment of the phenomenon being evaluated.

    faded_Glory: why allow some tweaks and not others? You are not explaining this at all.

    Adding one to every single trial is very different from determining exactly what rank a particular trial is in the sequence you are seeing.

    Again I think a good way to get a handle on this is to understand that the independence of the trials must be maintained.

    faded_Glory: You are not explaining this at all.

    It seems self-evident to me. I’m not sure why it needs explaining. A model can’t smuggle in information that comes from outside the local environment of the phenomenon we are evaluating.

    The container does not know the rank of the trial that is being run or even that a trial is being run and neither should the model.

    faded_Glory: I think you would be better off to abandon this particular example and try to come up with something more robust.

    There is no need to try to come up with anything new.

    Like I said before, I use the method in my own place of employment. I have several real-world examples that I will share as soon as I feel you are getting the concept as it relates to the hypothetical.

    The reason I want us to be on the same page before that is that I don’t want to get bogged down in the inevitable accusations of cherry-picking, or to muddy the waters with multiple different scenarios before we even agree on the utility of this method.

    Introducing known designers (humans) into the equation complicates things.

    I want you to join me in thinking about how you would verify a hunch that a personal choice is involved in an observed process.

    I’ll give it a while before I punt on that one.

    peace

  21. Alan Fox: You can run fast. You can run slow. Same process.

    My running slow and my running fast are not the same, just as artificial and natural selection are not the same.

    The difference in both these cases is personal choice.

    peace

  22. fifthmonarchyman:
    I want you to join me in thinking about how you would verify a hunch that a personal choice is involved in an observed process.

    My hunch is that this is far too broad a question to allow a simple answer that would hold in all situations.

  23. faded_Glory: My hunch is that this is far too broad a question to allow a simple answer that would hold in all situations.

    That is a legitimate and understandable position. What I’m trying to do is take broad outlines of all the different approaches and look for key similarities.

    From a very high level I think it all boils down to looking for predictable behavior that is not modelable.

    That is the core of the method.

    peace

  24. fifthmonarchyman:

    From a very high level I think it all boils down to looking for predictable behavior that is not modelable.

    I surmise that every prediction is the output of a model. Even if you predict, say, someone’s behaviour, it is based on your mental model of who they are and how they tend to behave. The model may not be explicit or mathematical, but it certainly is there.

  25. fifthmonarchyman: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0050234

    peace

    OK, interesting. Except I note it was published in 2007. A more recent (2018) paper refers to the findings and comments:

    In the January 2018 issue of Cell, Dickel et al. [4] have shown, to much anticipation, that the deletion of an UCE leads to a measurable phenotype, despite viability of the enhancer knockout animals (Fig. 1a, b). By deleting UCEs near the essential neuronal transcription factor Arx using the CRISPR-Cas9 technique, the team found that mice carrying single or pairwise deletions in nearly all cases showed neurological or growth abnormalities.

    The paper (PDF) is available here.

  26. fifthmonarchyman: My running slow and my running fast are not the same, just as artificial and natural selection are not the same.

    The process that brings about allele filtering is exactly the same process in artificial and natural selection. The rate at which allele elimination occurs is faster when breeders are part of the niche, working with small populations in tightly controlled conditions.

    The problem you have set yourself is thus an impossible one. But you are welcome to discover this for yourself. It’s a subset of what I think is a fundamental consequence of a universe that is not precisely deterministic. The history of an object is not precisely recorded in its current configuration.

    The difference in both these cases is personal choice.

    You might say so, yes! 🙂

  27. I’d say that both artificial and natural selection are instantiations of how selection pressure on a mutating genome will result in evolutionary changes. The difference is where the selection pressure originates from: in artificial selection it comes from the purposeful actions of the breeder, in natural selection it is the net effect of a multitude of interplaying environmental variables.

  28. The simple fact is that any number of regular processes, including some algorithms, produce outputs that effectively hide their history.

  29. Alan Fox: A more recent (2018) paper refers to the findings and comments

    Also interesting I will check out the paper.

    At first glance it only purports to show that some of these elements are in fact under selective pressure. I have no problem with that result; in fact it’s something I would expect.

    It only makes the earlier results more of a mystery.

    peace

  30. faded_Glory: I surmise that every prediction is the output of a model. Even if you predict, say, someone’s behaviour, it is based on your mental model of who they are and how they tend to behave. The model may not be explicit or mathematical, but it certainly is there.

    Interesting I think this is something we should chew on.

    When I think of how a particular person will behave I most certainly don’t think of it as a model. I think of it as knowing the individual.

    For instance I would not usually be able to tell you exactly how most people would act given a particular set of inputs. It’s more about their internal mental state than anything outside of them.

    I would say we will always act according to our nature but never according to a coded script.

    I would say there is a profound difference between the interpersonal knowledge I have of someone and knowledge based on a predictive algorithm.

    peace.

  31. petrushka: The simple fact is that any number of regular processes, including some algorithms, produce outputs that effectively hide their history.

    I completely agree.

    The method is not about what is hidden but what is revealed.

    peace

  32. fifthmonarchyman,

    To go philosophical a bit (with trepidation, because there are many people here much more qualified than me to do so) I would suggest that everything we call knowledge is actually just models. We take our sensory input and previous experiences to build mental models, representations, of what is happening around us. Sometimes we go more explicit and build mathematical or physical models, but mostly these models are ‘fuzzy’ and internal to our minds. We use feedback to update and correct these models as and when events happen. All this helps us to navigate and make sense of our environment.

    As you say, we can’t look inside someone’s brain (or soul, if you prefer) to predict what they will do in certain circumstances. All we can do is check our mental model of them and use that to come up with a prediction, for better or worse.

  33. faded_Glory: I would suggest that everything we call knowledge is actually just models. We take our sensory input and previous experiences to build mental models, representations, of what is happening around us. Sometimes we go more explicit and build mathematical or physical models, but mostly these models are ‘fuzzy’ and internal to our minds. We use feedback to update and correct these models as and when events happen. All this helps us to navigate and make sense of our environment.

    I’d agree with that.

  34. fifthmonarchyman: At first glance it only purports to show that some of these elements are in fact under selective pressure. I have no problem with that result; in fact it’s something I would expect.

    It only makes the earlier results more of a mystery.

    Well, what happened was more experimentation produced a better model as an explanation.

    The question is whether any sequences considered ultra-conserved elements remain impervious to explanation by evolutionary models.

    ETA clarity

  35. faded_Glory: I would suggest that everything we call knowledge is actually just models.

    I think perhaps we need a more robust definition of model.

    To me a model maps inputs to outputs directly. Y is a function of X, so to speak. A model, if complete enough, will in theory be able to predict exactly what the modeled entity will do given particular inputs.

    I would say that lots of our knowledge of persons is ontologically different than that. We know a person as a person not as a glorified function. A person could encounter exactly the same inputs and produce a different output depending only on her internal state at the time.

    I would submit that when it comes to persons, what you are calling fuzzy models are not models at all but instead are something like mental analogies that we assume at the outset can never be more than rough approximations based on comparisons to our own minds.

    Something like this.

    If I know Alice as well as I think I do, she, like me, would do Y given X.

    however

    The fact is Alice is not me; she is her own person.

    peace

  36. fifthmonarchyman: …something like mental analogies that we assume at the outset can never be more than rough approximations based on comparisons to our own minds.

    That’s a model.

  37. fifthmonarchyman,

    I’ve been fascinated by the ideas of neuroscientist Iain McGilchrist on the divided brain. I keep posting this link to a TED talk but nobody seems interested. 🙁

    Anyway, shouldn’t you be working on your modest proposal? The idea that you can divine “artificial” from “natural” by examining the object concerned, by some standard or universal method?

  38. faded_Glory:
    Sometimes we go more explicit and build mathematical or physical models, but mostly these models are ‘fuzzy’ and internal to our minds. We use feedback to update and correct these models as and when events happen. All this helps us to navigate and make sense of our environment.

    That sounds like the Bayesian Brain approach to explaining perception and basic beliefs.

    For knowledge, philosophers usually add to the Bayesian, causal story a standard for evaluating a belief.

    What makes the statement “there is water” an assertion of knowledge in the case of a lake but not in the case of a mirage?

    In both cases science provides the same causal story: EM radiation causes optical electrochemical processes leading to neural processes leading to muscular action leading to sound waves.

  39. faded_Glory: To go philosophical a bit (with trepidation, because there are many people here much more qualified than me to do so) I would suggest that everything we call knowledge is actually just models.

    What I call “knowledge” is mostly abilities.

    However, I would go with “everything that we call real is mostly models”. In a way, this is comparable to Kant’s view that we have no access to the world in itself. Our access to the world is indirect. So we build models that fit very well, and we describe those as “real”.

    If that’s about what you mean, then it is the right direction.

  40. BruceS: What makes the statement “there is water” an assertion of knowledge in the case of a lake but not in the case of a mirage?

    How well the model works.

  41. Neil Rickert: What I call “knowledge” is mostly abilities.

    However, I would go with “everything that we call real is mostly models”. In a way, this is comparable to Kant’s view that we have no access to the world in itself. Our access to the world is indirect. So we build models that fit very well, and we describe those as “real”.

    If that’s about what you mean, then it is the right direction.

    Yes that is roughly what I was getting at, thanks.

    Now that I have opened the floodgates for the resident philosophers to come in and confuse us all, I better bow out of this conversation 😉

  42. From the OP:

    The longer a recognizable pattern continues without repeating, the more confident we can be in our design inference, but only when we know a particular sequence is irrational can we be certain.

    This seems to be the heart of FMM’s method: if the sampled output from the system of interest exhibits a non-repeating expansion that cannot be predicted by any available model, then an inference of design is warranted.

    Now here is an interesting sequence of true statements:

    1. The decimal expansion of pi is non-repeating (because pi is an irrational number). In other words the digits never settle into a repeating cycle, so you cannot predict them by extrapolating a period; you can only calculate them.
    2. If I choose a non-decimal counting system, but one based on an integer (base-2, base-3, base-6, base-49, whatever), and perform the expansion of pi in that base, the result will be non-repeating. So the irrationality of pi appears to be invariant with respect to the counting system that I choose.
    3. So far, this supports FMM’s contention – apparently, no integer-base counting method will convert a non-repeating expansion to a repeating one, which would defeat FMM’s contention.
    4. However, nothing constrains me to count only using integer bases. What if I use real numbers? How about counting in base-3.1? It turns out that expanding pi in base-3.1 (or any other rational base) still yields a non-repeating sequence. FMM’s contention is still looking good.
    5. But nothing constrains me to use only rational reals as the base for my counting system. What happens if I count in “base-irrational”? Let’s see . . .
    6. Let’s say, for argument’s sake, that the object of interest appears to be generating the expansion of pi. I decide to build a model that counts in base-pi. What is the output string generated in base-pi by the expansion of pi itself?

    Pi(base-pi) = 10

    Which is a terminating expansion, just as a rational number would have. Oops.

    FMM’s design inference is contra-indicated. His inference turns out to depend on the **arbitrarily chosen** counting system used to analyse the object’s output. Back to square one.

    (By the way, counting in irrational-base systems has some bizarre consequences. It turns out that any rational number – for instance, 3 – will have an uncountably large number of simultaneous and different expansions, some rational, some not. Quora has a fascinating discussion by professional mathematicians on the implications)
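
    For anyone who wants to verify the base-pi claim, here is a rough Python sketch of the greedy digit-by-digit expansion in an arbitrary real base. It is illustrative only, and it inherits all the subtleties just mentioned:

        import math

        def expand_in_base(x, b, n_digits=6):
            # Greedy expansion of x > 0 in base b > 1, where b may be
            # irrational. Returns digits d_k from the highest power down,
            # with x ~ sum(d_k * b**k). Illustrative only.
            k = math.floor(math.log(x, b))
            digits = []
            for power in range(k, k - n_digits, -1):
                d = math.floor(x / b ** power)
                digits.append(d)
                x -= d * b ** power
            return digits, k

        # pi written in base-pi: a single 1 in the b**1 place, zeros after: "10"
        print(expand_in_base(math.pi, math.pi))  # ([1, 0, 0, 0, 0, 0], 1)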

  43. timothya: FMM’s design inference is contra-indicated. His inference turns out to depend on the **arbitrarily chosen** counting system used to analyse the object’s output. Back to square one.

    Even after a mathematical proof, I doubt FMM will relent. His presuppositions have been challenged and disproven on many grounds already, but he sticks to them regardless.
