The Demise of Intelligent Design

At last?

Back in 2007, I predicted that the idea of “Intelligent Design” would soon fade into obscurity. I wrote:

My initial assessment of ID in my earliest encounter with an ID proponent* was that ID would be forgotten within five years, and that now looks to me an over-generous estimate.

*August, 2005

I was wrong. Whilst the interest in “Intelligent Design” (ID) as a fruitful line of scientific enquiry has declined from the heady days of 2005 (or perhaps was never really there), there remain diehard enthusiasts who maintain that ID has merit and is simply being held back by the dark forces of scientism. William Dembski, the “high priest” of ID, has largely withdrawn from the fray, but his ideas have been promoted and developed by Robert Marks and Winston Ewert. In 2017 (with Dembski as a co-author) they published Introduction to Evolutionary Informatics, which was heralded as a new development in the ID blogosphere. However, the claim that this represents progress has been met with scepticism.

But the issue of whether ID was ever really scientific has remained the major complaint of those who dismiss it. Even ID proponents have admitted this to be a problem. Paul Nelson, a prominent advocate of ID (prominent among ID proponents, at any rate), famously declared in 2004:

Easily the biggest challenge facing the ID community is to develop a full-fledged theory of biological design. We don’t have such a theory right now, and that’s a problem. Without a theory, it’s very hard to know where to direct your research focus. Right now, we’ve got a bag of powerful intuitions, and a handful of notions such as ‘irreducible complexity’ and ‘specified complexity’ – but, as yet, no general theory of biological design.

Whilst some ID proponents – Ann Gauger and Douglas Axe are perhaps the most prominent among them – have tried to develop ID as science, the general scientific community and the wider world have remained unimpressed.

Then a new and vigorous young player appears on the field. Step forward, Eric Holloway! Dr Holloway has produced a number of articles on artificial and “natural” intelligence, published at Mind Matters – a blog sponsored by the Discovery Institute (the paymasters of ID). He has also been quite active here and elsewhere defending ID, and I have had to admire his persistence in arguing his case, especially as the whole concept is, in my view, indefensible.

But! Do I see cracks appearing? I happened to glance at the blog site formerly run by William Dembski, Uncommon Descent, and noticed an exchange of comments on a thread entitled Once More from the Top on “Mechanism”. The post author is Barry Arrington, current owner of UD and a lawyer by trade, who is usually too busy to produce a thoughtful or incisive piece (and this is no different). However, the comments get interesting when Dr Holloway joins in at comment 48. He writes:

If we can never be sure we account for all chance hypotheses, then how can we be sure we do not err when making the design inference? And even if absolute certainty is not our goal, but only probability, how can we be confident in the probability we derive?

Eric continues with a few more remarks that seem to raise concern among the remaining regulars (“Geeze you are one confused little pup EricMH.” “Has a troll taken over Eric’s account?”) and later comments:

But since then, ID has lost its way and become enamored of creationism vs evolution, apologetics and the culture wars, and lost the actual scientific aspect it originally had. So, ID has failed to follow through, and is riding on the cultural momentum of the original claims without making progress.

Dr Holloway continues to deliver home truths:

I would most like to be wrong, and believe that I am, but the ID movement, with one or two notable exceptions, has not generated much positive science. It seems to have turned into an anti-Darwin and culture war/apologetics movement. If that’s what the Discovery Institute wants to be, that is fine, but they should not promote themselves as providing a new scientific paradigm.

I invite those still following the fortunes of ID to read on, though I recommend scrolling past comments by ET and BA77. Has Dr Holloway had a road-to-Damascus moment? Is the jig finally up for ID? I report – you decide!

ETA link

824 thoughts on “The Demise of Intelligent Design”

  1. Wikipedia via BruceS: “If the degenerate distribution is univariate (involving only a single random variable) it is a deterministic distribution and takes only a single value”

    I think they mean that if it is univariate it takes only a single value at any one time. A one-dimensional variable can certainly follow a stochastic process. You have to be a zero-dimensional variable to not be stochastic.

  2. newton,

    Not probably, it is required. Logically there has to be an undesigned designer, because if ID allows that some set of possible natural, physical conditions in the Universe could result in life, sans designer, rather than an eternal deity, the camel’s nose is underneath the tent.

    This is not an ID requirement. It’s a requirement for basic philosophy. Otherwise you have an infinite causal regress.

    And how does a disembodied human mind know that the information it “creates” is functional?

    Why the straw man?

    One would need to know the initial conditions to determine such jumps; is this through the same fossil record that ID points out as inadequate because it is full of gaps?

    You can determine the jumps from the data. We can see substitutability from animals that have a more recent split from each other. If this data has lots of substitutions compared to the human data we can see information jumps.

    What is the experimental evidence of such a choice beyond analogy?

    The experimental evidence that a mind can create information is constantly being tested, as we are testing it now by our exchange. This is a test of the mechanism, and that’s how science works. Einstein’s test for gravity initially used the mass of the Sun. This was indeed a narrow test but the prediction stood the test of time.

  3. Joe Felsenstein: I think they mean that if it is univariate it takes only a single value at any one time. A one-dimensional variable can certainly follow a stochastic process. You have to be a zero-dimensional variable to not be stochastic.

    I’m happy to use stochastic/deterministic terminology if that is clearer. It’s just English semantics; it does not affect the math Eric uses, which applies to “both” cases.

    On the Wiki definition: If the time series is a sequence of (ETA) iid univariate rvs with (the same) degenerate distribution, I think that would be empirically indistinguishable from deterministic. I don’t think it would matter if the time index was continuous or not.

    ETA: Thinking about this a bit more, I guess iid is not right. There needs to be a deterministic function of time and possibly other variables that determines the single supported value of the distribution for a time series. So not identical or independent but still degenerate.

    I’m not sure what philosophers would say about the ontology of stochastic with degenerate distribution versus deterministic. They might want to add an “in all metaphysically possible worlds” in there somewhere. But maybe I read too much philosophy.

  4. Joe Felsenstein,

    I think they mean that if it is univariate it takes only a single value at any one time. A one-dimensional variable can certainly follow a stochastic process. You have to be a zero-dimensional variable to not be stochastic.

    Or the process steps are subject to change. Minds can alter algorithms.

  5. colewd:
    Joe Felsenstein,

    Minds can alter algorithms.

    You need to allow for degenerate ones, technically speaking.

    Might get a bit repetitive to interact with them.

    Come to think of it …

  6. colewd:
    Or the process steps are subject to change. Minds can alter algorithms.

    Nope. Humans can alter algorithms, and humans are organisms with minds. Other organisms also have minds, and they cannot alter algorithms. So, it depends on the way those minds work, and whether the organism has characteristics allowing it to write and alter algorithms, like eyes, hands, fingers, etc., besides a culture where organisms exchange/share knowledge.

  7. BruceS: There needs to be a deterministic function of time and possibly other variables that determines the single supported value of the distribution for a time series. So not identical or independent but still degenerate.

    In the quoted case, a one-dimensional “degenerate” distribution meant the variable varied in only one dimension. I’ve spent a lot of time fretting about stochastic processes in population genetics that had one dimension, gene frequency, on which things changed. But that in itself does not make the process deterministic.

  8. colewd: You can determine the jumps from the data. We can see substitutability from animals that have a more recent split from each other. If this data has lots of substitutions compared to the human data we can see information jumps.

    Here’s how the technique works:
    1. Pick a protein, preferably a long one.
    2. The human amino acid sequence (from a Caucasian male, ideally) is the acme; it has maximal information, by definition.
    3. Select examples of the same* protein from a range of organisms along the scala naturae.
    4. Measure, for each species, the number of amino acids that are identical to the human protein.
    5. Each identical-to-human amino acid is worth log₂(20) ≈ 4.32 bits of information.
    6. Plot this information content against species.
    7. Marvel at the sudden loss of information when the amino acid identity drops off an apparent cliff. If you were crafty enough to pick a protein over 300 aa then you have demonstrated intelligent design, due to the “injection” of impossibly large amounts of information (>500 bits) at this particular “step”.

    *proteins with the same name are assumed to be homologs

    Now, if anyone (Hi Bill!) wants to claim that this is a strawman, they will have to get VERY specific as to how this technique differs from gpuccio’s method, including worked examples where the results differ.
    If anyone wants to fisk this approach, I encourage them to find a reasonably intelligent teenager to do the honors.
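
    For concreteness, here is a minimal sketch of the bookkeeping being parodied in steps 1–7 above. It is my own illustration, not gpuccio’s or anyone else’s code; the toy sequences are invented, and (per Bill’s comment further down) the real method reportedly uses BLAST bit-scores rather than a raw identity count.

    ```python
    from math import log2

    # Per-residue "information" value used in the parody: log2(20) ≈ 4.32 bits,
    # i.e. the bits needed to specify one of the 20 standard amino acids.
    BITS_PER_IDENTICAL_RESIDUE = log2(20)

    def parody_information(human_seq: str, other_seq: str) -> float:
        """Count positions identical to the human sequence and convert to 'bits'.

        Assumes the two sequences are already aligned and of equal length;
        a real analysis would need an alignment step to justify that.
        """
        if len(human_seq) != len(other_seq):
            raise ValueError("sequences must be pre-aligned to the same length")
        identical = sum(h == o for h, o in zip(human_seq, other_seq))
        return identical * BITS_PER_IDENTICAL_RESIDUE

    if __name__ == "__main__":
        # Made-up 12-residue fragments, purely for illustration.
        human = "MKTAYIAKQRQI"
        fish = "MKTAYIGKQRCI"  # 10 of 12 positions identical -> ~43 "bits"
        print(f"{parody_information(human, fish):.1f} bits shared with human")
    ```

    Plot that number across a set of species and the “information jump” of step 7 is just the point where sequence identity happens to fall off.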

  9. BruceS: That’s not just Eric; here is Wiki on degenerate distribution:
    “In mathematics, a degenerate distribution is a probability distribution in a space (discrete or continuous) with support only on a space of lower dimension. If the degenerate distribution is univariate (involving only a single random variable) it is a deterministic distribution and takes only a single value”

    I don’t have a problem with that. It is normal, within mathematics, to consider boundary cases.

    The problem for Eric is that he attempts to draw conclusions that don’t actually fit these boundary cases.

  10. colewd: The experimental evidence that a mind can create information is constantly being tested, as we are testing it now by our exchange.

    Minds may create information but minds alone are not capable of putting that information into a physical instantiation. Bill is still pushing for his Designer to use the MAGIC POOF! to physically construct desired genomes.

  11. Neil Rickert: He suggests a halting oracle. But he has no evidence that such a thing can actually exist.

    OK, that is what I had in mind via the traveling salesman example upthread.

    Eric’s evidence is his math: he claims his KMI argument shows evolution requires a non-stochastic process. That has to be a designing intelligence, given observed fitness. He has separately argued that intelligence can solve the halting problem, which is how it escapes his math no-go result for stochastic/deterministic processes delivering the observed population genetics change.

    So you are a halting oracle! At least according to Eric, as I understand him. Although I suppose he could argue that people who fail to agree with his argument are not intelligent. [insert irony emoji here]

  12. Joe Felsenstein: But that in itself does not make the process deterministic.

    Joe, you say you don’t like philosophy, but here you are, engaging in semantic arguments about meanings of words. Pretty close to how some philosophers spend their time, although they may call it “conceptual analysis”.

    Anyway, I will try to remember to stick with stochastic/deterministic.

  13. BruceS: So you are a halting oracle! At least according to Eric, as I understand him.

    Yes, that seems to be Eric’s viewpoint. To me, it is absurd.

    Yes, humans can do some clever things. But they fall far short of being a halting oracle.

  14. DNA_Jock,

    2. The human amino acid sequence (from a Caucasian male, ideally) is the acme; it has maximal information, by definition.

    Great game finding the straw 🙂

  15. BruceS: Anyway, I will try to remember to stick with stochastic/deterministic.

    Eric, supposedly, takes stochastic to include random, deterministic, and combinations thereof, then starts by comparing things to probability distributions to find “non-stochastic.” So, it means everything, except when it means random; then non-random means god-did-it.

    Eric is a classic apologist trying to pass for a mathematician. A mathematician who misuses concepts, engages in equivocation, mistakes concepts for their referents, and does not want to acknowledge and fix his conceptual problems.

  16. Well spotted, Bill: despite all the ego-driven sequencing of Jim and Craig’s genomes, 70% of the HGP reference genome comes from a man who is of half-European, half-African descent, who lived in Buffalo NY in 1997.
    So, safe to say that you are happy with the rest of the assessment, then?
    😉

  17. Entropy: Eric, supposedly, takes stochastic to include random, deterministic,

    Yes, in this thread at least, assuming we take random and stochastic according to standard math definitions of the terms (i.e. involving random variables and probability distributions).

    Those semantics can be justified from a mathematical statistics viewpoint, but I understand how most would find treating deterministic as a special case of stochastic counter-intuitive, to say the least.

    If we take Eric’s math on its own, ignoring his biological claims, Tom E is probably best placed to point out any errors. To the best I understand it, the math is fine. It’s Eric’s attempts to use it to draw conclusions regarding the mechanisms of evolution that I think are misguided.

  18. BruceS,

    I strongly suspect that EricMH’s mathematics rely on unspoken underlying assumptions that are invalid in the real world.

    I am still bothered by his claim that all real world processes will eventually converge. For many processes this is in my view simply not possible, mainly because the real world is not a stationary system but one that constantly and irrevocably changes (in no small part because of the 2nd Law), and many processes even influence themselves (for instance plate tectonics, a system that reconfigures itself continuously in ways that depend on a great many contingencies – how are you ever going to derive probabilities for the outcomes of such a system, and why would it ever converge?).

  19. faded_Glory: how are you ever going to derive probabilities for the outcomes of such a system, and why would it ever converge?

    It’s even more interesting than that. Eric has renounced methodological naturalism, so when he claims:

    The reason is probably too simple, but in plate tectonics and rainfall each subsequent state is entirely determined by a probability distribution conditioned on the prior state.

    The question arises: How does he know? He won’t just assume that, right?
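
    For readers unfamiliar with the jargon, the property Eric’s quoted claim invokes is just the Markov property, stated here for reference (my own gloss, not Eric’s wording): the conditional distribution of the next state depends only on the current state,

    \[
    P(X_{t+1} \mid X_t, X_{t-1}, \ldots, X_0) \;=\; P(X_{t+1} \mid X_t).
    \]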

  20. DNA_Jock,

    So, safe to say that you are happy with the rest of the assessment, then?

    It’s reasonably close, except I think his measurement is based on the BLAST formula, which is a little more complicated. I do think the transitional add of human-preserved bits 400 to 500 million years ago is very interesting data. It gives a new evolutionary perspective based on sequencing technology. We can agree to disagree about exactly what it does and doesn’t tell us.

    I am discussing with a statistician at PS how the data can be used.

  21. Entropy,

    Eric is a classic apologist trying to pass for a mathematician. A mathematician who misuses concepts, engages in equivocation, mistakes concepts for their referents, and does not want to acknowledge and fix his conceptual problems.

    Given you feel the need to attack him this way, maybe I should take a closer look at what he is arguing. 🙂

  22. Obviously I’m missing something here, but if a process is confined to a one-dimensional subspace (of some bigger space) then it could move deterministically in that one-dimensional space, or it could have a stochastic component to its motion. I can’t see why anyone thinks that therefore the motion must be deterministic, just because it is in one dimension.

    A second note: we are accustomed to modeling some processes as approaching an equilibrium distribution. For example, particles undergoing Brownian motion in a gravitational field settle down into a distribution of how far they are above the substrate. Of course this is not sustainable in the very long term, for example if the necessary heat is radiating away into space and is ultimately not replenished. But we ignore that in the medium-to-long term as it is only relevant in the very long term.
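
    For the concrete case Joe mentions, the medium-term equilibrium he describes is the standard sedimentation (barometric) profile; I add it here only as an illustration of an equilibrium distribution:

    \[
    p(h) \;\propto\; \exp\!\left(-\frac{m_{\mathrm{eff}}\, g\, h}{k_B T}\right), \qquad h \ge 0,
    \]

    where \(m_{\mathrm{eff}}\) is the particle’s buoyancy-corrected mass, \(g\) the gravitational acceleration, \(k_B\) Boltzmann’s constant and \(T\) the temperature.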

  23. faded_Glory: I am still bothered by his claim that all real world processes will eventually converge.

    As best I understand it, the sun will eventually expand to a red giant. And, when that happens, real world processes will converge to debris.

    However, I don’t think that’s what EricMH had in mind.

  24. colewd:
    Given you feel the need to attack him this way maybe I should take a closer look at what he is arguing.

    The question is whether you can take a critical look. For example, do you notice the conceptual problems? The mistaking concepts for their referents?

    ETA: I’m not attacking him. I’m describing my conclusion about him built from what he’s shown himself to be.

  25. faded_Glory: I am still bothered by his claim that all real world processes will eventually converge. For many processes this is in my view simply not possible, mainly because the real world is not a stationary system but one that constantly and irrevocably changes (in no small part because of the 2nd Law), and many processes even influence themselves (for instance plate tectonics, a system that reconfigures itself continuously in ways that depend on a great many contingencies – how are you ever going to derive probabilities for the outcomes of such a system, and why would it ever converge?).

    I am talking about examining repeated processes in isolation. You have a process, collect data, and repeat from scratch over and over again to generate the probability distribution. In the wild, they all have fixed probabilities guiding their behavior, but you are correct, we most often will not be able to infer the probability due to the difficulty of isolating the processes and repeating them.

    My claim is that for every in the wild natural process, if you can isolate and repeat it, you will see its output always converge to a probability distribution.

    This claim is derived from these premises:
    1. every natural process operates according to the laws of physics
    2. the laws of physics are all stochastic (i.e. everything has a fixed probability distribution)
    3. the output of every stochastic process will converge to this fixed probability distribution when you repeat and observe it enough times
    4. therefore, every natural process, when isolated and observed repeatedly, will converge to a specific probability distribution

    For a rainfall example, consider a rainfall simulator based on a pseudorandom number generator running on a computer with finite memory. For simplicity, assume the rain is dots appearing and disappearing on a line, signifying when the rain hits the ground. Seems like a fairly simple thing to program, right?

    At some point, the pseudorandom sequence will repeat, and thus so will the rainfall simulation. Thus, if we run the simulation long enough, or do short runs from enough randomly selected start points in the pseudorandom sequence, the data we collect will converge to a specific distribution of events.

    As far as we know, every natural process can be modeled computationally to an arbitrarily precise level of accuracy. Thus, pick any natural process you are interested in, create the simulation to desired degree of accuracy, and run the above process on a computer with finite memory, and after enough repetitions the data from the simulated process will converge to a fixed distribution.

    Perhaps the above is all religious apologetics, but I’m fairly certain it is also uncontroversial in any science and engineering department. Just ask the people modeling climate change 🙂
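
    To make Eric’s toy rainfall example concrete, here is a minimal sketch (my own construction, not Eric’s code; the uniform “rainfall” model and the bin count are assumptions of the sketch): drops land on a unit line, their positions are binned, and the binned empirical frequencies settle toward the fixed distribution built into the generator as the number of repetitions grows.

    ```python
    import random
    from collections import Counter

    def simulate_rain(n_drops, n_bins=10, seed=None):
        """Drop n_drops 'raindrops' uniformly on [0, 1) and bin where they land."""
        rng = random.Random(seed)
        counts = Counter(int(rng.random() * n_bins) for _ in range(n_drops))
        # Normalised empirical frequency for each bin.
        return [counts[b] / n_drops for b in range(n_bins)]

    if __name__ == "__main__":
        target = 1 / 10  # the fixed distribution the frequencies should approach
        for n in (100, 10_000, 1_000_000):
            freqs = simulate_rain(n, seed=42)
            max_dev = max(abs(f - target) for f in freqs)
            print(f"n={n:>9}: max deviation from uniform = {max_dev:.4f}")
    ```

    The convergence here is just the law of large numbers acting on the distribution baked into the simulator; as keiths and Tom English note below, it says nothing by itself about whether real rainfall is governed by a fixed distribution.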

  26. Entropy: ETA: I’m not attacking him. I’m describing my conclusion about him built from what he’s shown himself to be.

    You should join Gregory in writing an OP on Bulverism.

  27. Eric,

    It’s clear that you haven’t thought this through properly.

    My claim is that for every in the wild natural process, if you can isolate and repeat it, you will see its output always converge to a probability distribution.

    You can’t do that for any of the processes we’ve talked about. Not for weather systems, nor for plate tectonics, nor for humans. You can’t isolate them and you can’t “repeat” them.

    In fact, you can’t establish that non-stochastic, non-deterministic processes even exist, much less identify particular ones. You have neither an existence proof nor a feasible test.

    As far as we know, every natural process can be modeled computationally to an arbitrarily precise level of accuracy. Thus, pick any natural process you are interested in, create the simulation to desired degree of accuracy, and run the above process on a computer with finite memory, and after enough repetitions the data from the simulated process will converge to a fixed distribution.

    What’s the point? We already know that any finite-state system will eventually repeat at least one of its states. It’s mathematically certain, and so it’s a waste of time to bother with the simulation.

  28. EricMH,

    As far as we know, every natural process can be modeled computationally to an arbitrarily precise level of accuracy. Thus, pick any natural process you are interested in, create the simulation to desired degree of accuracy, and run the above process on a computer with finite memory, and after enough repetitions the data from the simulated process will converge to a fixed distribution.

    So once again you’re using math to model something with no connection to physical reality. Are you really surprised your frankly ridiculous claims about evolution are met with such skepticism?

  29. Entropy: Eric is a classic apologist trying to pass for a mathematician. A mathematician who misuses concepts, engages in equivocation, mistakes concepts for their referents, and does not want to acknowledge and fix his conceptual problems.

    Agreed. I have no problem at all acknowledging that George Montañez is a very bright man, and is usually (not always) meticulous in his math. So it should not be taken as anti-ID bias when I observe that much of EricMH’s “math” is crankery.

    BruceS: If we take Eric’s math on its own, ignoring his biological claims, Tom E is probably best to point out any errors. To the best I understand it, the math is fine.

    The mathematicalistic yimmer-yammer is not fine. Eric is oblivious to the distinction of stationary and nonstationary random processes. It’s not necessary for a nonstationary random process to converge in any nontrivial sense. But that’s relatively unimportant. What Eric is saying is hardly different, in essence, from what creationists have long been saying. He’s merely dressing creationism in yet another cheap tuxedo.

    You are entirely capable of identifying what’s wrong with Eric’s argument:

    EricMH: [T]he reason why I define MN as stochastic processes is due to the following logic:
    1. MN claims everything reduces to matter.
    2. Matter operates entirely by the laws of physics.
    3. The laws of physics are entirely stochastic processes.
    4. Therefore, MN claims everything reduces to stochastic processes.

    You (Bruce) have been missing obvious errors here, and I suppose it must be that you’ve been distracted by “stochastic.” The thing to keep in mind is that Eric, like most creationists, believes that naturalism can be refuted by empirical observation of the world.

    1. MN claims everything reduces to matter. Not even naturalism “claims everything reduces to matter.” And even if it did, Eric would be wrong in ignoring the “methodological” qualifier. Arguments for the utility of methodological naturalism in scientific investigation of nature allow that there may be more to reality than what we regard as nature.

    2. Matter operates entirely by the laws of physics. Creationists take the term “laws of physics” literally, and furthermore assume that we know the laws. Indeed, they often indicate that physicists have mathematically proven the laws of physics. Then they claim that observed violations of the laws of physics implicate the supernatural (but nowadays avoid saying “supernatural” straight out). Of course, the “laws” are actually defeasible models and theories. (The roles of mathematical proof in physics are to establish consistency of models, and to deduce consequences of models that might be subjected to empirical test.) Creationists are incensed by revisions to evolutionary theory, but physical theory is also under revision.

    3. The laws of physics are entirely stochastic processes. There actually is no randomness in fundamental physical theory itself. Whether or not certain quantities in quantum mechanics are probabilities is a matter of interpretation.

    4. Therefore, MN claims everything reduces to stochastic processes [and Eric can refute MN by demonstrating that there are non-stochastic processes]. Methodological naturalism claims neither that models are restricted to a particular class, nor that models are entirely correct. “All models are wrong but some are useful.” I do not think, however, that this response gets at what is actually going on with Eric. It appears to me that he is reifying the model. That is, if fundamental physical models were all stochastic processes, then physicists would be claiming that all of nature’s goings-on are stochastic processes. However, whatever form a physical model might take, the form of the model is not the form of the modeled entity. Many macroscopic processes that we casually regard as random are, according to physicists, governed by classical (deterministic) mechanics for all practical purposes. For instance, a coin toss is not random, but instead deterministic and unpredictable. When we model a coin toss as random, we indicate our uncertainty about the outcome. Scientists commonly use stochastic models when uncontrolled factors contribute to observations. Their use of the models does not imply that the modeled processes are themselves random.

    What I’ve just written is not all that different from stuff you’ve posted. I’d advise you to guard against further “mathematical” distractions.

  30. faded_Glory: I strongly suspect that EricMH’s mathematics rely on unspoken underlying assumptions that are invalid in the real world.

    I agree with all of your points.

  31. Tom English: I’d advise you to guard against further “mathematical” distractions.

    Thanks for taking the time to write this detailed reply. I agree with your points and tried to express some of them in my own posts.

    It’s the math I find most interesting in Eric’s posts; I don’t accept arguments made to apply it to biology, but I find the math itself interesting on its own.

    I’m thinking of concepts Eric lists without much explanation, e.g. as in his OP:

    Correspondences between ID theory and mainstream theories


    Here, among others, he lists randomness deficiency, the Martin-Löf test for randomness, the data processing inequality (chance), Chaitin’s incompleteness theorem (necessity), and Levin’s law of independence conservation (both chance and necessity addressed).

    When I referred to checking Eric’s math, I meant specifically how he applies these concepts, to the extent he is explicit about that. So I’ve appreciated the OPs you’ve done on these topics.

  32. EricMH: Perhaps the above is all religious apologetics, but I’m fairly certain it is also uncontroversial in any science and engineering department.

    What’s surprised me most about Christian apologists, in my many years of observing them, is their fabulous dishonesty.

    You got caught making stupid claims. Now you revise your claims, dropping entirely the connection to methodological naturalism, but pretend to be clarifying what you said previously. And for good measure, you close with sass.

    EricMH: My claim is that for every in the wild natural process, if you can isolate and repeat it, you will see its output always converge to a probability distribution.

    Its output? The notion that natural processes are programs running on a computer is compatible with naturalism, but is not entailed by naturalism.

    EricMH: This claim is derived from these premises:
    1. every natural process operates according to the laws of physics

    False. The “laws” are models contrived by humans to account for observations. We know, by accurate measurement, that nature does not abide precisely by our “laws.” Furthermore, we presently infer that there’s a great deal of physical reality for which we have no account. Fundamental physics is actually in a big mess. It may well be that physicists will have to make big changes to their models.

    2. the laws of physics are all stochastic (i.e. everything has a fixed probability distribution)

    It’s wrong to write “i.e.” here. Saying that the laws of physics are “stochastic” does not imply that they are unvarying over space and time.

    3. the output of every stochastic process will converge to this fixed probability distribution when you repeat and observe it enough times

    There’s that “output” again. The word you need to use is outcome. However, it’s nonsense to say that an outcome converges to a probability distribution. The sequence of (normalized) frequency distributions of outcomes perhaps converges to some distribution.

    4. therefore, every natural process, when isolated and observed repeatedly, will converge to a specific probability distribution

    So what!? Previously, your main claim was that you would observe nonconvergence for some systems, and thus would refute methodological naturalism. Are you now abandoning that claim? You ought to. If, for example, there were a steady trend in the data over time, then physicists might take it as evidence that the “laws of physics” had been changing. There is nothing in the stance of methodological naturalism that prohibits fundamental physical relations from varying over space and time.

    EricMH: As far as we know, every natural process can be modeled computationally to an arbitrarily precise level of accuracy.

    You’re trying to pull a fast one with the physical Church-Turing thesis. What we know is that physical theory is computable. If we were to establish that some process, e.g., a human cognitive process, is Turing-incomputable, then that would refute the physical Church-Turing thesis, not naturalism.

    Thus, pick any natural process you are interested in, create the simulation to desired degree of accuracy, and run the above process on a computer with finite memory, and after enough repetitions the data from the simulated process will converge to a fixed distribution.

    As keiths noted, the limit on memory makes this a trivial observation. Earlier in the thread, you indicated that it was the physical process itself, not the simulator of the process, that had limited memory. It still seems you’re saying that analysis of the simulator tells us something about physical reality. However, the simulator is not the simuland.

  33. EricMH,

    If you put your stochastic approach like that, I have no problem with it in principle; a lot of scientific experiments and simulations are done in just that way, of course.

    However, this is a theoretical/experimental approach that chops up the real world into tiny little discrete events and ignores the overall picture of how all these small events add up to the totality of what we see. As I mentioned above, for convergence to occur you have to ignore the non-stationarity of nature. Nature doesn’t repeat itself in the way your experiment, or simulation, does.

    Real rainfall over time in spot X is determined by factors like time of day, seasonality, climate, topography and geographic position (which changes over time because of plate tectonics). The averages you arrive at depend critically on the time period over which you measure. There will be neither repetition nor convergence.

    I’m no biologist but I’d wager that evolution is subject to similar non-stationary (non-repetitive) behaviour.

    There is a difference between the predictable law-like behaviour of matter (stochastic or deterministic s.s.) at small scales, in controlled experiments and in simulations, and the emergent behaviour of complex systems at larger scales. The sum is greater than the parts.

  34. EricMH: I am talking about examining repeated processes in isolation

    Eric: Keith, Tom, faded_Glory, and others have already posted concerns with your arguments; here is my take on the issues:

    First, your ideas on testing for non-stochastic/deterministic processes in the real world will not work. I think you bring out some of these issues yourself in this post. In particular, tests of models on computers have no bearing that I can see on the open-ended, exploratory testing in the world that would be needed. In fact, it seems it is not even possible to specify such testing in a way that would apply to the world. Doing mathematics or computer simulations is irrelevant to the core issues.

    Second, how can one build scientific models incorporating any of the following suggestions you have made for non-stochastic/deterministic processes: undefined, unconstrained intelligence; libertarian free will; halting-problem oracles; intrinsic teleology?

    These ideas have been rejected as part of science since Bacon because they add no value to building models and explanations which meet the goals of science.

    It’s called methodological naturalism for this reason. It’s not a claim about how the world is. It’s a claim about what successful scientific practices are. Those practices are discovered by studying what types of explanations and evaluation processes are acceptable to successful scientific communities.

    Those practices are not fixed by something outside of science, but rather are created and changed by successful scientific communities themselves to incorporate what is found to work. As examples, consider the changes in practice due to QM, cognitive psychology, big data in the social sciences.

  35. Tom English: the physical Church-Turing thesis

    You may be interested in Scott Aaronson’s Bernays lectures; he claims that the modern way to think about the CTT is that it is the physical CTT. He also examines how QM and quantum computing relate to this claim.

    The math and computing complexity theory are presented in a form for those unfamiliar with the ideas, and so will be nothing new to you, I am sure. But you may enjoy his take on them.

    The links to the presentation are in this blog post
    https://www.scottaaronson.com/blog/?p=4301

  36. BruceS: You may be interested in Scott Aaronson’s Bernays lectures; he claims that the modern way to think about the CTT is that it is the physical CTT.

    Yeah, I saw your link, up-thread — thank you — and watched the videos several days ago. I laughed aloud at his “modern way” comment, because it was so transparently self-serving. I’m in the camp saying that there’s no computation without representation.

    BruceS: The math and computing complexity theory are presented in a form for those unfamiliar with the ideas, and so will be nothing new to you, I am sure. But you may enjoy his take on them.

    Scott did a fantastic job of laying out the essentials succinctly, accurately, and clearly. It was a pleasure to watch.

    P.S.–I’ve also got the SEP article on “Computation in Physical Systems” open in a tab, but I haven’t read it yet.

  37. Joe Felsenstein: I can’t see why anyone thinks that therefore the motion must be deterministic, just because it is in one dimension.

    I don’t think that. I think it is deterministic if it is confined to zero dimensions, i.e. a point. That will be for each outcome if we are talking about a time series, although not necessarily the same point.

    Does “zero dimensions” used that way have any meaning?

    You could think of it this way: what happens mathematically to the Normal distribution as the variance goes to zero? The answer, I believe, is that it goes to a Dirac delta function. See the second answer here for the math equation:
    https://stats.stackexchange.com/questions/233834/what-is-the-normal-distribution-when-standard-deviation-is-zero

    Is Dirac delta an allowable though degenerate probability distribution? If so, is it another way of representing a deterministic process with a single outcome? Formally and mathematically, I think yes, it is. In terms of scientific or everyday usage of the terms “stochastic” and “deterministic”, it seems not.

    My only “real” experience with actual statistical modeling is assignments in my statistical inference work at school. There we built General Linear Models (using 1970s versions of SAS!). So the way I think of the above ideas on Normal distribution is what is left if the error term in such models only takes on the value zero in theory in all cases. And the answer is a deterministic linear model, at least in terms of the math equation itself.
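
    Spelling out the limit BruceS describes (a standard result, stated here only for illustration): as the variance of a Normal distribution shrinks to zero, its density converges, in the distributional sense, to a Dirac delta at the mean,

    \[
    \lim_{\sigma \to 0} \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \;=\; \delta(x-\mu),
    \]

    and the corresponding CDF converges, at every continuity point, to the unit step at \(\mu\), i.e. to the degenerate distribution that puts all of its mass on the single value \(\mu\).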

  38. Tom English: I’m in the camp saying that there’s no computation without representation.

    The SEP article includes concerns with that definition of computation. Piccinini, the author of the article, prefers his mechanism approach.

    I like Scott because he shows how work in computational complexity and QM can be applied to philosophy:

    Here he relates that work to “the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction, Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest.”

    Why Philosophers Should Care About Computational Complexity

    Here he relates that work to his version of weak compatibilism in the philosophical debates on free will:

    The Ghost in the Quantum Turing Machine

  39. BruceS: Is Dirac delta an allowable though degenerate probability distribution?

    Of course it is. Probability is, mathematically, a measure. Any measure whose total mass is 1 is a probability distribution. If the (cumulative) probability distribution along a one-dimensional axis is simply a step function of height 1, rising at point x, then that is a degenerate probability distribution. Note I said probability distribution (and so did you), not probability density function. The Dirac delta “is” in some sense the density function, sitting at point x. See Wikipedia “Atom (measure theory)”.

  40. What really bugs me is that (1) various people here have accepted the idea that a one-dimensional subspace has only one point, and (2) Eric’s idea that it says something relevant about nature how often your (pseudo)random number generator repeats its cycle. The latter would seem to just depend on the precision of the calculations you use to make the pseudorandom number generator.
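
    To illustrate Joe’s point about precision (my own toy example, with textbook constants, not anything Eric posted): the cycle length of a linear congruential generator is set entirely by how much state it carries, so how soon a simulation repeats tells you about the generator, not about the natural process being simulated.

    ```python
    def lcg(seed, a=1103515245, c=12345, m=2**16):
        """A toy linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
        x = seed % m
        while True:
            x = (a * x + c) % m
            yield x

    def cycle_length(seed, **params):
        """Count how many steps pass before the generator revisits a state."""
        seen = {}
        for step, x in enumerate(lcg(seed, **params)):
            if x in seen:
                return step - seen[x]
            seen[x] = step

    if __name__ == "__main__":
        # Same recurrence, different amounts of state: the period grows with the
        # modulus (i.e. with the precision), regardless of what is being simulated.
        for m in (2**8, 2**12, 2**16):
            print(f"modulus 2^{m.bit_length() - 1}: period = {cycle_length(1, m=m)}")
    ```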

  41. Tom English: What’s surprised me most about Christian apologists, in my many years of observing them, is their fabulous dishonesty.

    I have long since stopped being surprised at that. Dishonesty is the very core of apologetics. But they don’t recognize it in themselves, because the confirmation bias is so strong.

  42. Neil Rickert,

    I have long since stopped being surprised at that. Dishonesty is the very core of apologetics. But they don’t recognize it in themselves, because the confirmation bias is so strong.

    I see honesty and dishonesty on both sides. Each side has the full range of human ethics. To say creationists hold a monopoly on deception seems naive.
