This is a follow-up to the Earth is the Center of the Universe? OP, with a link to the full documentary entitled The Principle. It is well worth watching in its entirety, just to get a sense of how cosmologists like Lawrence Krauss, and many other scientists, deliberately resist the data (verified by three different probes) that the Axis of Evil points to a special location of the Earth in the universe…
While researching the evidence for Cosmic Consciousness, the implications of the collapse of the wave function, QM, and so on, I came across some interesting evidence pointing to the fact that the Earth not only resides in a special place in the universe, it is the center of the universe… The evidence comes from the so-called “Axis of Evil”, an alignment with the earth’s ecliptic and equinoxes that represents a very unusual and unexpected special direction in space and a direct challenge to the Copernican Principle, as it “appears to give the plane of the Solar System and hence the location of Earth a greater significance than might be expected by chance.” – Wikipedia
As a relatively recent arrival here at TSZ, I am somewhat intrigued to still see the Fine-Tuning Argument in regular rotation. It appears often in comments, but the two most recent OPs that I have come across dedicated to the topic are Mung’s ‘The Wonder of Water’ and RobC’s ‘The Big Numbers Game’.
That I find the Fine-Tuning Argument completely unconvincing will not come as a surprise to anyone who has read any of my comments on TSZ. But I think it is worth taking a moment to explain why, as my reasoning differs slightly from that of others whose comments I have seen. In a comment on the ‘Wonder of Water’ thread, Joe Felsenstein comes closest while referring to the ability of the Schrödinger Wave Equation to model all of the properties that we see expressed in chemistry:
“If Michael Denton’s Intelligent Designer wants to fine-tune properties of water she has to do it by tinkering with the SWE. Which would mess up a lot else.”
In the properties of water the cosmos is revealed to be a transcendent unity, with life on Earth and beings of our biological design as its central aim and focus.
– Michael Denton
Marks, Dembski, and Ewert open Chapter 3 by stating the central fallacy of evolutionary informatics: “Evolution is often modeled by as [sic] a search process.” The long and the short of it is that they do not understand the models, and consequently mistake what a modeler does for what an engineer might do when searching for a solution to a given problem. What I hope to convey in this post, primarily by means of graphics, is that fine-tuning a model of evolution, and thereby obtaining an evolutionary process in which a maximally fit individual emerges rapidly, is nothing like informing evolution to search for the best solution to a problem. We consider, specifically, a simulation model presented by Christian apologist David Glass in a paper challenging evolutionary gradualism à la Dawkins. The behavior on exhibit below is qualitatively similar to that of various biological models of evolution.
Animation 1. Parental populations in the first 2000 generations of a run of the Glass model, with parameters (mutation rate .005, population size 500) tuned to speed the first occurrence of maximum fitness (1857 generations, on average), are shown in orange. Offspring are generated in pairs by recombination and mutation of heritable traits of randomly mated parents. The fitness of an individual in the parental population is, loosely, the number of pairs of offspring it is expected to leave. In each generation, the parental population is replaced by surviving offspring. Which of the offspring die is arbitrary. When the model is modified to begin with a maximally fit population, the long-term regime of the resulting process (blue) is the same as for the original process. Rather than seek out maximum fitness, the two evolutionary processes settle into statistical equilibrium.
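For readers who want to experiment, here is a minimal Python sketch in the spirit of the model described in the caption. Only the mutation rate and population size follow the caption; the binary genomes, the ones-counting fitness function, and uniform recombination are my own illustrative assumptions, not Glass’s actual model, and the run is far shorter than the animation’s 2000 generations.

```python
import random
from itertools import accumulate

GENOME_LEN = 50   # illustrative assumption, not from the Glass paper
POP_SIZE = 500    # population size from the caption
MUT_RATE = 0.005  # mutation rate from the caption

def fitness(genome):
    return sum(genome)  # illustrative stand-in for the model's fitness

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def recombine(a, b):
    # Uniform recombination: each heritable trait from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def generation(pop):
    # Fitness-weighted mating (+1 so an all-zero population cannot stall).
    cum = list(accumulate(fitness(g) + 1 for g in pop))
    offspring = []
    for _ in range(POP_SIZE):
        a, b = random.choices(pop, cum_weights=cum, k=2)
        # Each randomly mated pair leaves a pair of offspring.
        offspring.append(mutate(recombine(a, b)))
        offspring.append(mutate(recombine(a, b)))
    # Which of the offspring die is arbitrary: cull uniformly back to POP_SIZE.
    return random.sample(offspring, POP_SIZE)

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
mean_before = sum(map(fitness, pop)) / POP_SIZE
for _ in range(100):
    pop = generation(pop)
mean_after = sum(map(fitness, pop)) / POP_SIZE
```

Run long enough, the mean fitness stops climbing and fluctuates around a mutation-selection balance, which is the statistical-equilibrium behavior the caption describes, not a search that terminates at maximum fitness.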
It is a little-known fact that scientists who argue that the paleontological record of life is hundreds of millions of years old must, when confronted with astrophysical facts, eventually rely heavily on the hypothesis of finely tuned, large-scale global warming. The problem is known as the Faint Young Sun Paradox. A few claim they have solved the paradox, but many remain skeptical of the proposed solutions. One fact remains, however: it is an acknowledged scientific paradox. And beyond this paradox, the question of Solar System evolution as a whole has some theological implications.
Astrophysicists have concluded that when the sun was young, it was not as bright as it is now. As the sun ages it generates more and more heat, and it will eventually incinerate the Earth before it finally burns out. This is due to the changing balance of products and reactants in the nuclear fusion process that powers the sun. This nuclear evolution of the sun will drive the evolution of the solar system, unless Jesus returns…
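The brightening can be made concrete with a commonly quoted approximation attributed to Gough (1981) for the sun’s luminosity as a function of its age; the sketch below assumes that formula and a present solar age of about 4.57 Gyr. At its birth the sun shone at only roughly 70% of its present brightness, which is exactly what makes liquid water on the early Earth paradoxical.

```python
def solar_luminosity(age_gyr, age_now=4.57):
    """Gough's (1981) approximation for solar luminosity through time,
    as a fraction of today's value; age_gyr is the sun's age in Gyr."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / age_now))

# The young sun was only ~70% as bright as today, yet the geological
# record shows liquid water on the early Earth: the Faint Young Sun Paradox.
for age in (0.0, 2.3, 4.57):
    print(f"age {age:4.2f} Gyr: L = {solar_luminosity(age):.2f} L_sun")
```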
I’ve written a genetic algorithm, posted below, that implements single-step selection.
The relevant line for single-step selection is line 38:
fitness == 28 ? fitness : 0
Have I successfully coded a genetic algorithm?
Is this a version of the Weasel program?
Does it demonstrate the power of cumulative selection?
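To help frame those questions, here is a Python sketch (not the OP’s actual code, which uses a different language) of a Weasel-style program in which a scoring rule analogous to the OP’s line 38, `fitness == 28 ? fitness : 0`, can be switched on. Under that all-or-nothing rule every imperfect string scores zero, so selection has no gradient to climb and nothing accumulates.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"  # 28 characters
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    """Number of characters matching the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def score(candidate, single_step=False):
    # Single-step scoring, analogous to: fitness == 28 ? fitness : 0
    # Partial matches score zero, so selection cannot accumulate them.
    f = fitness(candidate)
    return f if (not single_step or f == len(TARGET)) else 0

def evolve(single_step=False, pop_size=100, mut_rate=0.05, max_gens=2000):
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for gen in range(max_gens):
        offspring = [
            "".join(random.choice(ALPHABET) if random.random() < mut_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        best = max(offspring, key=lambda s: score(s, single_step))
        if score(best, single_step) >= score(parent, single_step):
            parent = best
        if parent == TARGET:
            return gen  # generations until the target first appears
    return None  # target never found
```

With cumulative scoring the target string is typically reached within a few hundred generations; with single-step scoring the run is effectively a random search over 27^28 strings and never finishes.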
The big numbers game: Fine Tuning
Creationist fine-tuning claims are incomprehensible. I know certain physicists argue for cosmological fine tuning (I’m not sold, for reasons I may discuss later). But the claims we’ve seen lately are far removed from any serious claim of fine tuning, and reflect a lack of comprehension of physics, or of basic math, by those who use them.
Here’s a typical claim: “The force of gravity is determined by the gravitational constant. If this constant varied by just one part in 10^60, none of us would exist.” (1). You will find this claim repeated in the UD comments section, and even in their own glossary of definitions (2-5). Creationists love it, because the big numbers, to them, mean impossible, therefore God. Right?
But wait. Humans have only measured the gravitational constant “big G” to parts-per-million accuracy (with variations between measurement methods of hundreds of parts per million (6)). So, using Chem 101 terminology, the statement above says that we know a constant to about 6 significant figures, but that if its value differed at the 54th digit past the frontier of human measurement, we’d be hosed. Where did this precise value come from? What math-a-magics conjures precision from thin air?
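The significant-figures arithmetic is easy to check; the sketch below uses the figures as stated in the post (parts-per-million accuracy, one part in 10^60), not the latest CODATA numbers.

```python
import math

# Figures as stated in the post: G measured to roughly parts-per-million
# accuracy, versus a claimed sensitivity of one part in 10^60.
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
measured_rel_uncertainty = 1e-6  # ~ parts per million
claimed_rel_tuning = 1e-60       # "one part in 10^60"

known_digits = round(-math.log10(measured_rel_uncertainty))  # about 6
claimed_digits = round(-math.log10(claimed_rel_tuning))      # 60

gap = claimed_digits - known_digits
print(f"G is known to about {known_digits} significant digits;")
print(f"the claim asserts sensitivity {gap} digits beyond anything measured.")
```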