Imagine a coin-tossing game. On each turn, players toss a fair coin 500 times. As they do so, they record all runs of heads, so that if they toss H T T H H H T H T T H H H H T T T, they will record: 1, 3, 1, 4, representing the number of heads in each run.
At the end of each round, each player computes the product of their runs-of-heads. The person with the highest product wins.
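Concretely, the score can be computed with a small MATLAB function like this (a minimal sketch; the function name and the 1 = Heads, 0 = Tails encoding are just my choices):

    % Score a series of coin tosses as the product of the lengths of all
    % runs of heads. tosses is a vector of 1s (Heads) and 0s (Tails).
    function p = runsProduct(tosses)
        d = diff([0, tosses(:)', 0]);               % pad with a tail at each end
        runLengths = find(d == -1) - find(d == 1);  % one length per run of heads
        p = prod(runLengths);                       % prod([]) is 1 if no heads at all
    end

For the example series above, runsProduct([1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 0 0]) returns 1 × 3 × 1 × 4 = 12.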
In addition, there is a House jackpot. Any person whose product exceeds 10^60 wins the House jackpot.
There are 2^500 possible series of 500 coin-tosses. I’m not sure exactly how many of that vast number would give a product exceeding 10^60, but if some bright mathematician can work it out for me, we can work out whether a series whose product exceeds 10^60 has CSI. My ballpark estimate says it has.
That means, clearly, that if we randomly generate many series of 500 coin-tosses, it is exceedingly unlikely, in the history of the universe, that we will get a product that exceeds 10^60.
However, starting with a randomly generated population of, say, 100 series, I propose to subject them to random point mutations and natural selection: each generation I will cull the 50 series with the lowest products, produce “offspring” with random point mutations from each of the survivors, and repeat this over many generations.
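In outline, the scheme looks something like this (a bare-bones sketch with point mutation as the only operator, using the runsProduct function above; the full script will have more to it):

    % Sketch of the selection scheme: score 100 series, cull the 50 with the
    % lowest products, and refill the population with point-mutated offspring.
    nPop = 100; nTosses = 500; nGen = 1000; pFlip = 0.01;
    pop = randi([0 1], nPop, nTosses);          % random initial population
    for g = 1:nGen
        scores = zeros(nPop, 1);
        for i = 1:nPop
            scores(i) = runsProduct(pop(i, :)); % product of runs-of-heads
        end
        [~, order] = sort(scores, 'descend');
        survivors = pop(order(1:nPop/2), :);    % cull the 50 lowest products
        flips = rand(size(survivors)) < pFlip;  % point mutation at each locus
        pop = [survivors; double(xor(survivors, flips))];
    end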
I’ve already reliably got to products exceeding 10^58, but it’s possible that I may have got stuck in a local maximum.
However, before I go further: would an ID proponent like to tell me whether, if I succeed in hitting the jackpot, I have satisfactorily refuted Dembski’s case? And would a mathematician like to check the jackpot?
I’ve done it in MATLAB, and will post the script below. Sorry I don’t speak anything more geek-friendly than MATLAB (well, a little Java, but MATLAB is way easier for this).
Mike wrote:

The second genetic algorithm I ever wrote, sometime back in the early 1980s, had two different selective environments to which the population was exposed in alternate generations. Over some fairly small number of generations the population split into two subpops, one subpop fairly well adapted to environment A and pretty badly adapted to environment B (though not badly adapted enough to go extinct in one generation), the other subpop the reverse. Speciation in action! 🙂
I agree – “latching” is a non-issue, a side-track. Quite a while ago, I calculated the probability of observing that the algorithm doesn’t “latch”…
One of the nice things I learned from The Beak of the Finch is the way that populations colonise however many niches you provide, so the distributions of the relevant dimensions track the changes in available niches.
Umm, excuse me but I have been trying to tell you that for quite some time-
And oleg, I am well aware of pages 193 and 194 and all the pages in the book. However, unlike you, I am able to read them in context and don’t just consider them in isolation.
Ya see, before page 193 comes page 149- section 3.8- “The Origin of Complex Specified Information”. But even before that is:
And even more support for my claims-
That evolution is a blind, purposeless process is difficult to grasp, yet it is a fundamental part of understanding biology.
Life is good….
Has anyone ever confirmed any mutation rate/fixation rate equations on populations in the wild?
We know that “beneficial” is relative- that is, what is beneficial for one generation may not be beneficial for the next- and environments change.
Latching may be a non-issue, but if Dembski says a latching algorithm is ten times as effective as a non-latching one, his statement is worth a mention.
Good. I am glad you finally realize that. In your earlier posts you kept insisting that a scientist should be able to construct a protein from knowing nothing but its function.
That’s an empty claim as long as you have no idea whether k is or is not a large value, in other words: until you can tell us what k is. Can you?
Really? Again, show your math: Elizabeth, with the help of many others on this forum, has provided the k and n, so you just need to look these up. Now please give us the c and the m, so we can compare and evaluate your claim.
The rest of your post has already been addressed by Liz and others.
I see questions raised by patrick and madbat as well as commentary by Liz. The comment in which Liz considers the specified Nebulin protein function described as “regulates thin filament length in mice” to be analogous to “God did it” is especially astonishing. This may be evidence of a disconnect far more severe than I considered. I’m going to try to connect with these all at once, so I’ll catch up within the day.
What I meant, junkdnaforlife, before you spend too much time considering a response, is that “short” does not equal “compressed”. It may simply mean “lacks detail”.
Furthermore: the equivalent of “regulates thin filament length in mice” in mine is unspecified. It could be “maximises energy consumption”.
Make sure you are comparing like with like.
Joe G,
“And oleg, I am well aware of pages 193 and 194 and all the pages in the book. However, unlike you, I am able to read them in context and don’t just consider them in isolation.
Ya see, before page 193 comes page 149- section 3.8- “The Origin of Complex Specified Information”. But even before that is:”
So, it’s the origins claim again. Can you explain, demonstrate, and show positive, testable evidence of/for the origin of CSI? Can Dembski?
Don’t you claim that CSI and algorithms were front loaded into every ‘kind’ of organism at the moment of creation by “the intelligent designer”? But don’t you claim that ID is also OK with side loading/intervention in organisms (i.e. new, amended, or revised CSI and algorithms) by “the intelligent designer”? And don’t you claim that the universe itself was front loaded and/or was/is side loaded with CSI, natural laws, and algorithms by “the intelligent designer”? Can you produce positive, testable evidence and a testable hypothesis for any of those claims?
Since Dembski and you rely on what he says: “Algorithms and natural laws are in principle incapable of explaining the origin of CSI”, and since you (and apparently Dembski) claim that origins are what ID/CSI are all about, and since, according to you and other IDists, the origins originated in/from “the intelligent designer”, I’ll remind you that you have the burden of explaining, demonstrating, and producing positive, testable evidence of “the designer” and the ultimate origin of “the designer”, and everything else.
Well, I got the jackpot of jackpots 🙂
Here is my winning sequence of runs-of-heads:
4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
And here is the lineage of the winner (white=Heads, Black=Tails, generations run from top to bottom):
Product is 1.6069e+60
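For anyone who wants to check the jackpot arithmetic: with runs of heads separated by single tails, a run of k heads costs k+1 tosses and contributes a factor of k, so the per-toss payoff is k^(1/(k+1)), which is maximised at k = 4:

    4^(1/5) ≈ 1.3195 > 3^(1/4) ≈ 1.3161 > 5^(1/6) ≈ 1.3077

One hundred runs of 4 heads, with 99 separating tails, fit into 499 of the 500 tosses, giving 4^100 = 2^200 ≈ 1.6069 × 10^60, which is exactly the winning product above and comfortably over the 10^60 jackpot threshold.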
It is interesting that after about 250 generations the sequences tend to become more “robust”, settling into runs of 4 heads. Once the pattern becomes established, it is less likely that the next generations will deviate much from these runs of 4 heads.
This phenomenon seems to occur even without any partial or total “latching” in these programs. If one includes some low probability of “latching” in order to simulate the influence of being “deeper in a potential well,” the shape of the “decay” curve toward the target is affected. Usually one can find a “latching coefficient” that produces a nice exponential decay curve that plots as a decreasing straight line on a log versus linear plot, the curve representing the number of members of the population that have NOT totally adapted in a given generation.
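In toy form, that experiment might look like this (my own minimal reading of it, with q the per-member, per-generation probability of “latching”, i.e. totally adapting):

    % Toy model of a "latching coefficient": each generation, every
    % not-yet-adapted member latches with small probability q, so the
    % count of unadapted members decays exponentially and plots as a
    % straight line on a log-versus-linear (semilog) plot.
    q = 0.05; nPop = 1000; nGen = 100;
    adapted = false(nPop, 1);
    unadaptedCount = zeros(nGen, 1);
    for g = 1:nGen
        adapted = adapted | (rand(nPop, 1) < q);  % low-probability latching
        unadaptedCount(g) = sum(~adapted);
    end
    semilogy(1:nGen, unadaptedCount)              % slope approx log10(1-q) per generation
    xlabel('generation'); ylabel('members not yet fully adapted')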
You should celebrate at the Restaurant at the End of the Universe.
Well, what it’s taught me, which was something I hadn’t explicitly appreciated before, is how important the “connectivity” between possible combinations is, in terms of mutation types.
It’s not something I’ve seen referred to (or not in language I’ve registered!). I’m trying to make a simplified matrix showing the connectivity patterns conferred by different mutation types.
For my winning run, parents “gave birth” to offspring with one of four mutation types: (1) none, i.e. an identical copy; (2) point mutation: a .01 probability, at each locus, of a flip (Heads to Tails or Tails to Heads); (3) deletion and insertion: a randomly picked string of randomly selected length (drawn from a Poisson distribution) removed and re-inserted elsewhere; (4) duplication: a randomly picked string of randomly selected length (drawn from a Poisson distribution) duplicated in place of some part of the existing string (possibly overlapping with the duplicated portion).
On that last run I increased the mean of my Poisson distribution to 50, which meant that substantial portions could be duplicated. If these were bad portions, the offspring will fail. Even if quite a good portion gets duplicated, it may still fail if the duplication creates short runs.
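A rough MATLAB sketch of the three non-trivial operators (the function name, the clamping of the Poisson draws, and other details are my own choices; poissrnd needs the Statistics Toolbox):

    % Apply one mutation type to a parent (a row vector of 0s and 1s).
    function child = mutate(parent, kind, pFlip, lambda)
        n = numel(parent);
        switch kind
            case 'point'        % flip each locus with probability pFlip
                child = double(xor(parent, rand(1, n) < pFlip));
            case 'move'         % delete a random substring, re-insert it elsewhere
                len = min(poissrnd(lambda), n - 1);
                s = randi(n - len + 1);
                piece = parent(s:s+len-1);
                rest = parent([1:s-1, s+len:n]);
                t = randi(numel(rest) + 1);
                child = [rest(1:t-1), piece, rest(t:end)];
            case 'duplicate'    % copy a random substring over another stretch
                len = min(poissrnd(lambda), n);
                s = randi(n - len + 1);
                t = randi(n - len + 1);         % target may overlap the source
                child = parent;
                child(t:t+len-1) = parent(s:s+len-1);
            otherwise           % 'none': identical copy
                child = parent;
        end
    end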
But what it does is to increase connectivity between fitness peaks at the upper end of fitness.
And while there is no reason to think that the fitness peaks themselves change when plotted onto a phase space of given connectivity, clearly the more connected the phase space is by one-step mutations, the closer fitness peaks are likely to be, thus smoothing the landscape.
I started appreciating connectivity when I tried making words. It’s a non-issue if you have a target, but if you are assigning fitness to substrings then you have the IC problem to overcome. How do substrings get connected to form words?
It depends entirely on the characteristics of the functional space. If sequence space is not connectable, then Behe is probably right. That’s why the work of Thornton is so important in this argument.
Yes, exactly. Behe still makes the best ID argument. I’m sure some things are unevolvable. The problem is: how do you tell from looking at a thing, whether it could have evolved or not?
Taking away bits and seeing if it still works is clearly fallacious. We know IC structures can evolve, and that things can also evolve by “IC pathways” (that’s what AVIDA tells us).
That doesn’t mean that everything is evolvable, it’s just that you can’t, post hoc, say that it can’t.
Which means, of course, that you also can’t say for sure that it could. But that’s where the asymmetry comes in – biologists don’t claim that “there was no ID”, but IDists claim that there must have been. (p>UPB).
Before leaving this topic I want to say just one more thing about my algorithm. The fitness function is quite capable of “rewarding” two or three substrings that can never fit together to form one word. So it is capable of getting into an IC hole from which it cannot climb out.
Sometimes it does and sometimes it doesn’t.
Again, AVIDA has nothing to do with biology- nothing at all.
I disagree.
We don’t go around trying to confirm Pythagoras’s Theorem in the wild. Therefore it is wrong?
We know that exact geometric triangles don’t exist in nature — every actual triangle is a little bit nontriangular. So therefore Pythagoras’s and Euclid’s results are useless?
Like geometry, models of theoretical population genetics give us insight, and to deal with the complexities of nature we make them more complicated, in ways that can still be analyzed mathematically. ‘Nuf said.
Joe Felsenstein,
[JoeG] LOL. Your position can’t even account for the existence of triangles! [/JoeG]
[JoeG] LOL. Triangles are intelligently designed, anyway! [/JoeG]
Disagree all you want- it doesn’t change the fact that AVIDA does not represent real-world biology- for one, the “organisms” are far too simple, and for another, the rewards are far too generous.
LoL! We can prove mathematics in the wild- we can prove geometry in the wild. What you cannot do is model biological evolution- and guess what? It appears we are having a tough time trying to model climate- that’s right, the models that say we are doomed if we do not stop CO2 from rising have been shown to be nonsense- they were applied to see if they could retrodict the past climate, and they failed.
So yes you can try to model biological evolution but you have no way of knowing if your models are correct/ reflect reality. ’nuff said, indeed.
One more time-
Until YOU step up and produce positive, testable evidence and a testable hypothesis for your position- whatever that is- there is no use discussing science with you.
So please step up and actually say something so I know what you will and do accept as “science”.
I see I have a little time today to make a quick post.
A finite state machine is a very important concept with respect to information processors and information processing systems.
A complex system must keep track of the possible state trajectories of all states that are mapped (implicitly or explicitly) within it: its own state, a next state, a previous state, etc., including the error-checking and correction that needs to take place at any given state depending on subsequent input.
For a complex system to function, all states depend on each other, and so we must have at least an error-detection and correction mechanism ready at each state for expected or unexpected input.
Basic stuff really, when we discuss it, but as anyone who’s written more than 1000 lines of code knows – things slowly become intractable, OOP or not. The intractability comes with the greater amount of information to control, which only adds to the problem.
The problem with evolutionary algorithms like weasel, relative to reality, is that at each mutable instance there is no state-tracking involved- no way of knowing what state has been affected, or what magnitude the random change had on the system as a whole, until the outcome. You could say “well, it survived or it didn’t”, but that is not an explanation. If it survived, it is because implicit error-detection and error-correction mechanisms let it survive the volatility.
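To make the state-tracking point concrete, here is a toy MATLAB state machine with an explicit error state (entirely illustrative; it is not a model of anything biological):

    % Minimal finite state machine: accepts strings of the form a...ab...b
    % (one or more 'a's followed by one or more 'b's). Every state checks
    % its input and routes anything unexpected to an error state, which is
    % the per-state error detection described above.
    function ok = acceptsAB(str)
        state = 'start';
        for c = str
            switch state
                case 'start'
                    if c == 'a', state = 'inA'; else, state = 'error'; end
                case 'inA'
                    if c == 'a', state = 'inA';
                    elseif c == 'b', state = 'inB';
                    else, state = 'error'; end
                case 'inB'
                    if c == 'b', state = 'inB'; else, state = 'error'; end
                case 'error'
                    break;              % error detected: stop processing
            end
        end
        ok = strcmp(state, 'inB');      % accept only if we end in inB
    end

acceptsAB('aaabb') returns true; acceptsAB('aba') hits the error state and returns false.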
That is all for now.