Over in the ID Quiz thread at Uncommon Descent, Mark Frank asks:
“Here’s a quiz on ID for you ID proponents:
On page 21 of “Specification: The Pattern That Signifies Intelligence” William Dembski defines the context dependent specified complexity of T given H
as
–log₂[ M•N•φ_S(T)•P(T|H) ]
Consider the context of the bacterial flagellum.
1. What is T?
2. What is the function φ_S?
3. How is φ_S(T) estimated?
4. What is H?
5. How is P(T|H) estimated?
6. M•N•φ_S(T)•P(T|H) is meant to be a probability. Under what conditions might the answer exceed 1?”
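(A quick gloss on the symbols, from my own reading of Dembski’s paper, since the later comments lean on them:
T — the pattern or event in question, here the bacterial flagellum;
H — the relevant chance hypothesis, which for Dembski must take in Darwinian and other material mechanisms;
φ_S(T) — the number of patterns that are at least as easy for a semiotic agent S to describe as T;
P(T|H) — the probability of T arising under H;
M•N — the “replicational resources”: the number of semiotic agents times their opportunities to observe such an event.)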
This seems a fair question, asking IDists to use their own proposed methodology to detect design, empirically.
But instead of heartily embracing this opportunity to show the power of ID, he is set upon.
Joe moans that
“Really Mark? Then tell us how to determine the probability wrt unguided evolution. If you know the answers to your questions you should be able to do that.”
Sorry Joe, it’s your methodology, do your own work, if you can.
KairosFocus has a lengthy moan, arguing from large numbers and incredulity, finishing with:
“We cannot stop such from those tactics but we can expose them and red ring fence off those who insist on such tactics.”
STOP. RIGHT. THERE.
If you can’t employ your own design detection methodology it is no one’s fault but your own.
Either the methodology is unusable,
OR none of you are smart enough to use it,
OR you’re all secretly supporters of Darwinian evolution and are making ID look as ridiculous as possible.
All Mark Frank has asked is for you to employ your own methodology. You can’t. ID is intellectually bankrupt, and only addressing the required calculations will change that.
[Edit: symbols latexed by Lizzie]
No results found for site:www.uncommondescent.com “explanatory filter example”.
It’s pretty clear that they just can’t use their own methodology, because they don’t know how. Hence the “no, you first!” retort.
It’s actually quite remarkable that “no, you first!” is their response to being taken seriously. Most of the time they complain that they’re not being taken seriously, but here’s an effort to take them seriously, and the response isn’t, “wow, I’m glad you asked that! Here’s how the method works: _____” — which is the response of an intellectually serious scientist — but “no, we don’t have to! you first!”
They just can’t stand the fact that evolutionary theory has paid its way, and that if they want to propose an alternative theory, they have to show that it works.
Until they can do that — and I’m surely not holding my breath — they’re not going to change our perception of them as disgruntled culture warriors doing cargo-cult science.
There is already at least one example available on the web: God and the Explanatory Filter.
It’s pitiful, really. The bacterial flagellum is the premier icon of Intelligent Design. It’s on the masthead at UD and on the cover of Dembski’s books. It figured prominently in the Dover trial. Yet not one of those feckless ID boosters at UD (or anywhere else) can show — even using Dembski’s own methodology, fercrissakes — that the bacterial flagellum was designed.
And they wonder why ID isn’t taken seriously by the scientific community.
nullasalus responds to Mark’s question:
The “professionals” can’t answer it either. To my knowledge, no one but Dembski has even made an attempt, and his was marred by an egregious mistake: the assumption that the flagellum is a “discrete combinatorial object”.
Of course Dawkins and Kopplin cannot answer that question. No one can. That’s the point!
Dembski’s methodology is useless to ID proponents, it’s useless to ID critics (except for illustrating the vacuity of ID), and it’s useless to Dembski himself, which is probably why he has abandoned it in favor of arguments involving searches and “active information”.
I have a question for ID-watchers.
Is VJ Torley’s assessment correct? Has ID thrown the formal notion of CSI under a bus?
Here:
“You are assuming that ID is tied to Professor Dembski’s definition of specified complexity in the paper you quote.”
Is ID now tied to some other functional definition of specified complexity?
“But the definition of information used by Dembski and Marks in their more recent paper, Life’s Conservation Law, had nothing to do with specified complexity: it was simply anything that improves on a blind search.”
If specified complexity is not part of ID, why does VJ’s post ask about the flagellum? How would they assess its design or lack thereof?
And this surely can’t be true! Anything that improves on a blind search? If a beach ball is searching for the lowest potential energy on some sand dunes, are the shape of the landscape, gravity, and wind all intelligent agents?
If this is a correct assessment of the state of ID, I think we’re very close to “intelligent evolution,” where the intelligence is provided by the environment, which can by no means be demonstrated as requiring supernatural intervention (or to be lacking it).
Question 9 from the quiz gets close to this: “The designer arranges the environment and the organisms involved in the process in such a way so as to yield a particular, specified and intended result, with no intervention on the designer’s part aside from initially setting up the situation, organisms and environment.”
1) Key words: Specified and Intended result. Prove specification and intent.
2) Isn’t this a form of theistic evolution?
This begs the question of why you need to improve on a blind search.
I like my Braille analogy. What is impossible about exhaustively searching adjacent space? Lenski seems to think it can be done. He suggests it was done in his experiment.
Design ‘theory’ gets its veneer of plausibility by following a strategy comparable to Descartes’ argument for mind-body dualism.
Descartes’s argument hinges on assuming that everything that makes a human body what it is can be fully specified in terms of mathematical physics, and that mathematical physics tells us the whole truth about nature. So, everything that cannot be specified in terms of mathematical physics — paradigmatically, intentionality and consciousness — must arise from something that plays no role in a complete specification of nature. This is how we get the argument that mind is not a proper part of nature, but something added to something that is a proper part of nature — the body — in order to produce a living human being.
More generally: the more austere and stripped-down the conception of nature, the greater the need to posit something other than mere nature in order to compensate for what was removed from the conception of nature in order to re-produce the richness of nature-as-experienced.
The Cartesian dualist posits mind (res cogitans) as a different kind of “substance” in order to get back to the richness of psychological phenomena once nature has been stripped down to only those properties that can be fully specified using 17th-century mathematical physics.
Likewise, the design theorist posits “information” as a different kind of entity in order to get back to the richness of biological phenomena once nature has been stripped down to only those properties that can be fully specified using 20th-century mathematical physics.
The rather remarkable result of the design theorist’s conception of “naturalism” is that life is unnatural, because she first identifies the natural with the physical, and then, seeing no way of getting from physics to biology, introduces something non-natural — a transcendent Designer — in order to get from matter to life.
Whereas it has been my principal contention in these discussions that if one ends up with the conclusion that life is unnatural, then something has gone very badly wrong in one’s conception of nature!
Add to that, the need for a deity separate from observable existence, and you have my personal belief system.
Did someone say reduction only works one way?
Well, yes, but to be meaningful it has to be based upon what has been shown to actually be designed, not to simply claim that evolutionary expectations are actually design expectations.
ID, insofar as it is not simply big tent creationism, is little more than trying to define biologic complexity, with all of its evidence for evolutionary causation, as having been designed. And no, real science is not done via redefinition.
Glen Davidson
Joe still maintains we should do his work for him. We of course won’t, so ID in all its parasitic glory is again a non-starter. No inference, no design.
Nullasalus also says:
Probably not. However, what they almost certainly could do is explain the basic Darwinian principle of heritable variation in reproductive success.
And Mark’s question is the ID equivalent – it’s the equation that lies at the heart of the claim that Intelligence can be inferred from a Pattern. Indeed the title of Dembski’s paper that presents those equations is: “Specification: The Pattern that Signifies Intelligence”.
And yet no ID supporter I have encountered yet can answer Mark’s questions. Dembski himself has largely disowned it, AFAICT.
And yet ID critics regularly include those “who properly understand and explain highly technical, math-heavy facets of ID”, see its flaws, and challenge ID supporters to come up with counter-rebuttals.
But rarely is anything forthcoming. Hence the “eleP(T|H)ant in the room”.
And when something is forthcoming, it usually strongly resembles a disavowal.
So I suggest that first of all ID proponents actually find out what their leaders are actually saying, and then decide whether they still support it.
To be fair to Joe, he did once try to measure CSI. I present for your unlimited enjoyment his efforts: http://intelligentreasoning.blogspot.com/2009/03/measuring-information-specified.html
“M•N•φ_S(T)•P(T|H) is meant to be a probability. Under what conditions might the answer exceed 1?”
I think that this quantity is not meant to be a probability. Dembski’s argument is that this indicates that the probability of getting an adaptation as good as, or better than, the one we see is large enough that the expected number of times this will occur is greater than 1 if we have a huge number of trials equal to the number of all events that could have happened in the universe since the Big Bang.
So he computes an expected number of events if you have this many trials, and that is what this is. It is not a probability.
Anyway, it is now clear, and maybe always should have been, that P(T|H) is what no one can compute. It is the probability of getting this good an adaptation or better given all the usual evolutionary mechanisms (natural selection included). No one can do that.
As for CSI, Dembski’s computation of it turns out to be useless. You want to establish that the adaptation is so good that natural evolutionary processes can’t bring it about. If they can’t, that is evidence for design. Once you have done the computation of P(T|H), which no one can do, then after that you are to declare that CSI is present. But when you go to do that, by then you have already drawn your conclusion! Adding the conclusion about the presence of CSI is redundant.
For all that, see of course Elizabeth’s posts on the “EleP(T|H)ant”.
As for Dembski’s more recent arguments, they are arguing only that there is a need for a Designer to set up the universe (or life, or something) so that fitness surfaces to be smooth enough for evolution to work. They don’t at all establish that any Designer intervenes once life has gotten started.
Sensing the weakness of Dembski’s arguments, his erstwhile followers have been slowly backing away from them, though not saying this too loudly.
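To make the redundancy Joe Felsenstein describes concrete, here is the algebra as I read the paper (the threshold of more than 1 bit of specified complexity is, as I understand it, Dembski’s; the rearrangement is elementary):

–log₂[ M•N•φ_S(T)•P(T|H) ] > 1   if and only if   M•N•φ_S(T)•P(T|H) < 1/2

So the verdict “CSI is present” adds nothing to the statement that the product you have already computed — a product that requires P(T|H) — comes out below one half. The design conclusion was drawn the moment P(T|H) was (somehow) shown to be tiny.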
Is there any instance in which Dr. Dembski (or another IDist) has tested his methods against a known subject? It seems to me that if he believed that his methods were truly effective and useful, it would be quite compelling if he were to successfully test them in a blinded trial (one in which he did not know the origins of the subject, but an arbiter did). Has he ever done so? If not, doesn’t that suggest that Dr. Dembski believes his methods don’t work?
Pro Hac Vice,
No and yes. 🙂
Joe Felsenstein,
I think Dembski intended M•N•φ_S(T)•P(T|H) to be a probability. For example on page 20 he writes:
As it turns out, the probability of some archer shooting some arrows hitting some target is bounded above by M•N•φ_S(T)•P(T|H)
I don’t see how this could be true unless M•N•φ_S(T)•P(T|H) is intended to represent a probability. I think he just made a careless error in calculating probabilities. Fair enough given the length of the paper and the lack of peer review.
Never mind the “semiotic agent” – I’d be happy with a calculation of P(T|H).
They really don’t want to touch that eleP(T|H)ant.
Winston Ewert pretty well disowned it when he responded at EnV.
Mostly, the IDers who understand it disown it. Some add some extra letters, although all they do is adjust the T, rather than define the H, and the ones who don’t understand it keep invoking it as being the thing that demonstrates the truth of ID.
The eleP(T|H)ant has no clothes. As soon as you try to calculate it, you find you have to enter as an assumption the very conclusion you are trying to draw.
Also, I’ll gladly explain to them what their own math means.
Just say the word. Mocking ID critics for not understanding ID when IDers won’t discuss the absolutely key concept on which a major ID theorist built his argument, despite the fact that ID critics are able and willing to discuss it in any degree of detail requested is, as kairosfocus would say, “telling”.
Also *harrumph* I see Nullasalus has told Joe not to quote from TSZ.
Because Nullasalus says we have “swampers” here.
I suggest that it may also be because we have the eleP(T|H)ant here.
And that rather than turning his nose up at our tagline, he reads it.
I’m more than willing for an IDer to show that my understanding of the eleP(T|H)ant is mistaken. But no IDer even attempts the task.
This CSI bullshit was settled beyond all rational doubt back with the mathgrrl debacle. The term is meaningless, they can’t calculate it for any real-world example.
petrushka,
Indeed. If one has impatient customers, a finite lifespan, MI-hungry managers, or is paying for number-crunching by mips, one might be interested in efficiency. But a planet on which no entity gives a shit about speed, efficiency, adaptation, survival or extinction, prodding the genetic locality with a stick would serve just fine – provided that locality is not overwhelmingly detrimental. [eta: and if it is, you just stay where you are].
I think Joe F is correct – M•N•φ_S(T)•P(T|H) is intended to be the expected number of hits, calculated as the per-trial probability multiplied by the number of trials, M•N. On page 20 Dembski is making the point that the probability of one-or-more hits cannot exceed the expected number of hits, hence the phrase “bounded above by”.
IOW Dembski is calculating n•p, ‘cos it’s easier, and then saying 1 – (1-p)^n (the probability he cares about) is less than n•p, with some fancy math phrasing to cover the cases of n=1 or p = 0.
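For anyone who wants the missing step in that reading, here is a one-line justification of the bound (standard algebra, nothing from the paper): by Bernoulli’s inequality, (1-p)^n ≥ 1 – n•p for 0 ≤ p ≤ 1 and integer n ≥ 1, so

1 – (1-p)^n ≤ 1 – (1 – n•p) = n•p

The bound is only informative when n•p is below 1; once n•p exceeds 1 it remains an upper bound on the probability, but a vacuous one — which is exactly what Mark’s question 6 is probing.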
I guess my question is answered by Lizzie. Dembski has switched from arguing that evolution is impossible to arguing that evolution would be impossible unless chemistry and physics were fine-tuned.
Somewhat buried under the rug is the fact that a drunkard’s walk through a smooth landscape doesn’t have a direction or destination. No Chardinian Omega Point.
Not good news either for friends of SETI.
DNA_Jock,
Possibly – although
DNA_Jock,
Maybe I am being unfair on Dembski. All these years I thought he was just making a careless error, but I see how it could be true. I would have expected a bit of explanation as to why he used n*p rather than 1-(1-p)^n, as n*p can be much larger than 1-(1-p)^n for large n.
There is also the blatant flaw in the Seth Lloyd number part.
But that really ceases to be relevant when the other flaws are taken into account.
But it’s one that raises its head whenever Denyse lays into “multiverses”. It’s as though they think that the UPB is such a killer argument that the only retreat for materialism is into multiverses.
Whereas CSI is dead on arrival anyway.
But the UPB argument is silly anyway. Most scientists are happy with five sigma, rather than 23 sigma or whatever the UPB stands for. And the UPB doesn’t stand for numbers of trials anyway, so it’s irrelevant (and far too large).
But it’s also too small because the Seth Lloyd number was calculated on the size of the observable universe, and you’d need some very special theory to argue that we are at the dead centre of the whole thing.
And we don’t know how many there are anyway.
So the UPB isn’t anything at all sensible any way you look at it. It’s just another bit of bamboo faux gadgetry in the cargo cult.
I do think Dembski is aware of this. In fact, I think Dembski is smart enough to see where all the problems are, which might account for his new book, which doesn’t seem to be filling the faithful with optimism.
He just seems to think ID is true anyway.
Joe G seems to have noticed this thread:
Elizabeth Liddle is the eleP(T|H)ant.!
Didn’t Dembski admit a few years back that his explanatory filter didn’t work and he was abandoning it? Then when the screams from his IDiot followers became too loud (i.e. he was going to lose their business on book sales) he did the sprint backpedal and said the EF was still valid. Or am I misremembering?
What a wonderful christian that JoeG dude is. I wonder why he’s so angry. Aren’t atheists supposed to be the angry ones?
Dembski did indeed, very modestly, backpedal like a cartoon coyote heading for a cliff:
It’s interesting that Joe agrees that the specified complexity formula is “useless” because P(T|H) is not known. Ok, maybe not interesting, but it’s evidence that design proponents are capable of learning and sometimes discard ideas which have been discredited.
And Joe chimes in:
It’s the evolutionists’ fault that Dembski has invented a useless method for detecting design.
HAHAHAHAHAHAHA.
It’s bad enough that keiths drags trash from the UD bin over here, but bringing the fetid refuse of that deservedly obscure blog into a nice place like this suggests you aren’t properly housebroken.
Naughty mathematician! No biscuit!
Where’s my rolled up newspaper….
ETA: For the record, this was intended humorously. I would never hit a dog.
Joe G,
Actually, no you can’t. Otherwise you would, it’s as simple as that really.
If you could, you would, there would be many examples of usage and UD would be the centre of a new revolution in determining design from non-design.
As there is not, ergo, you cannot.
Oh, so now you’re calling him a dog…
Glen Davidson
On the Internet, no one knows.
Dembski wants to show that the probability p is so small that np is less than 1. If np is small, 1 – (1-p)^n is in fact approximated by np.
And np is in fact an upper bound on 1-(1-p)^n, though sometimes that upper bound is above 1.
So I don’t think Dembski’s use of np as an upper bound on a probability is an error. The gigantic problem — the EleP(T|H)ant In The Room — is the need to calculate P(T|H). What ID types need is a way to show that it is extremely small. Instead they declare that the observation that CSI is present proves Design, without noting that this requires that you already have shown that P(T|H) is very small. At which point adding CSI into the argument is redundant.
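A quick numerical check of that approximation, with illustrative values I have picked myself: for p = 10^-6 and n = 1000, n•p = 0.001 while 1 – (1-p)^n ≈ 0.0009995, so the two agree to within about 0.05%. For p = 0.01 and n = 1000, by contrast, n•p = 10: still a perfectly valid upper bound on the probability of one or more hits, but a useless one, since every probability is below 10.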
There is gpuccio’s argument that protein coding sequences have no intermediate fossils.
I wonder how the DI’s search for censor of the year is going.
Because the in-house competition is fierce. Or did they magnanimously exempt their side because of “conflicts-of-interest”? Or maybe they’re exempt because it’s pretty much policy for them…
Glen Davidson
Trying to keep score among the UD regulars; no swampers:
1. Y YNYY
2. Y YNYYN
3. N NNNN
4. N?NNNN
5. Y?YNYY
6. N NNNN
7. Y YYYY
8. X Y?Y?
9. N NNNN
10. N NNNN
ID in a nutshell :
(1.) It looks designed.
(2.) Therefore p is very small.
(3.) (the “advanced math” part) For a string of length L with K choices per position, p = K^(-L), and the negative log to base two of p equals 500 (see the worked example below).
(4.) Therefore it is designed.
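And the worked example, with numbers chosen purely for illustration (mine, not anyone at UD’s): for a binary string (K = 2) of length L = 500, p = 2^(-500) and –log₂ p = 500 bits on the nose; for a 116-residue protein treated as K = 20 equiprobable amino acids per position, –log₂(20^(-116)) = 116 × log₂ 20 ≈ 501 bits. Either way the magic 500 appears — provided you assume every position is independent and uniformly distributed, which is precisely the P(T|H) assumption that never gets justified.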
nullasalus:
nullasalus,
That might be plausible if you were actually able to refute our arguments. Instead, you hide behind the moderation on a heavily-censored site, objecting if anyone dares mention TSZ.
I am singularly unimpressed with your (lack of) intellectual courage.
I certainly invite Nullasalus to come here.
I would really like to know his answers to the questions in the OP, and, if he does not have any, why not.
Because they aren’t important?
Because Dembski’s argument doesn’t work?
A cursory reading of the paper shows that all parameters have been described.
see page 21:
The crux of the paper is clearly summarized on page 1.
coldcoffee,
The logical next step would be to compute the probability. 🙂
Winston Ewert admits they can’t calculate the probabilities, would like to argue from improbability anyhoo: http://www.uncommondescent.com/intelligent-design/where-do-we-get-the-probabilities/
http://uphillwriting.org/wp-content/uploads/2011/12/circular-reasoning-works-because.jpg
Dembski has calculated the probability of the flagellum in ‘No Free Lunch’:
Quoted from secondary source. I don’t have access to Dembski’s book.
10^80 is the estimate by physicists of the number of elementary physical particles in the visible universe;
10^45 is roughly the number of Planck-time intervals in one second;
10^25 is more than ten million times the age of our Milky Way galaxy in seconds. Thus the universal probability bound is 10^-150.
Note: The universal probability bound is calculated as below:
1 / (10^80 • 10^45 • 10^25) = 10^-150
One thing is always clear whenever an ID/creationist copy/pastes stuff like this; the copy/paster has no clue what it means and whether it has anything to do with reality.
Another thing is always clear whenever an ID/creationist does this; he doesn’t have a clue that the people reading it know what it means and know it is complete crap.
Perhaps Mr coldcoffee can tell us the justification for such a calculation. Why does he think molecules behave the way these calculations assume they behave?