I see that in the unending TSZ and Jerad Thread, Joe has written in response to R0bb:
Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amenable to compression.
A protein sequence is not compressible- CSI.
So please reference Dembski and I will find Meyer’s quote
To save Robb the effort. Using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification: turn to page 15 where he discusses the difference between two bit strings (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible). (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”. He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.
This conflict between Dembski’s definition of “specified” which he quite explicitly links to low Kolmogorov complexity (see pp 9-12) and others which have the reverse view appears to be a problem which most of the ID community don’t know about and the rest choose to ignore. I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict and his response was that he didn’t care much what Dembski’s view is – which at least is honest.
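The compressibility contrast Dembski draws is easy to demonstrate with an off-the-shelf compressor. A rough sketch: zlib is only a crude proxy for Kolmogorov complexity, and here (R) is approximated by random bytes rather than Dembski's actual string.

```python
import random
import zlib

# psi_R: the positive integers written in binary and concatenated --
# generated by a tiny program, hence low Kolmogorov complexity.
psi_r = "".join(format(i, "b") for i in range(1, 500)).encode()

# R: random bytes of the same length, standing in for a string with
# no description shorter than itself.
random.seed(0)
r = random.randbytes(len(psi_r))

def ratio(data: bytes) -> float:
    """Compressed size over original size; lower means more compressible."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"psi_R: {ratio(psi_r):.2f}")  # highly regular, compresses well
print(f"R:     {ratio(r):.2f}")      # near (or even above) 1.0
```

The regular string shrinks substantially; the random one does not, which is the whole of the “(ψR) is specified, (R) is not” contrast on Dembski's Kolmogorov reading.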
Joe
In fact we often experience tornadoes in the UK, about 50 per year. An extreme example: http://www.telegraph.co.uk/topics/weather/9252146/Tornado-spotted-in-Oxfordshire-as-storms-batter-southern-England.html
But I never said when this event happened. It was some time ago. I can’t say more than that (security!), but if your excuse is that “there was no tornado so this could not have happened” then so much for SETI! But it did, so will you help or will you not apply the “design detection skills” that you claim to have?
Gpuccio,
Really? So if we find a structure on Mars made of glass you’ll deny design until you know what it’s function is? Or would that “obviously” be designed? This seems circular to me. You cannot make a design inference unless you can determine the function it was designed to provide? Really?
The same could be said for any string. If you happen not to know the function then all strings look the same, right? So Hamlet is designed because you can read and understand it, but if you lack that you are stuck? Does ID not have more robust design detection mechanisms than that?
The problem is I only have enough money to do that for one of the documents. If only ID could provide a way to determine which of those documents I should study.
Again, the same problem, which document to choose?
Again, the same problem.
So you determine design by taking the blueprint and building something from it? By definition blueprints refer to designed objects. And your claim is that all proteins are designed, so if a protein is the end product then design is a given?
Is that the only possible way that ID can come to a design inference for long strings of data like this? What if I told you it was a signal from space. Would it automatically become design then? Or would we still have to examine proteins?
Sigh. Then why don’t you start there? I’ve already made it clear that the fact it was originally on paper is irrelevant; the data is what is important. And if all ID can say about this situation is “well, those sheets of paper with printing on them, they are designed they are!” then forgive me for being singularly unimpressed.
Joe,
Seems that Kariosfocus disagrees with you:
http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436715
So given that the Lederbergs showed that the mutations were not due to environmental cues (if you actually read the paper this is obvious), they were not built-in responses.
Put simply, if they were built in responses that mechanism is not working very well because the mutations happen regardless of the environment.
So even if the “response mechanism” is built in, it’s faulty because it acts regardless of the environment.
So whence come the mutations?
As KF says: “First, if there is design at work, but the pattern shown is one that exhibits the statistics of a chance based random process, i.e. a probability distribution, the filter will infer to chance contingency.”
So the pattern of mutations observed exhibits the statistics of a chance based random process and as such does not represent any “design” at all.
Except of course, your “designed to evolve” fallback of (almost) last resort.
But given that this is achieved by imperfect replication you are once again adding unneeded entities. Why don’t you discuss your point with KF, explain to him how Zachriel is wrong about this (classic) experiment.
Another day, another bizarre interpretation by Joe. He invokes ‘quorum sensing’ by bacteria as evidence that they adapt to antibiotics using a pre-specified capacity, combined with some mechanistically unclear communication method.
You can take bacteria that are killed by low levels of antibiotic, and plate them out on a concentration gradient made of strips of gel. All the bugs at or above the lethal concentration die. There is nothing in the population capable of an immediate response to this supposed ‘environmental cue’. They grow only in the antibiotic-free portion.
So you sit back and watch. A tiny offshoot ‘probes’ the gradient at the next level, and spreads laterally from this point. From this new front, another point seeds the next advance. And so on, until the bugs can cope with a gel at saturation point – you can’t physically dissolve any more antibiotic.
Where on earth does quorum sensing communication come into this? What is being communicated to or from the rest of the population by these mutants that are able to cope with the higher concentrations? And where is the adaptive capacity located in the non-mutated organisms? You can certainly tell they are mutants, by the simple expedient of sequencing them. So this is Joe’s “maybe in the cell wall” computer program that generates adaptive mutations to order. Maybe it’s just random mutation.
And … if artificial ribosomes don’t function, how come one can Google numerous papers on functional artificial ribosomes? http://www.technologyreview.com/news/412471/creating-cell-parts-from-scratch/
I look forward to the meta-commentary – “and Allan Miller chimes in with …”. It amazes me the extent to which Joe can mangle scientific concepts and receive not a word of ‘correction’ from his peers. Do they really all think he’s the science expert that he evidently does?
It’s not “wrong”. It may be superfluous, as you said effects “like NS”. We were clarifying that point. As Lenski demonstrated, drift can be important in adaptation.
Not sure we’ve seen your math. Of course, standard population genetics was worked out generations ago by Fisher et al. Do your results differ?
Oh? Why is that? Indeed, natural selection should tend to purge the extraneous over time.
Your claims nearly always are general claims about the evolution of complexity. The mammalian middle ear is an excellent example as it is familiar to most readers and combines embryological, fossil and molecular evidence, along with a good scientific detective story.
That’s funny. Of course it’s relevant. Embryological data predict the fossils. That’s hugely important from a scientific vantage. When you can make those sorts of predictions independent of evolutionary theory, then maybe you will gain some scientific currency.
Actually, your arguments seem to be about the evolution of complexity, for which we have strong evidence. Instead, you retreat into the most ancient transitions, which left no fossils. It’s a gap!!
In any case, small changes to certain genes can be shown to cause relevant changes to the mammalian middle ear.
Mallo, Formation of the Middle Ear: Recent Progress on the Developmental and Molecular Mechanisms, Developmental Biology 2001.
That recombination is important in traversing rugged landscapes is a mathematical result. Try running a few evolutionary algorithms.
Because simple point mutation algorithms will climb the nearest peak and stop. If there are billions of peaks, then you have to start with billions of initial sequences in order to have a decent chance of finding the highest peak. Recombination can largely overcome this problem.
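The stalling of point-mutation hill climbers, and recombination's escape from it, can be shown with a deterministic toy. A sketch using a “royal road” style landscape; the block scoring and the two parent strings are illustrative choices, not anything from the thread:

```python
# Toy landscape: fitness counts complete 4-bit blocks of ones.
N, BLOCK = 16, 4

def fitness(g: str) -> int:
    return sum(g[i:i + BLOCK] == "1" * BLOCK for i in range(0, N, BLOCK))

a = "1" * 8 + "0" * 8  # local peak: the two left blocks complete
b = "0" * 8 + "1" * 8  # local peak: the two right blocks complete

def neighbours(g: str):
    """All strings one point mutation away."""
    for i in range(len(g)):
        yield g[:i] + ("0" if g[i] == "1" else "1") + g[i + 1:]

# A point-mutation hill climber is stuck: no single flip improves `a`
# (breaking a complete block loses fitness; a lone 1 in an incomplete
# block gains nothing).
stuck = all(fitness(n) <= fitness(a) for n in neighbours(a))

# One-point crossover at the midpoint combines the parents' complete
# blocks and lands on the global optimum in a single step.
child = a[:8] + b[8:]
print(stuck, fitness(a), fitness(b), fitness(child))  # True 2 2 4
```

Each parent sits on a peak that mutation alone cannot leave, yet one recombination event between the two peaks reaches the global optimum immediately.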
It’s an empirical statement. Adequate proteins evolved, even without recombination.
Gpuccio at UD
My understanding is that both documents are functional in some way. I just don’t know what that function is.
Can you give me an example of just one “biological string”, what its “function” is, and how you determined that function is “the” function?
It’s a thought experiment. It’s abstract. I don’t really have an office. There was really no tornado. That was all for Joe “literal” G’s benefit.
So when ID is presented with a set of unknown strings and is asked to choose which is the more interesting with no further data we have to “toss a coin”?
As before, what is the function of HIV and what is its dFSCI?
Then what is the function of HIV?
Credible to whom? You? Let me rephrase the question. Does either of those two documents have “functional complexity”? If so, how much?
So ID can only be applied in the specific case of DNA sequences by building proteins and seeing if they are “functional”? This is quite different from the version of ID usually given.
As yet we’re not at that point. The point we’re at is “Can ID do any better than tossing a coin when determining which of these two sequences is worth investigating, given that only one can be in this example?”. The answer, so far, is no.
If you can explain to me how to “do the work” then I’ll happily do it. But so far the choice is simple – which of the two documents would *you*, given your information/design expertise, choose to examine in detail, and why?
It’s a thought experiment. I would have thought that did not need to be explained. It’s a test. Here are two documents. I’m heavily implying that ID should be able to tell us something about each of them. So far it’s all been excuses. If you don’t want to play, that’s fine, but simply saying “well ID can’t do anything of practical use but personally I’m satisfied that it explains the origin of life” is not even trying.
http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436722
Joe:
Then the question is: Was agency involvement required in the creation of either of those data sets?
So go ahead and attempt to “detect design” in either of those two documents.
Dare you!
If you can’t determine functionality without doing the chemistry, then design without evolution is impossible. There is no faster way to find the holistic optimum of all interrelated systems than by fecundity and selection.
Gpuccio
That is part of the game. One is, one is not. Or neither is. Or both are. Can’t ID tell?
No, that’s what it does. Its function is something quite different. For example, a car turns petrol into heat and gas. That is what it does. Its function is something quite different.
In that case the function of the data contained within the documents is “to see if ID can tell us anything at all about the data”.
Perhaps you should explain the concept of irony to Joe. He’s the one that says you can’t investigate the data without examining it in person.
No, I asked which of the strings we should analyse in detail. You can in fact perform whatever level of analysis you like of course. If you want to examine both, please feel free to do so.
No, once again, that’s what it does; its function appears to be quite different.
Yes, so simple that millions of hours of effort have gone into curing it and still no cure.
That’s not ID research nor anything like it. What is the link between Uniprot and ID please?
So investigate them already and stop with the excuses. If you worked at SETI you’d give up on day one, as until interesting sequences are identified it’s all just noise.
Condescend much? So design is detected by your ability to immediately understand the message? Hey, it’s written in English so it’s probably designed….
Who said anything about nucleotide sequences or functional proteins? Who said anything about what the data represents. This is all the baggage and preconceptions you are bringing to the code, it’s nothing about the code itself.
Once more, I don’t have the money to do that for both. Can ID suggest which document is more “interesting” than the other?
So pick one and investigate it already.
Yet here we are.
Yet the way KF talks it’s the simplest thing in the world with “billions of examples” generated every day. Yet when we get specific, nothing.
Joe,
Which data sets?
http://pastebin.com/jezkjckt
and
http://pastebin.com/kamEBjR3
Would you like to play a game Joe?
Joe,
Then which, if any, of those data sets had agency involvement in their creation?
Without drift, some adaptations are not even possible. However, as a first-order approximation, it makes some sense.
That would have been a good place to put the link.
Your nomenclature is poor. Darwin identified the existence of vestigial structures. Darwin would be, presumably, a darwinist. Generally, darwinists (those who think natural selection is the primary mechanism of evolution) have resisted the idea that the genome is mostly junk. However, polyploid genomes and some amoebae with genomes far larger than human genomes tend to indicate that some genomes contain a lot of redundancy.
So we have an almost unbelievable prediction from embryology, that the irreducibly complex structure of the mammalian middle ear evolved from reptilian jaw bones. Astoundingly, we find fossils of intermediate structures buried in the rocks. And we even have evidence that small changes to genes directly affect the related structures.
Your claim was that recombination was “wishful thinking”, when we know from mathematical studies that recombination is effective in rugged landscapes. You reject a plausible mechanism without evidence.
Xia & Levitt, Roles of mutation and recombination in the evolution of protein thermodynamics, Biophysics 2002.
Bittker et al., Directed evolution of protein enzymes using nonhomologous random recombination, PNAS 2004.
Sure. It’s good for the health. But it doesn’t address the point that even lacking one of the primary mechanisms of evolutionary novelty the experiment still resulted in adequate function. This is expected when exploring a rugged landscape.
That’s fine, but if you didn’t know the origin of nylonase, you would still conclude design.
Joe,
No, I and others had something to do with it.
Yes, that’s right. I made it happen. But that’s not the point. By definition all letters printed in a book or on a screen are there via some agency. But none of this speaks to the content of the data itself. If those letters were scratched on a monolith on the dark side of the moon, the fact that they were put there by “an agency” would be the least interesting thing about them. What they mean would be far more interesting. Yet it seems you would be happy to leave it at that.
Now we are getting somewhere. Yes, we know that you claim that DNA was designed. That’s of no relevance here. I’m asking about these two data sets specifically. Beyond *my* “agency involvement” of getting them to appear on the internet, was there an *agency* involved in their creation?
If you are happy to leave it at “there was an agent involved in getting them to appear on my screen from the internet” then that’s fine. I can just put you down for “Joe tells me something I already know about the data: that I, an agency, was involved in getting it onto the internet in a format he could read”.
Do what? What does “my position” have to do with what ID can tell us about those two documents? And what is a start anyway? All you’ve said so far is “if the data represents DNA sequences there isn’t any evidence that blind and undirected processes could produce either of those” – well, that’s only true if they represent DNA sequences. Do they? Will you come down on at least one side of the fence on that then? It’s not much, but it would be progress. “If” is no good to anybody – it was you that said ID looks for agency involvement. So far all you’ve done is hedge your bets and refused to stake a claim. And that’s what this game is all about. If this, if that, if the other – no good. Say something about these datasets.
Summary so far.
Gpuccio had a go, which was great. He thinks the data represents DNA and as such we need to instantiate it, see what it does and that’ll determine “design or not”. Once instantiated if there is any function at all then the original data was designed, as function is so rare in the total space that finding any function at all is a strong indicator of design.
So far this is the best idea, with at least an outcome that either indicates design or not. So it’s doable.
Joe also had a go but offered no testable proposal, unlike Gpuccio’s, which is at least feasible, so I’ll hold off on assigning him an answer just yet.
Kairosfocus also had a go, he quoted me without name in this post: http://www.uncommondescent.com/intelligent-design/id-foundations/the-tsz-and-jerad-thread-continued/#comment-436715
and says:
If you are reading, KF, would you be able to apply this test to my datasets and determine if they are inside/outside that resource limit you mention? That would be an interesting test. Other than that he’s ignoring the game. I wonder why; of all of them he seems best equipped to come to some determination. He can do it for billions of messages a day, inferring design by calculating probabilities in possibility space, but not apply his publicly stated as usable methodology to two specific documents when asked? Why not, I have to wonder.
I find it somewhat amusing that gpuccio’s method for determining functionality turns out to be chemistry and selection. He hasn’t elucidated any necessity for a designer other than to produce saltations.
The word saltation seems rather quaint, and not many people seem to know its history or what it means. Basically it’s a Behe hop: a large, improbable mutation that leaps over Behe’s Edge. The concept had all but disappeared from biology until Behe revived it.
Gpuccio’s theory is nothing more than the molecular equivalent of no transitional fossils. It seems safer to people like Behe and gpuccio because molecules don’t leave fossils, or at least not for long. The latest research indicates that all DNA degrades within a few million years, even if frozen.
Please substantiate that claim. When have we minimized the importance of recombination in traversing rugged landscapes?
That’s irrelevant with typical rugged landscapes. Randomized genomes will quickly climb local peaks.
KF’s summary is parochial because he equates the knowledge of how to build biological adaptations with already existing straws in a 1,000 LY cubical haystack. As such, he thinks Darwinism would represent a vast series of astronomically unlikely events, one after another, after another, etc. As far as he is concerned, it’s absurd.
However, I’m suggesting that this view is mistaken. Darwinism genuinely creates non-explanatory knowledge. As such, to use KF’s analogy, there was no straw already there that evolution lands on.
IOW, probability simply isn’t applicable in this case as knowledge creating processes represent a different kind of unknowability. This makes the application of probability limited to very specific cases.
Another example of the impact of this unknowability can be found in this 2011 TED talk. In fact, Darwinism becomes an even better explanation when we integrate it with our current, best, universal explanation for the growth of knowledge.
For example, dividing knowledge (useful information that tends to remain when placed in a storage medium) between explanatory and non-explanatory allows us to make significantly more progress than merely making the statement that evolution is “random, but not random”.
Non-explanatory knowledge is created when genetic variation occurs in the absence of a problem to solve. Cells cannot conceive of problems or explanatory theories. Nor could they test those variations for internal consistency, because only explanatory knowledge can be consistent or inconsistent with itself. However, these adaptations would be tested by the environment.
Genes are biological replicators. They do have the “problem” of getting copied into the next generation. But only we can conceive of this as a problem in the necessary sense. So, in the case of Darwinism, we can be far more specific: conjectured genetic variations are random in respect to any specific problem to be solved.
There is nothing in a tiger that contains explanatory theories about how different patterns of stripes (camouflage) could help them obtain more food. Nor could those cells conceive of it as such if they did. Nor would those cells have previously contained the knowledge of how to perform those adaptations.
Non-explanatory knowledge is genuinely created when conjectured genetic variations occur that influence a tiger’s stripes and some of those conjectures are refuted by natural selection – but that conjecture occurred in a way that was random to the problem of obtaining more food via different forms of camouflage.
So, when we integrate evolution with our current, best universal explanation for the growth of knowledge, Darwinism becomes an even better explanation. This includes the growth of knowledge used to improve biological organisms.
Mung,
No, not at all. Why don’t you come here and ask me yourself instead of putting words in my mouth?
I’m simply asking can ID tell us anything at all about the strings in question.
I’m not asking you to infer design, calculate CSI or anything at all like that. If you see my original post, I’m simply asking can ID influence my decision one way or the other by providing some currently unknown information about each document.
If you want to infer design, that’s fine.
If you don’t want to and then later make a design inference, that’s also fine.
But if SETI were ever to post a signal they want the world to help decode, it’ll be quite clear what’ll happen at UD with regards to it.
Nothing. At. All.
Gpuccio,
Fair enough. I did not ask you to do that. I made no claims about the sequences, nor their similarity. You attacked the problem in the way you thought best. Good on you for trying.
Fine. Great. So nothing in it between them for you. For all I know they are just two random strings. I’ve not developed a skill set like you lot at UD to even begin to work it out. So thanks for trying. I’ll put you down for “toss a coin”.
Of what? That what you propose is feasible? Of course it is. Where we differ would be on the results. You’d infer design from “function” and I would not. The simple fact is that you are wrong with your opinions about protein domains and the probability of their origin etc. You will never accept it because it forms such a central plank of your “why ID is true” belief system but nonetheless you are wrong.
If you don’t look for the evidence because you don’t believe it exists then you’ll never find it, hence providing “evidence” for your original thought.
Not as you mean it, no, given that you are wrong.
Confused on this. If the transition from A to A1 is naturally selected, then why is the probability 1:2^150? In a large population, a new beneficial mutation will reach fixation with probability of roughly 2s, where s is the selection coefficient.
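The roughly-2s figure is Haldane’s branching-process result: a new beneficial mutant’s lineage either dies out in the first few generations or becomes established, with establishment probability near 2s for small s. A quick simulation sketch (the “established” threshold and trial count are arbitrary choices; the estimate lands a little below 2s because 2s is only a first-order approximation):

```python
import math
import random

random.seed(1)
S = 0.1            # selection coefficient of the beneficial mutant
TRIALS = 5000
ESTABLISHED = 100  # a lineage this large is essentially safe from loss

def poisson(mean: float) -> int:
    """Poisson sample via Knuth's method; fine for small means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

def lineage_survives() -> bool:
    """Each carrier leaves Poisson(1 + S) offspring per generation."""
    count = 1
    while 0 < count < ESTABLISHED:
        count = sum(poisson(1 + S) for _ in range(count))
    return count > 0

est = sum(lineage_survives() for _ in range(TRIALS)) / TRIALS
print(f"simulated: {est:.3f}   Haldane's 2s: {2 * S:.3f}")
```

With s = 0.1 the exact survival probability (the root of π = 1 − e^(−(1+s)π)) is about 0.176, so the simulated value sits close to, but under, 2s = 0.2.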
Larry Moran is not a darwinist. From what we can see, Myers uses the term darwinist ironically. You might want to cite Dawkins, who really is a darwinist, and as such, is considered somewhat dated by many modern evolutionary biologists.
Some organisms have been observed to double their genomes in a generation, such as many species of flowering plants. That’s a lot of redundancy. Onions have larger genomes than humans. Not sure what information you want?
But we can see how the complex structure evolved in incremental, selectable steps. There’s no barrier.
Yes, it’s supported by studies of evolutionary algorithms and how they work on rugged landscapes. And it’s supported by various studies of protein-space.
Sure you did.
The new function wasn’t designed. It evolved.
Mung,
I’ll put you in the same category as Joe then? Strings that were on paper are designed. I thought you were capable of more. But perhaps I overestimated you.
But no, it’s not a reference to 2001. And so you are 1 out. It really is only 2000 characters.
You can move on, you can do whatever you like. You can leave that as your final answer, if that’s your desire. Fine by me, but if you ever want to update your answer do let me know.
Science is really that easy? Gah, I’ve been doing it all wrong!
Petrushka:
Only less so. ‘Fossil transitionals’ aren’t elbowed out of existence by the very process of evolution. But so-called intermediates on a path of molecular amendment, outcompeted by fitter descendant sequences or simply being the eliminated sequence in a stochastic fixation process – where do GP/Mung etc think that these ‘intermediates’ ought to have been preserved, ‘if evolution were true’? The unavoidable consequences of the theory are twisted into something inexplicable and embarrassing!
Dead DNA is gone, gone, gone. History, in biology more than anything else, is written by the victors. All we have are the descendants of survivors, mutated and filtered.
Bullshit. For my part, I never shut up about recombination. It is a very important force. And it has clearly been of great historic significance, as witness the many recurring sequences, in both sense and antisense orientations, in functionally unrelated parts of the genome. If one is lukewarm about common descent, of course, one will argue that these are all the same or similar due to common design. But ‘lateral’ within-genome duplication makes exactly the same prediction as whole-genome duplication in descent: a nested hierarchy of markers. The same techniques of phylogenetic tree-building yield the same very strong support for either:
1) Common Descent
2) Common Design by a designer to whom fooling us into thinking it’s common descent appears much more important than simply designing the damn thing without such unnecessary restriction.
Rereading, I appear to flip here from simple reshuffling of genes to duplication. It’s all recombination, of course. Just to be clear: anything that changes the physical sequence of bases on a chromosome, or swaps whole or part-chromosomes, or merges related or unrelated sequences from separate organisms, is recombination. One can trace the relationships between sequence and uncover a lot of history, because recombinational events make excellent markers, in addition to being a powerful mechanism of evolutionary ‘exploration’ in themselves. Unlike point mutations, which only have 3 options available, and a reasonable probability of returning to their start point in 2 steps, recombinational events are highly unlikely to ever occur twice, and even less likely to reverse. Their signal slowly decays, but this simply erases that particular marker, rather than invalidating the ones that can still be reliably identified.
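The nested-hierarchy prediction above can be sketched in a few lines: simulate descent on a binary tree where each branch gains unique “marker” IDs, then check that the taxa sharing any given marker form groups that are either nested or disjoint. All names and the three-markers-per-branch rate are illustrative choices, not data:

```python
import itertools

counter = itertools.count()

def evolve(depth: int, inherited: frozenset) -> dict:
    """Return {taxon: marker_set} for a binary tree of the given depth.
    Each branch gains three unique markers; descendants inherit them all."""
    markers = frozenset(inherited | {next(counter) for _ in range(3)})
    if depth == 0:
        return {f"taxon{next(counter)}": markers}
    taxa = {}
    for _ in range(2):
        taxa.update(evolve(depth - 1, markers))
    return taxa

taxa = evolve(3, frozenset())

# Group taxa by shared marker: descent predicts a laminar family,
# i.e. any two groups are either nested or disjoint.
by_marker: dict = {}
for name, markers in taxa.items():
    for m in markers:
        by_marker.setdefault(m, set()).add(name)

groups = list(by_marker.values())
nested = all(a <= b or b <= a or not (a & b)
             for a, b in itertools.combinations(groups, 2))
print(len(taxa), len(groups), nested)  # 8 45 True
```

This is exactly why duplication events make good phylogenetic markers: under descent, the set of taxa carrying a marker always corresponds to one branch of the tree, so the groups can never partially overlap.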
Joe,
But you are not inferring design at all. You are simply saying “all data on the internet is designed as data cannot get on the internet without human intervention”. So by that definition every string I might present is designed. If I write down how many birds fly over in a day, or the frequency of radioactive decay detection according to you that data is “designed” simply because it was written down. ID is not very useful is it? All you seem to do is walk around pointing at things saying “yep, designed”.
So you proclaiming “victory” is somewhat premature. You don’t even have to examine the string itself before saying “design”. What good is that?
You think that ribosomes have a non-physical component but can’t prove it.
You contradict yourself. You’ve established design in both my documents (they are on the internet!) but have failed to look for “meaning”. So given that your detection of design was in fact trivial (it was on paper = design), do you want to have a go at the meaning of the documents instead?
Then why don’t you prepare for that real world case by doing what you’d do there on my documents? Get a bit of practice in?
It’s amazing how many excuses you lot come up with to avoid doing the thing that you claim not only can be done but is done day after day.
Let’s say an archaeologist found two tablets with those strings on them. Would they just go “yep, designed” and move on? No. But that’s what you do.
Joe,
This is how UD defines what ID is: http://www.uncommondescent.com/id-defined/
So what I’m asking, in essence, is that you test or evaluate my documents in the same manner as scientists daily test for design in other sciences. But it seems that nobody is able to recognize patterns arranged by an intelligent cause for a purpose, if those documents indeed contain such a pattern. Just knowing that one did and one did not, for example, would essentially solve my problem, but despite this being the self-proclaimed reason for ID’s existence, nobody can actually do it!
So it seems that this is just an empty claim, when faced with actually doing it ID folds.
The reptilian middle ear is much less complex than the mammalian middle ear.
Isn’t a watch just complex stuff that’s been shifted around? In any other context an ID advocate would be claiming that the arrangement of parts to create a new function would be proof of ID.
Can’t seem to resolve the apparent contradiction between the first statement and b).
Also, we’re still left with your leaky bucket explanation. See keiths’ description.
gpuccio responds to Zachriel:
gpuccio,
Don’t let your emotions get in the way of a learning opportunity. My bucket analogy highlights a serious flaw in your dFSCI argument:
In case it’s not already obvious, here’s the problem:
a. You want to use the fact that something is in the bucket (i.e. has dFSCI) as an indicator that it is designed (that is, not the result of ‘necessity mechanisms’).
b. Before you put it in the bucket, you have to rule out known ‘necessity mechanisms’ as the cause.
c. To rule out known ‘necessity mechanisms’, you can’t look to see if the object is in the bucket, because you haven’t decided whether to put it there yet.
d. Therefore, in order to decide whether to put it in the bucket, you have to use some criterion other than whether it’s already in the bucket. Obvious, right?
e. But if you’re using some other criterion, then it’s the other criterion that is doing all the work. You only put something in the bucket after the other criterion is met.
f. So the fact that something is in the bucket (has dFSCI) is just a restatement of what we already knew by other means. The label of dFSCI adds nothing, so we might as well ignore it.
Mung
Is “pretty improbable” a technical term in ID then? Consider yourself lumped in with Joe.
Gpuccio,
Joe and Mung say it’s designed.
Gpuccio says it’s not.
Joe,
No need to do all that, just say it’s “pretty improbable” and leave it at that.
But that’s trivially true of any piece of data on the internet. If I take a picture of a rock pile then you will claim that it shows design because “pictures require agency involvement”.
So ID has it easy. When asked “Is X designed” you can say “The fact that you are asking me that means that agency involvement was present and that’s all I have to do”.
So your claims that ID is like forensic detective work or archaeology don’t add up. Neither of those activities stop when “agency involvement” is detected.
If there was really a “science of ID/design detection” you’d all come up with the same answer for my, frankly trivial, exercise.
Joe,
Testing is a large part of science. I’ve tested you. And the results are, well, as expected.
I’m not trying to “refute ID”. That can’t be done. There is nothing to refute.
What I’m trying to do is show how the grand claims of “design detection” I quoted from the UD “What is ID” section are just lies.
Mung,
What if I told you that the letters represented wind directions (N,S,E,W) and had become transposed with the letters used in the document?
Now simply recording the way the wind was blowing at 1 second intervals means that the way the wind was blowing was designed. You just said so yourself.
I realise that you recognise the absurdity of your position (you are a poe), but others take the same position in all seriousness, and I thank you for saying what they are too afraid to say.
It also means that you can look at any segment of DNA and say “yep, pretty improbable, designed”.
So any two documents that are the same length where the content uses the same characters are designed? Regardless of the actual content? Or how many other potential “documents” are out there?
I hope you realise how foolish this is making you look, especially as the 3 ID supporters that have braved my trivial challenge can’t actually agree on any aspect of the challenge.
Gpuccio,
I thought you did not want to play any more? Now you are calling me a liar for reporting what you are all saying?
Whatever.
I never asked for a determination of design/not design. Here is what I asked originally:
http://theskepticalzone.com/wp/?p=1352&cpage=2#comment-16703 If you would like to revise your answer in light of that please do so.
Which was not what I asked for. You answered the question you thought was asked. I made it clear that the container of the data is not relevant, but nonetheless you make it relevant.
Mung and Joe have done so, on the basis that the string(s) are “pretty improbable” they have concluded design. You have concluded the opposite. Therefore how am I a liar?
I am doing so with my little game.
Then you win! It’s simple! The fact remains that some of you are concluding design because “things don’t get printed on paper on their own” and I’m reporting on that and you don’t like it.
If “design detection” really existed you’d all come to the same conclusion quite quickly about my two documents.
Yet you cannot even agree on the question that’s being asked despite it being very plain.
Yet the fact remains that Joe and Mung say design and you do not. That you each considered different questions is not really my problem; I only asked one: which of the two documents is more interesting, and can they be categorised differently on the basis of their contents (and not the paper they are printed on!)?
So call me a liar if it makes you feel better, but it does not change the fact that of the 3 of you who have answered, I've had two different answers (designed/not designed).
Gpuccio,
Ah, I see what you are getting at. I say that nobody can do what UD says ID can do (which, despite being the self-proclaimed reason for ID's existence, it seems nobody can actually do!) and you say I am a liar because people at UD have attempted my challenge.
You have misunderstood me. What I’m saying is that *I know the answer* to my little challenge, and so far nobody has used ID to solve it. Nor even come close.
So when I say that nobody can do it, I mean that nobody has yet done it correctly. Yes, attempts have been made, but yours was the only serious one, and nonetheless you failed; that's what I'm getting at. So when I say that nobody can do it, it being the stated reason for ID's existence, that remains true. You've not done it, Joe's not done it, and neither has Mung.
And you’ve all come up with different answers, that much is true….
Okay. So we’re working with a trichotomy. It’s really just another restatement of the Explanatory Filter.
The specific problem is that evolution has both random and deterministic aspects. Gpuccio will argue that evolution alternates the two mechanisms, therefore is excluded. That argument doesn’t work, though, because the test for “highly functional information” only precludes completely random sequences, not incremental increases in functional complexity.
The problem is that the origin of “high functional information” is the very thing being contested. Large amounts say nothing about its origin.
Among other problems, the length of a gene sequence says absolutely nothing about how many steps removed it is from a non-functional precursor. And nothing at all about its history.
Gpuccio,
I know what I am, but what are you?
I never said you did. I said that you've inferred design for *all strings printed on paper*, exactly as you said yourself. For the particular strings in question (rather than their container) you have not inferred design, as I have already mentioned.
Great! So that’s essentially a “pass” really. Which is fine, you can’t be wrong with a pass as you point out.
So all proteins start out as not-designed until you find their function and then they become designed? Got it.
Congratulations, you did not fail! You did not succeed either, so perhaps next time.
Well, that depends. So far "ID Theory" has told me that strings printed on paper are designed, which I never disputed and specifically mentioned as irrelevant from the start. Furthermore, Joe and Mung are saying design and you are not. So when "ID theory" makes up its mind, feel free to let me know. In the meantime you may continue to call me a liar, whatever makes you feel better.
Yeah, ID ain’t ruling out anything except that where you find manufactured paper you’ll find a paper mill.
Mung,
The only evidence I’ve seen so far of your understanding of “ID Theory” when presented with a puzzle that should be trivial for “ID Theory” to solve is:
So frankly, your opinion of what I do and do not understand with regard to “ID Theory” is irrelevant until and unless you can prove that you can actually do something with “ID Theory” that does not revolve around your misunderstandings of evolution.
Gpuccio,
And then presumably you’ll explain how the Intelligent Designer achieved the same?
Let me save you the trouble, Joe already told me!
They were designed that way!!!!
What gpuccio fails to address is the rather basic question of how the Designer knows the properties of yet to be created molecules.
KF punts this question by asserting that the Designer must have capabilities beyond Venter’s.
Gpuccio asserts the Designer must be non-material.
Can anyone say ad-hoc?
Exactly how useless is an invented, imaginary sky-fairy that has whatever attributes, capabilities and motives are needed to explain the gaps that present themselves today, and that will acquire whatever new attributes are needed when those gaps are closed or new ones discovered?
One cannot argue against imagination. As Critical Rationalist points out, science advances by imagining explanations.
The difference between science and fantasy is that science limits its imagination to testable propositions. This is why, even in hard sciences like physics, conjectures that have no testable entailments are considered to be puffery. Sometimes interesting, but not science.
The problem with ID is not that it is proven wrong, but that it doesn’t lead to useful research. Consider Douglas Axe. How useful is it to assert that we don’t know the detailed history of proteins? Or that the specific history, if known, would appear improbable. Like the list of winners of lotto.
How probable is the specific ancestry of any human being? It would seem that anyone familiar with mathematics would know that the probability of something that has already happened is one.
What physical law is violated by the string of improbabilities that led to your ancestors meeting? Or the specific lotto winners? Retrospective astonishment is not good mathematics and not science.
It also seems to me Gpuccio is another of the “video evidence or it did not happen” crowd. Whatever evidence you might present is never good enough.
Great strides have been made recently in understanding the origin of protein domains yet Gpuccio knew yesterday, knows today and will know tomorrow the explanation already. Before any research was done at all, he knew the answer. Regardless of how much research will be done, he knows the answer.
http://www.els.net/WileyCDA/ElsArticle/refId-a0020202.html But blah blah blah eh Gpuccio? You want this
A step by step video essentially. For stuff that happened in the deep deep past. And without that you’ll simply dismiss every other bit of evidence that is produced for a natural origin for whatever spurious reason you think of at the time.
The only saving grace is that, before too much longer, the computing-power limitations that currently make some of these problems seem intractable will be lifted somewhat. So perhaps you'll get your start-to-end video recording then, but even then you'll just say "but it's a simulation, it proves nothing".
So my arguments with you Gpuccio do not have the intent of getting you to change your mind, you did not make it up on the basis of evidence so evidence won’t be able to change it.
I just want to illustrate the stark gap between claim and reality in the ID community.
Gpuccio would be a hoot on a jury.
Asked to provide the best explanation for events, he would have to say, in the absence of videotape, that the best explanation would be intervention by non-material entities.
And of course, videotape is just a simulation and could be faked.
GP will protest that humans are intelligent agents and potential causes of crimes. That’s an empirical fact.
I would point out that evolution is also an intelligent agent capable of creating new function.
What you lack in biology and in jury trials is the detailed, step by step history. You have to infer the details and come to the best explanation.
I would also point out the utter, complete lack of any entity capable of designing biological molecules, other than evolution.
When you are on a jury, you are generally bound, in your theory-making, to the question of whether a specific person was the agent. You won't get far with imaginary, invisible, immaterial agents.
In fairness, I should note that the circularity problem did not originate with gpuccio. Gpuccio’s dFSCI is just a modified version of Dembski’s CSI, which has been plagued by circularity since its inception. Unfortunately, gpuccio failed to notice and correct the problem he inherited from Dembski.
Here’s the circularity in Dembski’s argument:
1. To safely conclude that an object is designed, we need to establish that it could not have been produced by unintelligent natural causes.
2. We can decide whether an object could have been produced by unintelligent causes by determining whether it has CSI (that is, a numerical value of specified complexity (SC) that exceeds a certain threshold).
3. To determine whether something has CSI, we use a multiplicative formula for SC that includes the factor P(T|H), which represents the probability of producing the object in question via “Darwinian and other material mechanisms.”
4. We compute that probability, plug it into the formula, and then take the negative log base 2 of the entire product to get an answer in “bits of SC”. The smaller P(T|H) is, the higher the SC value.
5. If the SC value exceeds the threshold, we conclude that unintelligent processes could not have produced the object. We deem it to have CSI and we conclude that it was designed.
6. To summarize: to establish that something has CSI, we need to show that it could not have been produced by unguided evolution or any other unintelligent process. Once we know that it has CSI, we conclude that it is designed — that is, that it could not have been produced by unguided evolution or any other unintelligent process.
7. In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.
Though the details are slightly different, the same circularity undermines gpuccio’s dFSCI argument.
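For concreteness, the arithmetic in step 4 is nothing more than a base-2 logarithm. A minimal sketch, using made-up probability values purely for illustration (no actual P(T|H) for a biological object is being computed here):

```python
import math

def sc_bits(p_t_given_h: float) -> float:
    """Specified complexity in bits: -log2 of P(T|H).
    The smaller the probability, the higher the bit count."""
    return -math.log2(p_t_given_h)

# Illustrative probabilities only:
fair_coin = sc_bits(0.5)       # a single coin flip: 1 bit
at_bound = sc_bits(2.0**-500)  # a probability of 2^-500: 500 bits
```

The formula itself is uncontroversial; the dispute above is entirely about how P(T|H) gets estimated in the first place.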
I didn’t follow that. What your list doesn’t say (as far as I can tell) is that the calculations themselves rest on foregone conclusions. Given sufficient knowledge of conditions, it’s in principle possible to determine that something was unlikely. Given the near-infinity of interdependent variables inherent in reality, it’s pretty safe to say that nearly everything that happens is vanishingly unlikely. We don’t need even much of a sample of these variables to do the calculation closely enough to establish this.
And this in turn means that one simply cannot induce “design” from looking at an object or event. One must identify and operationally define the design mechanism, and then WATCH it happen. We are wading through a sea of CSI every which way all day long. This is what I’ve called the “every bridge hand is a miracle” fallacy. Clearly, all bridge hands are chock full of CSI – they’re complex, they’re fully specified, they are all vanishingly improbable.
To be fair, gpuccio doesn’t conclude it’s beyond RMNS unless there’s really lots of CSI.
Flint,
They’re complex and improbable, but not “fully specified” in the way Dembski and other IDers intend. For a bridge hand to be specified, there has to be some independent reason that it is special to the “semiotic agents” involved, apart from the mere fact that it happened to be dealt to you.
For example, if I predict ahead of time that I will receive a specific bridge hand, and then I receive exactly the cards I predicted, then that bridge hand is clearly specified, even if it is a thoroughly average hand by normal bridge standards. You would rightly suspect that the dealer and I are in cahoots, that something fishy is going on, or maybe even (if you had ruled out the more mundane possibilities) that I was prescient. You wouldn’t think it had happened by chance, particularly if I was able to repeat the feat.
However, if I received the same improbable hand without specifying it in advance, it would be a thoroughly unremarkable event, and no one would take notice.
IDers fall prey to many fallacies, but the “every bridge hand is a miracle” fallacy is not one of them. At least, not one that Dembski and gpuccio fall prey to.
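Whichever side of the specification question one takes, the improbability arithmetic for bridge hands is easy to check: every specific 13-card deal is one of about 635 billion equally likely hands, which works out to roughly 39 bits, far short of any 500-bit threshold. A quick sketch:

```python
from math import comb, log2

# Number of distinct 13-card bridge hands from a 52-card deck.
hands = comb(52, 13)             # 635,013,559,600
p_specific_hand = 1 / hands      # probability of any one particular hand

# "Surprise" of a specific hand, in bits.
hand_bits = log2(hands)          # ≈ 39.2 bits
```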
I think that one computes (in Dembski’s argument) bits of SI, not bits of SC. SI is a concept originated by Leslie Orgel, the C part comes in as an all-or-none assessment that there are at least 500 bits of SI. If it is present, you say there is CSI.
That value was chosen to be one that could not show up even once in the whole history of the Universe by pure random happenstance. (Personally, I am willing to acknowledge the meaningfulness of SI as a concept in simple genetic algorithm models, and the reasonableness of saying that a value of SI high enough to constitute SC is implausible as having originated by pure mutation, in the absence of natural selection. Don’t everybody boo at once.)
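The relationship between the 500-bit threshold and Dembski's universal probability bound is simple to verify: 500 bits corresponds to a probability of 2^-500, which is on the order of 10^-150. A one-liner sketch:

```python
from math import log10

# 500 bits of SI corresponds to a probability of 2^-500.
p = 2.0 ** -500                    # ≈ 3.05e-151
decimal_orders = 500 * log10(2)    # ≈ 150.5, i.e. roughly 10^-150
```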
Where keiths is asserting circularity is where natural selection is ruled out as a source of the SI. Dembski did it differently. He had his Law of Conservation of Complex Specified Information (LCCSI). That was supposed to show that there could be no combination of deterministic and stochastic processes that could generate SC. It has been disproven on two different grounds, by Jeffrey Shallit and Wesley Elsberry, and by me.
If gpuccio and others who use SI and SC do not rely on Dembski’s LCCSI theorem, they then need to have some other way of ruling out that natural selection made the SI high enough to be SC. That is where gpuccio invokes the ruling-out of deterministic natural causes, and where there seems to be circularity as he does so.
(As an aside, yes, Dembski also had a step where deterministic natural causes were ruled out, but he seemed to only invoke that to get rid of rather simple and trivial natural forces. The heavy lifting in arguing that NS could not be responsible for the SI was done by the LCCSI.)