VJ has written, by his standards, a short post distinguishing order from complexity (a mere 1400 words). To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well-known problems with this – one of which being that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can be generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.
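One crude way to make the distinction concrete is to use an off-the-shelf compressor as a stand-in for Kolmogorov complexity (a loose proxy only, since true Kolmogorov complexity is uncomputable); a minimal Python sketch, with toy strings:

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length of the zlib-compressed string -- a crude stand-in for
    Kolmogorov complexity, which is uncomputable in general."""
    return len(zlib.compress(s.encode()))

random.seed(0)
ordered = "ab" * 500                                          # one simple generating rule
jumbled = "".join(random.choice("ab") for _ in range(1000))   # no short rule to be found

print(compressed_size(ordered))  # small: highly compressible, i.e. "ordered"
print(compressed_size(jumbled))  # much larger: barely compressible, i.e. "complex"
```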
At the heart of this is the problem that Dembski has written a paper that most IDers would find unacceptable if they took the trouble to understand it. But they don’t quite have the courage to say they disagree. That is why this comment from Eric Anderson made me chuckle:
Second, why is it so hard for some people to get it through their heads that the issue is not “order”? Is this really hard to understand, or just that so many people haven’t been properly educated about the issues?
I wonder which one it is in William Dembski’s case?
Appendix
VJ has written a 4000-word appendix to this post. I haven’t had time to read it all, but it appears that he accepts that some of his original OP was wrong. It is rare for people to admit they are wrong in heated Internet debates, so all credit to him. In particular, he appears to accept that there is considerable confusion about order, complexity, and specification within the ID community (why else the need to propose his own definitions?).
What would be really nice would be a similar admission from Eric Anderson that the ID community needs to sort out its own definitions before complaining that others are incapable of understanding them.
It seems to me that ideas such as ‘order’ and ‘complexity’ – and ‘function’ and ‘purpose’ for that matter – don’t have precise definitions. They’re fine for casual conversation, but when one is discussing difficult topics this imprecision can cause confusion and erroneous ideas… and IDers use this confusion to their advantage – it helps get ID’s foot in the door.
Dembski’s use of Kolmogorov uncomplexity is strange. It implies that we are to regard a life form as uncomplex, and therefore having specified complexity [?], if it is easy to describe – even though that life form may then be extremely unfit.
Thus a hummingbird, on that view, has nowhere near as much specification as a perfect steel sphere. Yet the hummingbird can do all sorts of amazing things, including reproduce, which the steel sphere never will.
It would make more sense to define specification on a scale of fitness, rather than of algorithmic (non-)complexity.
Yes, by Dembski’s definition a Chladni pattern would be both specified and complex. However, it would not have CSI because it is highly probable given a relevant chance (i.e. non-design) hypothesis.
In other words it doesn’t have CSI because we know how it was produced. Although presumably an intelligent agent stretched the membrane/built the guitar back.
Eric Anderson really needs to read Dembski properly before accusing other people of not understanding the concept.
It has to be both easy to describe, and one of a small subset of patterns that are as easy or easier to describe, out of a much larger set of patterns made from the same elements (or drawn from the same distribution of elements).
Yet again, Kairosfocus tries to claim that “Chi” is calculable, linking to one of his own worked examples in which he fails to compute P(T|H) for anything other than the null of “blind chance”, citing Durston et al.’s “Fits” calculations, which assume random draw.
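For the record, here is a minimal sketch of the kind of calculation Durston et al. describe (the toy alignment and the function name are mine); note how the uniform-draw null is baked in as log2(20) bits per site:

```python
import math
from collections import Counter

def fits(aligned_seqs):
    """Durston-style functional-bits estimate for an alignment of
    functional protein sequences. The null hypothesis is hard-wired:
    each site is compared against a UNIFORM draw over the 20 amino
    acids (log2(20) bits) -- i.e. "blind chance", not any realistic
    evolutionary hypothesis H."""
    n_sites = len(aligned_seqs[0])
    total = 0.0
    for i in range(n_sites):
        column = [s[i] for s in aligned_seqs]
        counts = Counter(column)
        h_site = -sum((c / len(column)) * math.log2(c / len(column))
                      for c in counts.values())
        total += math.log2(20) - h_site   # null entropy minus observed entropy
    return total

print(fits(["MKV", "MKL", "MRV"]))  # toy alignment, not real data
```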
He still hasn’t seen the eleP(T|H)ant in the room. Despite repeated correction. 🙂
KF – please note!
V J Torley seems to be forgetting that fractal patterns are non-repeating, even though they can be simply described.
So even if the requirement that a pattern be non-repeating as well as easy-to-describe was to be added to the definition of a Specification, it still wouldn’t eliminate false positives.
Unless, of course, the IDers learn to compute P(T|H) properly, in which case all false positives vanish!
KF makes two unwarranted assumptions. First is the isolated island assumption. That’s just an assertion that is largely contradicted by actual experiment.
But the second is just dumb. However great the distance between islands, it is less than the distance from random to the current configuration.
Even if he wants to assert the distance between islands is great, he needs to use that distance in his calculations and not the length of the genome.
But that would require acknowledging that many mutations are neutral or nearly neutral, and islands are not isolated.
https://www.google.com/search?q=youtube+chladni+patterns&oq=youtube+Chladni&aqs=chrome.2.57j0l3.6087j0&sourceid=chrome&ie=UTF-8
The concept of order is well defined in mathematics. A set S is ordered if there is a binary relation < satisfying the following postulates.
1. For every a, b, and c in S,
a < b and b < c implies a < c,
i.e. < is transitive.
2. For every a and b in S exactly one of the following relations holds:
a < b, a = b, b < a,
i.e., < satisfies the trichotomy law.
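For a finite set, these two postulates can be checked mechanically; a minimal Python sketch (the function name is mine):

```python
from itertools import product

def is_strict_total_order(S, less):
    """Check the two postulates on a finite set S with relation `less`:
    transitivity, and exactly one of a<b, a=b, b<a for every pair."""
    transitive = all(less(a, c)
                     for a, b, c in product(S, repeat=3)
                     if less(a, b) and less(b, c))
    trichotomous = all(sum([less(a, b), a == b, less(b, a)]) == 1
                       for a, b in product(S, repeat=2))
    return transitive and trichotomous

print(is_strict_total_order(range(5), lambda a, b: a < b))  # True
```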
There have been a number of techniques for specifying disorder, one of which is the number of pairwise exchanges (transpositions) required to bring the elements of such a set back into the original, specified order. That original order can be the kind we find in, say, the integers, or it can be conventional, such as the order we assign to the days of the week, the months, or the letters of the alphabet.
In crystalline arrangements, deviations from order are often referred to as “dislocations.” One way to specify disorder in such a system is to count the minimum number of transpositions required to “repair” all unit cells in the structure back to their original configuration.
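A minimal sketch of that disorder measure, assuming the “original, specified order” is 0, 1, 2, … and using the standard result that the minimum number of transpositions equals the number of elements minus the number of cycles in the permutation:

```python
def min_transpositions(perm):
    """Minimum number of transpositions (pairwise swaps) needed to
    restore perm to sorted order: n minus the number of cycles in
    the permutation's cycle decomposition."""
    n = len(perm)
    seen = [False] * n
    cycles = 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return n - cycles

print(min_transpositions([0, 1, 2, 3]))  # 0: already in order
print(min_transpositions([1, 0, 3, 2]))  # 2: two swapped pairs
print(min_transpositions([3, 0, 1, 2]))  # 3: one 4-cycle
```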
In the case of atoms and molecules, this gets a bit more difficult when there are various allotropes of the element, because one has to decide which allotrope is at the lowest energy state.
In the case of more complex molecules – such as the molecules of life – one has to deal with hierarchies of order. There is the primary order, or structure, which is a specified sequence of atoms or molecules making up a structure.
Then there is secondary order or structure in which the primary structure takes on an additional shape, such as a spiral or pleated structure, due to internal stresses in the primary structure and its interactions within itself and with its environment.
Tertiary order or structure starts when the secondary structure begins to fold back on itself forming other shapes.
Quaternary order or structure occurs when two or more tertiary structures bond and form higher level structures.
One could specify order by the number of specified operations required to undo each of these structures and take them back to their simplest state. This is where the word “complex” starts getting conflated with order/disorder. But in this case, it doesn’t refer to order/disorder as much as it does to the fact that the resulting structures require more operations to undo or more words to describe how they are assembled.
The word “complex” is often associated with low probability. It is related to the fact that a specified event out of a large number of events occurs with low probability assuming a flat probability distribution for that set of events. This meaning of “complex” is associated with “mixedupness.” This use of “complex” seems to be the more common use among ID/creationists and their beliefs that the molecules of life emerge out of ideal gases of inert objects.
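A quick worked example of that flat-distribution sense of “complex” (toy numbers):

```python
import math

# Probability of one specified 100-residue sequence under a flat (uniform)
# distribution over 20 amino acids -- the "ideal gas" picture described above:
p = (1 / 20) ** 100
print(p)              # ~7.9e-131: "complex" in the low-probability sense
print(-math.log2(p))  # ~432 bits
```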
That usage by ID/creationists also shows up in their misconceptions about entropy and the second law of thermodynamics.
VJ Torley comments:
Well, Dembski defines complexity as high Shannon Information. But if complexity is defined simply as “low probability” then sure, if we know a pattern is highly probable, then it isn’t complex. But why not just say “highly probable”?
A Chladni pattern has as much Shannon Information as the filings/flour/sugar before they’ve been jiggled. But I agree that the pattern has more order.
VJ Torley also responds that he begs to differ re fractals being non-repeating, and illustrates his answer with the Mandelbrot set. But the interesting thing about the Mandelbrot set is that it doesn’t exactly repeat – there’s self-similarity but not self-identicality. Very simple algorithms can give rise to patterns that are repetitive, random, and strange. All can be complex in Shannon terms. Some you might call “ordered”. All can be produced with probability = 1.
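The point is easy to make concrete: a membership test for the Mandelbrot set fits in a few lines, yet the boundary it traces is endlessly self-similar without ever exactly repeating. A minimal sketch (resolution and iteration limit arbitrary):

```python
def in_mandelbrot(c: complex, max_iter: int = 50) -> bool:
    """Iterate z -> z*z + c; c is (apparently) in the set if the
    orbit stays bounded for max_iter steps."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# A few lines, fully deterministic (probability 1 given the rule), yet the
# boundary is endlessly self-similar without ever being self-identical.
for y in range(20):
    print("".join("#" if in_mandelbrot(complex(-2 + 3 * x / 59, -1 + 2 * y / 19))
                  else " " for x in range(60)))
```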
So I don’t really think that VJ Torley has introduced anything very illuminating with his concept of “order”.
Apparently Torley would agree that Fullerenes are also designed.
Yet they are found in Nature.
The issue of “functionality” comes up again over in Torley’s comments.
“Functionality” is temperature dependent. If we take a system below a certain temperature, most, if not all, of its dynamic behavior ceases. Raise the temperature too far, and the system comes apart.
You can’t salvage this by asserting that such a system would have function if it were at the proper temperature.
There are at least two reasons for this: (1) it admits that function is driven by the energy bath in which the system is immersed, and (2) superconductivity would have to count as “functionality”, because a very delicate dance of organization and behavior takes place once the temperature drops below a certain level.
In fact, superconductivity has many features that are analogous to the behaviors of complex molecules in a heat bath. There are lots of activities going on; and a suitable “sensory system” embedded within them would show this activity.
How does a superfluid “know” how to crawl out of its containing vessel? How do the electrons and atoms in these systems “know” how to coordinate their movements so precisely?
This gets us back to the question that nobody in the ID/creationist community has grasped, let alone explained; namely, where in the chain of complexity of condensed matter do the laws of physics cease to be enough and intelligence has to step in to get the job done?
I’m still waiting to see if Torley, or any of those regulars over at UD can scale up the charge-to-mass ratios of protons and electrons to kilogram-sized masses separated by distances on the order of meters and then calculate the energies of interaction in units of megatons of TNT. I am still waiting for them to fold in the rules of quantum mechanics and then justify why ID/creationists use tornados in junkyards or battleship parts or letters as stand-ins for the behaviors of atoms and molecules.
It’s a high school level calculation; but they can’t seem to do it or see its implications.
As usual, Torley is quite confused (although at least a little less verbosely than usual).
In the Kolmogorov setting, “concisely described” and “concisely generated” are synonymous. That is because a “description” in the Kolmogorov sense is the same thing as a “generation”; descriptions of an object x in Kolmogorov are Turing machines T together with inputs i such that T on input i produces x. The size of the particular description is the size of T plus the size of i, and the Kolmogorov complexity is the minimum over all such descriptions.
Pulling a Dembski and resorting to a description that doesn’t uniquely identify the object x, but rather identifies a set S of objects of which x is a member, doesn’t help him here, because x can always be uniquely described by producing the test for membership in S, together with (for example) the index j in lexicographical order that specifies x. Then a program to generate x just tests each string in turn, checking if it is in S, and counting until the j’th one is reached. The extra cost is at most O(1) + the log of the size of S.
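A minimal sketch of that construction (names hypothetical): given only a membership test for S and x’s index j in shortest-first lexicographic order, a short program recovers x:

```python
from itertools import count, product

def generate_from_set(in_S, j, alphabet="01"):
    """Reconstruct x from (a) a membership test for the set S and
    (b) x's index j within S in shortest-first lexicographic order:
    enumerate all strings, keep those in S, return the j-th (0-based).
    Description length: |membership test| + log|S| + O(1), as above."""
    seen = 0
    for length in count():
        for chars in product(alphabet, repeat=length):
            s = "".join(chars)
            if in_S(s):
                if seen == j:
                    return s
                seen += 1

# Toy S: binary strings of even length; the string at index 5 is "0000".
print(generate_from_set(lambda s: len(s) % 2 == 0, 5))
```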
Torley also seems to confuse the computational complexity of generating x with the Kolmogorov complexity. These two concepts are not related at all: a Kolmogorov description could run for a really long time (running time does not figure at all in the vanilla theory), and an efficient construction of x could be done by a very, very long program.
Torley is also quite confused about periodicity and “repetition”, which has a really precise mathematical sense. Fractals are not periodic.
All this is the typical behavior of ID advocates who are blowhards on subjects they have little or no training in, and want to pretend to be experts by reading popular sources such as Wikipedia articles.
Absolutely – and Dembski makes this very point in the same paper!
VJ finishes:
VJ, may I suggest that if you are sincere about stimulating discussion you make further comments on TSZ. As you know, most of us are not permitted to comment on UD, and this parallel-lines discussion is not conducive to clear debate.
I wholeheartedly endorse the invitation to Dr Torley to post here.
Thirded.
gpuccio also hasn’t seen the eleP(T|H)ant yet.
I’ve bolded the eleP(T|H)ant, gpuccio 🙂
How do you compute that search space?
I see gpuccio has also commented on UD (this business of parallel commenting is so frustrating). That brings together the two most (perhaps only?) worthwhile contributors to the IDist argument.
GP I am going to address this to you in the hope that you are reading. You write (talking about VJ’s post):
I am glad that you accept there are problems with Dembski’s definition of specification. I wonder if VJ agrees?
However, I cannot see how you can agree with VJ’s comment that there is a difference in Dembski’s work between concisely described and concisely generated. Section 4 of the paper (pp. 9–12) explains that they are the same thing, with copious examples (I quoted briefly from this above). Or perhaps you are just saying there is a difference but that it doesn’t apply to Dembski’s definition of specified?
The reason I pursue this is that there are frequent comments from the ID side to the effect that the definitions of “specified” and “complex” are obvious, and why can’t your opponents see this (e.g. Eric Anderson’s comment in my OP). But here is a confusing dispute over the meaning of the terms within the ID community itself.
PS I think you are oversimplifying our little argument about functional specification, but I certainly don’t want to start it again.
Yes, I do wish VJTorley and gpuccio would pay us a visit, and even stay around! It seems so odd to insist on posting in a forum where many of the contributors to a discussion can’t post.
I know that discussions can become, let’s say “robust” here, but no more so than at UD, I’d say. And the more ID proponents who post here, the less they will be in a minority 🙂
So a pattern has CSI if it:
If it has A and B but not C, it is “ordered”, but not necessarily designed.
If it has A and C but not B, it is unordered, and not necessarily designed.
If it doesn’t have A it doesn’t have C, whether or not it has B.
If it has all three, it is Designed.
So: given a pattern with A and B, how do we compute C?
crossposting my reply (currently in moderation) to gpuccio at UD:
Still no sign of my post. I’m not sure how long it normally takes the system (Barry?) to approve new users, so I’ll give it another day.
I was trying to find the post in which Barry had said he wouldn’t mind if I posted again. I tried to post at the time, but it seems that Barry doesn’t reinstate accounts, you have to make a new one (with a different email!) Fortunately, I have a middle name and an alternative email. So, we’ll see. I don’t intend to spend much time there, but it would be useful to be able to cross post when the OPs there specifically reference this site.
My account was magically reinstated. Same login, same password! Didn’t ask, didn’t email. A cynic might wonder whether they want soft, layman critics but not incisive and patient professionals!
Ah that’s interesting.
I just tried my old account, but no joy. I can log in, but nothing happens if I try to post.
I can’t see why I’d be more of a threat than you, but I probably posted more 🙂
I can log in but haven’t tried to post.
Lizzie,
Looks like you are authorised. I might try the same thing.
[cross posted in the UD thread]
Sal: That is only true in specialized cases and is not universally true. Mark Perakh and others tended to use this claim, but it is inaccurate.
Well, it’s a question of definition, rather than accuracy! Dembski, in his 2005 Specification paper defines specification as “easy description of pattern”, and, more formally, as algorithmic compressibility, citing Chaitin, Kolmogorov and Solomonoff.
So Perakh is being perfectly accurate if he is using Dembski’s definition.
But I would agree with you that it is not a good definition if you want to exclude patterns generated by mechanical processes. Presumably this is why Vincent has suggested a way of eliminating obviously compressible patterns that are obviously not designed, calling them “ordered”.
However, to be fair to Dembski, he deals with this by including in his formula for “chi” the parameter P(T|H). This means he can still use his Kolmogorov compressibility method, and highly “ordered” sequences (produced, for instance, by some kind of oscillator) will still produce a low chi value, because the probability P of the Target T given the “relevant chance hypothesis” H will be high, as that “relevant chance hypothesis” will include oscillatory hypotheses.
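For reference, the formula from the 2005 paper is:

```latex
\chi = -\log_2 \left[ 10^{120} \cdot \varphi_S(T) \cdot P(T \mid H) \right]
```

where φ_S(T) counts the patterns describable at least as simply as T, 10^120 is Dembski’s bound on the probabilistic resources of the universe, and design is inferred when χ > 1.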
But here Vincent and Dembski meet the same eleP(T|H)ant in the Room 🙂
P(T|H) is fine to compute if you have a clearly defined non-design hypothesis for which you can compute a probability distribution.
But nobody, to my knowledge, has yet suggested how you would compute it for a biological organism, or even for a protein.
[cross posted]
Joe: That’s wrong. The heritable variance has to be happenstance, ie unguided/ undirected.
Well, not according to Darwin. He didn’t know how variance was generated, and at one point favored Lamarck’s theory.
It’s not essential to his theory that the variance is directed, but nor is it essential that it is undirected. The variance certainly has to come from somewhere. And all evidence suggests that variants are drawn from a really very narrow distribution of fitnesses, with a peak close to that of the parent organism or sequence.
tbh, I think misunderstandings like this (for which “Darwinists” must take their share of blame) are at the bottom of a lot of the arguments over ID.
ID proponents (and a very few Strong Atheists) take the view that evolutionary theory essentially rules out Design. They may be correct that there is no Designer (I would be inclined to agree), but it is not correct to say that evolutionary theory rules it out. It merely does not require it.
For the Darwinian algorithm to result in an adapted population, all that is required is heritable variance in reproductive success. You can artificially introduce variance (by genetic engineering, for instance, or by pre-filtering your variants in a GA, I guess) but it isn’t necessary. “Happenstance” variation works fine. But the core of the theory isn’t that the variation is Happenstance, but that heritable variance in reproductive success will result in an increasing prevalence of the more successful variants – by definition. So much so that some people dismiss it as “tautological”. It isn’t – it’s just a near-syllogism (and only a “near” syllogism, because the adaptation isn’t absolutely inevitable, because of drift).
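A minimal simulation of that near-syllogism (all parameter values arbitrary): two heritable variants, one with a modest reproductive edge, and nothing else assumed about where the variance came from:

```python
import random

def selection_sim(p_start=0.1, advantage=1.05, pop_size=1000,
                  generations=200, seed=1):
    """Heritable variance in reproductive success, nothing more:
    offspring inherit the parental type exactly (heritability), and
    parents are drawn in proportion to fitness. The fitter variant
    'A' should rise toward fixation, though drift can delay or,
    rarely, prevent it."""
    rng = random.Random(seed)
    pop = ["A" if rng.random() < p_start else "B" for _ in range(pop_size)]
    fitness = {"A": advantage, "B": 1.0}
    for _ in range(generations):
        pop = rng.choices(pop, weights=[fitness[x] for x in pop], k=pop_size)
    return pop.count("A") / pop_size

print(selection_sim())  # close to 1.0: the more successful variant has spread
```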
So I think that “guided” variance is a perfectly decent hypothesis – if a mechanism by which that guidance could be effected were postulated (i.e. some force to move the nucleotides around)! And indeed, although I wouldn’t call it “guidance”, there’s no reason, under Darwin’s theory, to restrict the unit of selection to the organism – selection can happen at the level of the population, and thus variance-generation mechanisms likely to produce robustness to environmental change will tend to be selected at population level (or, rather, to keep Eric sweet: “populations with heritable variance in adaptive success will show an increased prevalence of variance-generating mechanisms that tend to promote rapid adaptation”, but that’s a bit of a mouthful!)
So even under standard Darwinian theory, you’d expect to see variance-generation mechanisms evolve in such a manner as to optimise adaptation.
How would you design a protein?
ooh pick me I know!!!!
I have added a short appendix to the OP above in response to VJ’s rather longer appendix to his.
Here is gpuccio making the fundamental misconception of ID/creationists.
Everything in ID/creationist land has a uniform probability distribution.
The second bolded assertion even says it twice in the same sentence. To paraphrase, “Everything has the same probability because everything has the same probability.”
“The curious thing is that Dembski defines specification in terms of a pattern that can be generated from a few simple principles.”
But that doesn’t mean the pattern can’t be algorithmically complex. If I said a “simple” principle is a Java virtual machine, the principle is succinct to state, but what it generates is certainly not a simple pattern.
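The same point can be made with an even smaller “principle”: Wolfram’s rule 30 cellular automaton is a one-line update rule, yet its output is famously irregular. A minimal sketch (width and depth arbitrary):

```python
def rule30(width=64, steps=30):
    """Wolfram's rule 30: new cell = left XOR (centre OR right).
    The rule fits in one line; the triangle it prints does not repeat."""
    row = [0] * width
    row[width // 2] = 1                      # single seed cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = [row[i - 1] ^ (row[i] | row[(i + 1) % width])
               for i in range(width)]

rule30()
```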
Dembski does not mean only algorithmically simple (Kolmogorov-simple) patterns. Specifications may or may not be Kolmogorov-complex. When he referred to a message that can be decoded via a Caesar cipher, there was no constraint on the message being algorithmically simple.
When he said the explanatory filter applies to copyright infringement cases, the intellectual property in question is not required to be algorithmically simple.
Critics are right to object to the usage of “complex” to describe CSI. A better rendering was suggested by Dembski himself: Specified Improbability. Unfortunately, I no longer have the reference for him saying that. I would prefer that he went with that – Specified Improbability…