Here is a pattern:
It’s a gray-scale image, so it is just one 2D matrix. Here is a text file containing the matrix:
I would like to know whether it has CSI or not. Here is Dembski’s paper, in which he gives the formula:
Specification: The Pattern That Signifies Intelligence.
There are 658 × 795 pixels in the image, i.e. 523,110 in total. Each one can take one of 256 values (0–255). Not all values are represented with equal probability, though: it’s a negatively skewed distribution, with higher values more prevalent than lower ones.
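For anyone who wants to poke at it, here is a minimal sketch for summarising the pixel-value distribution (the filename matrix.txt is my assumption, not necessarily what the file is called):

```python
# Sketch: empirical distribution of grey levels in the posted matrix.
# For a negatively skewed distribution, the mean falls below the median.
import numpy as np

def pixel_distribution(img):
    """Empirical probabilities of the 256 grey levels in a 2D matrix."""
    counts = np.bincount(img.ravel().astype(int), minlength=256)
    return counts / counts.sum()

# Usage (assumed filename):
#   img = np.loadtxt("matrix.txt")   # expected shape 658 x 795
#   p = pixel_distribution(img)
#   print(img.mean(), np.median(img))
```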
Tell me what it is first, so I’ll know if it has CSI. Offhand, it looks organic (like a closeup of a butterfly wing), so it is chock full of CSI.
Is it fair to ask whether randomizing a few values would destroy the functionality?
I want CSI, not FSC or any of the other alphabet-soup stuff. So a good start would be finding the simplest way of describing the pattern, then counting how many patterns drawn from the same distribution have as simple or simpler a description.
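One crude way to operationalise that is to use zlib’s compressed length as a stand-in for “simplicity of description” (my choice of proxy, not Dembski’s) and do a Monte Carlo estimate:

```python
# Rough Monte Carlo sketch: compressed length as a proxy for description
# length, then estimate what proportion of images drawn i.i.d. from the
# same pixel distribution compress at least as well as the original.
import zlib
import numpy as np

def description_length(matrix):
    """Compressed size in bytes -- a crude stand-in for description length."""
    return len(zlib.compress(matrix.astype(np.uint8).tobytes(), 9))

def proportion_as_simple(matrix, n_trials=100, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    values = matrix.ravel()
    target = description_length(matrix)
    hits = sum(
        description_length(rng.choice(values, size=matrix.shape)) <= target
        for _ in range(n_trials)
    )
    return hits / n_trials
```

For the real image you would want far more than 100 trials (and a better description language than zlib), but it gives a ballpark.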
I’ve tried running a loop where I draw randomly from the same distribution, but I haven’t seen anything other than grey goo yet.
Feel free to guess what it is. I shan’t say for a while 🙂
Am I confusing the map with the territory? Do we want to know how unlikely the image is in comparison to all possible grey scale images? Or is it that we have to apply some rule for deciding what possible clear (functional, I guess) images can be generated by the total number of possible matrix configurations? Presumably we could point a camera at all sorts of things and voilà we have a greyscale matrix that is functional. But how rare is it going to be? There could be unknown combinations of numbers that could be functional images? How would we know without displaying them as an image?
Or is it about the object in the image?
Ah! So functional images are rare in sequence space! Are you randomly generating a complete matrix from scratch, or starting with a viable image and introducing some limited variation, then choosing the most pleasing (or best by some other criterion) result and reiterating?
Well, if I’m understanding Dembski correctly, his claim is that we can look at any pattern, and if it is one of a small number of specified patterns out of a large total possible number of patterns with the same amount of Shannon Information, then if that proportion is smaller than the probability of getting it at least once in the history of the universe, then we can infer design.
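Concretely, if I’m reading the 2005 paper right, the measure is χ = −log2[10^120 · φ_S(T) · P(T|H)], with χ > 1 licensing the design inference. A log-space sketch (the example numbers are invented, purely for illustration):

```python
# Dembski's 2005 measure chi = -log2( 10^120 * phi_S(T) * P(T|H) ),
# computed in log space; chi > 1 is his threshold for inferring design.
import math

def chi(log2_phi, log2_p):
    """log2_phi: log2 of phi_S(T), the specificational resources;
       log2_p:   log2 of P(T|H), the chance of the target under H."""
    log2_universal = 120 * math.log2(10)   # the 10^120 probabilistic bound
    return -(log2_universal + log2_phi + log2_p)

# Made-up illustration: phi_S(T) = 2**20 and P(T|H) = 2**-500
# give chi of roughly 81 bits, i.e. "design" by this criterion.
```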
So to be generous, we can assume that non-design processes are processes that draw each pixel independently from the distribution of pixel values in the observed patterns.
Clearly it’s going to take a billion monkeys with pixel writers a heck of a long time before they come up with something as nice as my photo. But I’d like to compute just how long, to see if my pattern is designed 🙂
Is it something produced by water? Erosion or deposition of sediments?
No, I’m generating my pictures from scratch each time, by drawing each pixel independently from the distribution of values in the original.
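That null generator is just i.i.d. resampling of the original’s pixel values. A sketch (Python here, though any language works; the preserved marginal distribution is the point):

```python
# Null generator: each pixel drawn independently from the empirical
# distribution of values in the original image. The marginal is kept;
# all spatial structure is destroyed -- hence the grey goo.
import numpy as np

def null_image(original, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    return rng.choice(original.ravel(), size=original.shape)
```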
tbh, I think there are loads of ways of doing this, and some will give you a positive Design signal and some will not.
It all depends on p(T|H), which is the thing that nobody ever tells us how to calculate.
It would be interesting if someone at UD would have a go, though.
That’s what a lot of his writings sound like. But the 2005 “Specification …” paper does not say that the required probability is simply the probability that the pattern occurs among patterns chosen at random (or generated by mutation). We apparently must also take into account the probability that natural processes, including natural selection, would produce the pattern. If they could, then it does not have CSI.
Considered this way, that version of CSI is not an independently-assessable property. It does not answer the question of whether the pattern could arise by natural processes; instead we must determine that ourselves before we can determine whether CSI is present.
Yes indeed. It would be interesting if an ID proponent who is not yet convinced of the circularity of CSI would have a go, though, because either they will have to assume that p(T|H) is tiny, and get a positive (and I’m not telling anyone what this is yet!) or assume that it is large and get a negative.
But if they assume it is large, then they need to justify the tiny value they assume when calculating the various alphabet-soups they do.
Well, I would not call this version of CSI “circular”. We answer the basic question (whether the observed phenomena could arise by natural processes), then CSI is just another name for the conclusion we have already drawn. It does not assume itself, it just lazily lets us do all the hard work.
CSI itself is not circular, but the argument that X could not have evolved because X has CSI certainly is. And that’s exactly what ID proponents do.
Could we have an example of a sequence that could not have been generated by evolution?
The interpretation that many of us made of CSI was that it was an independent assessment of whether natural processes could have produced the adaptation. And that Dembski was claiming a conservation law to show that natural processes could not produce CSI.
Even most pro-ID commenters at UD interpreted Dembski’s CSI that way. They were always claiming that CSI was something that could be independently evaluated without yet knowing what processes produced the pattern.
But now Dembski has clarified that CSI is not (and maybe never was) something you could assess independently of knowing the processes that produced the pattern. Which makes it mostly an afterthought, and not of great interest.
I haven’t been following Dembski too closely. Do you know when and where he made that concession?
I always thought the key feature of CSI was his claim that “By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause.” Without that, CSI is a useless metric even if it could be calculated.
Perhaps a more practical challenge will interest the ID supporters? Can the CSI of the Yonaguni Monument be calculated, and if so, would it provide an answer to its origin?
Add the Voynich manuscript.
I’ve read that this has recently been attempted.
Well, he actually makes it in that manuscript, although not in so many words. He says that H “is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms”. The other hint of a concession is his indignation at Joe for not reading his later work, in which he implies that he no longer needs CSI. Although he still refers to it in that talk they’ve got up at UD.
I started to transcribe it, but lost the will to live before the end:
I want to talk to you today about Intelligent Design [unclear] the debate on Origins, and really describe the state of play – where are we in the culture on these things. If you listen to somebody like Kenneth Miller, for instance, an anti-Intelligent Design person, he will say that Intelligent Design “collapsed” (that’s his term) back in 2005, at the Kitzmiller v. Dover trial. Now I think that’s mainly a rhetorical flourish on his part; the research continues very well, I would say. In fact I would say we had the stronger argument, and this is across the board, both regarding the atheistic evolutionists, theistic evolutionists, and I’d say even with the Young Earth Creationists. Unfortunately I think there are two streams of Young Earth Creationism: there’s one that sees Intelligent Design as an ally, and tries to understand the nature of the Intelligence that’s there, and there’s another that sees it as a competitor, and I’ll get into that in a little bit. But I think we’ve got the better argument, and I’ll say something about why I think we do, but the challenge we face, it seems to me, is that even if you have the better argument you still have to sell the argument, you still have to get people to accept it, and there’s a lot that stands in the way of that, it seems to me, in our culture. There’s an old New Yorker cartoon that shows an attorney with a client across from him, and the attorney tells the client, “you’ve an excellent case, Mr Jones, now, how much justice can you afford?” And I think that’s really the challenge: we have the case, but getting the case out there, getting final acceptance, is going to be difficult.
So let me just back up a little bit, just to say a bit about Intelligent Design, what is it, what isn’t it, and why I think we are in a good place, intellectually, and scientifically, on this topic, and then some of the challenges we face, so, what is Intelligent Design? If I give you a one-sentence definition, it’s that things are so complicated that we have no idea how it might have been produced, therefore God did it. No that’s not the definition! [laughter] The definition in fact is this: Intelligent Design is the study of complicated things that give the appearance of design, sorry, I’m distracted! Intelligent Design is the study of patterns in nature that are best explained as the products of intelligence. Let me repeat that: Intelligent Design is the study of patterns in nature that are best explained as the products of intelligence.
Now the reason I gave you that funny definition at the start is this has been journalistic boilerplate for the last twenty years, so complicated, can’t understand it in scientific terms, therefore God did it. That’s what Intelligent Design is. That’s precisely what Intelligent Design is not. It’s not that this is an argument from ignorance, it’s precisely because what we know about biological systems, and know about the logic of working with the statistics and information involved in these systems that we have some positive knowledge about these systems being a product of design, so it’s not an argument from ignorance, that’s what my definition stresses: the study of patterns in nature that are best explained as the products of intelligence.
Now, when you look at that definition, actually there are many things that fall under it already, well-established special sciences: forensic science, photography, random number generation, archaeology – is that an arrow-head or a random chunk of rock? Is that a burial mound, or just a randomly formed mound? So these are all questions that we pose in many special areas – we in fact pose that even to keep science honest. So if you think for instance of data falsification in science – data falsification, plagiarism, all these things are actually big problems in science – there are tremendous incentives to get work published, and to get research grants, and one way to expedite that is by falsifying your results, making it seem like you are doing much better and more interesting research than you actually are, and you know, how do you keep people honest? Well, that’s by looking for certain patterns of cheating that can arise. In fact we see this all over the place, so credit cards, I mean how often do you get some text message that’s asking whether you made a certain purchase, because of all these pattern checkers that are looking for divergences from your usual buying behaviour. So these sorts of methods for design detection, this sort of task of sifting intelligence from natural causes, have been around for a long time; many well-developed areas of knowledge, science, are devoted to this. But things get controversial when you start applying this to the natural sciences. Why is that? Well the question is, who or what would the intelligence be in that case? It’s one thing to be looking for human intelligence, or even extra-terrestrial intelligence – the search for extra-terrestrial intelligence, that would be an example of an Intelligent Design research program – and yet, even there, if you’re a materialistic scientist, you’ll say that well, that alien intelligence, if it exists, evolved by some sort of naturalistic, presumably Darwinian, means.
But if, life itself, or the universe itself gives evidence – I stress that word evidence, because that’s what we are looking for, evidence of design, evidence of intelligence, then who or what could that intelligence be? And very quickly, we are pushed to the realm of theology, at that point, because such an intelligence could not be an evolved intelligence, and from a materialistic perspective, that’s what intelligence has to be, intelligence has to be an evolutionary afterthought, it has to be something that’s the product of natural forces, and history, given enough time, sifted through some sort of evolutionary mechanism, and out come beings like ourselves who can then discuss the relative merits of intelligent design and evolutionary theory. That’s what it is from the evolutionary vantage. From an Intelligent Design perspective, I would say, that conclusion is, it, it’s not, that is not the only conclusion, and there are different options, Intelligent Design says there are these patterns that could be pointing us to intelligence, only look at nature, maybe it gives no evidence of intelligence, but it might be that it does, I think this has been one of the fallacies of criticisms of Intelligent Design, that somehow Intelligent Design shoehorns our inductive and inferential processes into forcing there to be design in nature. It doesn’t. Basically it says that there are these patterns that reliably signal intelligence, and what happens if we should find them in biological systems, it may be that we don’t find them, but in fact it seems that we are. Now if we are, and if these are reliable patterns of intelligence, what conclusions can we draw, and as I said, it seems that this very quickly pushes us to theological conclusions, because who or what could that designer be. 
I don’t think we are going to get from this sort of a design argument, design inference, the infinite personal transcendental creator God of Christianity – logic won’t take you there – but it will take you to an intelligence of tremendous capacities, tremendous technological innovation, which is well beyond anything that human engineers have come up with or are likely to come up with. So that’s, so Intelligent Design is getting us some distance toward theology. It’s saying that the materialist program can’t work, that the idea that the material world is just closed to any sort of evidence of intelligence, that won’t work. It also isn’t going the route of the theistic evolutionists, which basically says that when we look at science, look at nature, any sort of design there is undetectable, scientifically. It’s a bit like, Ken Miller will say that design is scientifically undetectable (if it is, how does he know?), whereas we say that it is detectable. Well, what you detect then is the work of an intelligence in a finite, materially embodied thing, like ourselves, or the universe, or various structures within the universe, and when you look at that, these systems, they’re finite material objects, and how do you get to an infinite personal transcendent creator God from looking at the design of finite material objects? There’s really no, I would say, inferential process that takes you there, yet it takes you some distance, it takes you to an intelligence, a marvellous intelligence that’s responsible for these things. And that becomes a work … a theological work of integration, to say well how the creator God that we know from Christianity relates to this Intelligence. I stress that because I am talking at an ETS meeting, and I want to say that Intelligent Design is not just a re-christianated (?) form of natural theology.
Natural theology – if you think of William Paley’s Natural Theology, subtitled “evidences for the existence and attributes of the deity from the appearances of nature” – so he’s trying to argue for the existence of God, and the attributes of God, the … the goodness, the power of God, all these attributes, how does he get there? By looking at the appearances of nature. So he’s really trying to go the full route. Intelligent Design isn’t trying to do that. It’s trying to be, in the first instance, a scientific program that looks for evidence of intelligence, in the universe at large, especially in biological systems – that’s where most of the action seems to be, most of the controversy, evidence of intelligence in biological systems – and really leave it there, and allow theology to do its proper work. So, I think it really is calling for work of integration, rather than, if you will, a full concordism, or basically trying to hand off to science what basically has in the past been the work of theology.
So I think that, in a nutshell, is what Intelligent Design basically is. Now let me say a little bit about why I think we have the better argument, ok, why I think the program actually is thriving as an intellectual and scientific project. Let me just review some of my own work in this area. I came into this not because I wanted to make the world safe from Darwinism, or because I had these grand aspirations to be an apologist – in fact I’ve had an interest in apologetics ever since my conversion back in 1979, 1980 – but that wasn’t really what got me going in this field. It was really during my PhD work in mathematics at the University of Chicago that I got into this whole question of randomness: the nature of randomness, how do we know something’s random, what does it mean for something to be random. And I found that it does not really make sense, randomness doesn’t really make sense as a concept in and of itself. It only makes sense as a negation of non-randomness. We see patterns, and when we see patterns, we eliminate randomness. We can never be sure that something is random. I could show you, if I were to give one of my PowerPoint presentations, what looks like a random inkblot, and then if you see it in a certain way, you’ll see that it’s a cow looking at you, and all of us have this experience: we are looking at something and it could be some random assembly, and then you look at it the right way, and all of a sudden you realise that, oh, that’s a fish, that’s a 3D illusion, that’s a dinosaur running at you or something. And once you see it, you don’t unsee it again. And that’s the nature of randomness: there’s this sense of patternlessness, until we see the pattern. And once we see the salient pattern, then we don’t unsee it, then it becomes non-random. This has been the problem with random number generators, traditionally, where a generator seems to pass all these randomness tests, and then you come up with a new randomness test, and then it doesn’t pass it any more.
Years and years ago, there was something called a linear congruential generator, and when you form that generator, when you put the numbers that were being generated in triplets, and mapped them in 3D space, you would see planes. Well, if they were really random points in 3D space there should be no planes there, they should be all mixed up. And so suddenly you saw that it wasn’t random. And so that’s how I got into this question because I wrote a paper which I called “Randomness by Design” and the questions kept coming – well, what is the nature of these patterns that we use to eliminate chance, and not just to say it’s not chance, but to infer design. That led to my project of The Design Inference. Well, it turned out what was crucial for detecting design was this what I called Specified Complexity, that you have a pattern, where the pattern signifies an event of low probability, and yet the pattern itself is easily described, so it’s specified, but also low probability. And I ended up calling it Specified Complexity, I don’t want to get into the details because this can be several lectures in itself, but so there was this marker, this sign of intelligence, in terms of specified complexity, it was a well-defined statistical notion, but it turned out it was also connected with various concepts in information theory, and as I developed Specified Complexity, and I was asked back in 05 to say what is the state of play of Specified Complexity, and I found that when I tried to cash it out in Information Theoretic terms, it was actually a form of Shannon Information, I mean it had an extra twist in it, basically it had something called Kolmogorov complexity that had to be added to it. So my point is just that it seemed that one thing leads to the other. 
Now that idea of Specified Complexity has since transmogrified if you will into a whole field of evolutionary informatics where we look at targets, those are the items that have specified complexity, and search, and search for targets, in various search spaces. Well, it turns out that evolution can itself be conceptualised as a search. I mean if you go on google, you’ll find the term “evolutionary search” all over the place. “Search” doesn’t have to be put in purely teleological terms where there is an intelligence searching for things, although it can be that, it makes sense to talk about evolution in terms of search, and then you can start asking, well what’s the information that’s required for evolutionary searches to succeed in finding their targets.
This is well understood in the field called evolutionary computing, my main collaboration these days is with engineers at Baylor, I’m no longer on faculty there, but I get to work with people there, and so this has emerged into the field of evolutionary informatics, we can go online to http://www.evoinfo.org and you’ll see that we have now got probably about ten papers either accepted or under submission in top engineering journals, this is mainstream peer-reviewed press, where we’re looking at the obstacles that face various searches and the information required for searches to succeed, and can I just illustrate this for you in a simple way, because I’m probably talking to many theologians, and who don’t have a lot of familiarity with these technical aspects, but think of this, if you’ve got a huge, acres and acres, and you have hidden some easter eggs, let’s say that the easter eggs are well hid, and there are not many, and the area is huge, let’s say a hundred by a hundred mile area, ok? How are you going to find them? An exhaustive search isn’t going to work, you don’t have the time or resources to do that. Random search isn’t going to work, if you just kind of flip a coin to decide where to go and you can just hop around anywhere, I mean exhaustive search could work if you could go inch by inch over the whole property, but that’s the nature of these searches, we have limited resources with these type of needle in the haystack problems. So how are you going to find them? Well, one way, is for somebody who knows where the easter eggs are, to say: warmer, colder, colder, now warmer, warmer, hotter, hot you’re burning up! Now, if you do that, now what’s happening, how is it that you are finding that easter egg? Well, it’s because you’ve been given information, right? I mean, that’s what you’ve been given, through this warmer, colder, this is basically helping you with the search, to find the target.
Well, this is what I think has been one of the great fallacies about evolutionary thinking, that somehow Darwinian processes can get rid of the teleology in evolutionary search – they don’t. Richard Dawkins, for instance, has a very famous example, which has been recycled endlessly, and some of my critics have said, well why does he keep focussing on this example, because it’s been discredited – but it hasn’t been discredited, I think, certainly not within those circles, and top researchers, most recently Michael Yarus, who’s written a book on the origin of life, has recycled it, only the target phrase in his case is not “Methinks it is like a weasel”, which is what it was in Richard Dawkins’ case, but “Nothing in biology makes sense apart from evolution”, that’s his target phrase. But where I’m going with this is Dawkins gives a computer simulation in which he asks, how could we get a phrase like “methinks it is like a weasel” through some sort of evolutionary process. And basically what he does is he starts with a random string and then, as elements in this random string vary randomly but get closer to the target – closer in one sense, letter-by-letter match – then eventually, actually in very short order, evolve to this target string much faster than you could by pure random search. Now Dawkins will say, aha! See, evolutionary searches can get you to these targets much faster than just purely random search. But the question is, how did he get the information which said, this is closer to the target than some other string?
So that’s really, he’s slipped in, smuggled in, the information, into these evolutionary processes. In fact what I would argue, and what my colleagues and I have argued, is that evolution, insofar as it’s successful, in as it were navigating biological configuration space, that it introduces, it requires a lot of information. And so the question is, where did that information come from? So it really hasn’t answered the question, I mean, if you will, let’s say I came to you, and I said, look you’ve got this easter egg hunt, there’s no way you’re going to find them, I’ll just tell you warmer, colder, I’ll get you to the target, and now you’ve explained it without any need for intelligence. Now, wait a second, the information you’re giving me is something that you had to come up with as an intelligent agent. It’s not something that just arose through some sort of blind material process, it seems that’s exactly what we’re dealing with in evolution itself.
Bob Marks and I (Robert Marks is a professor of electrical and computer engineering at Baylor) have a paper, and a massive book that Bruce Gordon and I did, on the Nature of Nature, and so we’ve gotten our … called Life’s Conservation Law, and … why natural selection cannot create biological information.
Really the most interesting results connected with this work, our Conservation of Information result, really seems to be ground-breaking about the nature of information, because what it says is, that as we try to understand the information that allows searches to succeed, the information problem only gets more difficult, as it were back-tracks. I say this, ok, I give you this example, this will make it clearer even than the easter egg hunt. Imagine that you are looking for treasure on a big island.
if you’ve got a huge, acres and acres, and you have hidden some easter eggs, let’s say that the easter eggs are well hid, and there are not many, and the area is huge, let’s say a hundred by a hundred mile area, ok?
And this is where ID smuggles in the premise — that the field must contain only a few well hidden eggs. If that premise is false, however, then random searches are practical, preempting the need for exhaustive searches. ID proponents need to present a theory (with supporting evidence) that explains why their premise must be so.
…And they also need to show that there are no natural means to restrict the search field – or more precisely, why the natural restrictive forces we already know of should not be considered.
JoeG determines a value for the CSI challenge:
The last line is presumably the value.
No doubt it is also “chock full” and “replete” with FSCO/I as well.
Right, now all he has to do is compute the compressibility of my image, and the proportion of other possible images with the same amount of Shannon Information that are as, or more, compressible.
Then he can get its CSI! As long as he assume that p(T|H) is the same as p(T|Independent draws). If he doesn’t assume that, then I’ll be interested to see how he computes p(T|H).
rhampton (quoting Dembski from Lizzie’s transcription of his recent talk):
Dembski and Marks have been making this argument for some years now. It assumes that the fitness surface, the dependence of fitnesses on genotypes, has a few good fitnesses scattered about the surface, and all the other genotypes are extremely bad. Now ask yourself: if we were in such a case, what would be the effect of random mutation? Answer: it would carry you to as bad a genotype as if you changed, not one base in the DNA, but all of them at the same time.
It’s very clear that although single mutations are mostly bad (or neutral), they aren’t anywhere near as bad as that. So the fitness surface for actual life is not like the easter-egg hunt. It does in effect “say” to you “warmer, colder”.
For Dembski and Marks, this proves that a Designer has set up those fitness surfaces. I have been making, in response, the point that the smoothness of those surfaces is mostly a result of the laws of physics. You will find this in 2009 in two of my postings on Panda’s Thumb here and here, and in the sections on “Smuggling” and “Evolvability” in my 2007 article in Reports of the National Center for Science Education (here). My objection is also related to the arguments many other people have made against Dembski’s No Free Lunch argument.
The laws of physics show that action at a distance is weak: if a gene functions in my eyeball and another in my toe, most likely these do not interact (much). If the genes act at different times in my life, then that too is reason to suspect that their effects will not be tightly dependent on each other. Different parts of our chemical metabolism also don’t interact tightly. A digestive enzyme cutting up proteins in my gut and a glycolytic enzyme working within a cell do not show tight dependence of the fitness effects of changes in their sequences.
In the null-hypothesis world that Dembski and Marks use, every feature of the real world interacts tightly with every other. Any change to your genotype is disastrous. But physics does not work that way. When I type this sentence, the movement of my fingers does not cause your roof to start leaking. When I move a pebble in my back yard, the trees, fences, and grass there do not instantly rearrange themselves into a meaningless jumble. The physics that we have and the chemistry that we have do not show tight interaction between everything and everything else.
So, as rhampton implies, Dembski has made a terribly strong assumption in using the easter-egg analogy, or the treasure chest analogy. Physics works otherwise, and if Dembski wants to argue that this is evidence of a Designer, he has to argue about where the laws of physics come from. Not waste everyone’s time by implying that biologists have a wrong understanding of evolution.
(As a footnote I will also point out that a fitness surface is not just a plot of fitnesses against discrete genotypes. It is a population that is evolving, and we plot fitnesses against gene frequencies. The fitness at any combination of gene frequencies is a weighted average of the fitnesses of all the genotypes that you could construct, each weighted by its genotype frequency. That makes for further smoothing of the fitness surface. But the point above is sufficient to do away with the easter-egg analogy.)
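The contrast is easy to simulate. A toy sketch (illustrative only, not Dembski and Marks’s actual model): a mutation-and-selection hill-climber on an additive fitness surface, where single mutations carry “warmer/colder” information, versus a needle-in-a-haystack surface, where they carry none:

```python
# Toy contrast (illustrative only): a single-mutation hill-climber on a
# smooth additive fitness surface, versus a needle-in-a-haystack surface
# where every non-target genotype scores the same.
import numpy as np

rng = np.random.default_rng(1)
L = 50                                    # binary genotype length
target = np.ones(L, dtype=int)

def smooth_fitness(g):
    """Each matching locus adds a little fitness: 'warmer/colder' info."""
    return int((g == target).sum())

def needle_fitness(g):
    """All-or-nothing: no gradient to follow."""
    return 1 if (g == target).all() else 0

def hill_climb(fitness, steps=5000):
    g = rng.integers(0, 2, L)
    for step in range(steps):
        trial = g.copy()
        trial[rng.integers(L)] ^= 1       # single point mutation
        if fitness(trial) > fitness(g):   # keep only improvements
            g = trial
        if (g == target).all():
            return step                   # reached the peak
    return None                           # gave up

print(hill_climb(smooth_fitness))         # succeeds, typically in a few hundred steps
print(hill_climb(needle_fitness))         # None: no gradient, no luck
```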
Damn. I thought I had that one figured out.
By the way, after I posted my previous comment, it ended up in the thread before O’Magain’s comment which was some hours older.
Lizzie’s reply to O’Magain also appears before the comment it replies to. Maybe we do live in a totally chaotic universe …
Joe G is throwing a major hissy fit on his blog, screaming and cursing but still unable to do any CSI calculations.
He let me post a few comments but now he’s reverted back to deleting selected posts with questions he can’t answer. Don’t expect any honest discussion there.
ETA: That’s weird. This was a reply to OM below but it showed up above his comment (?)
You sought out and actually posted on Joe’s blog? Here I am wondering if even participating here and at AtBC is giving the IDCists too much attention — thanks for making me feel better!
“An alcoholic is someone who drinks more than you do.”
Good ol’ Joe, the Goodwill Ambassador of ID.
Here’s one fairly simple way of specifying the pattern:
I’ve computed the lag-1 autocorrelation for each row and column of the image, and get a mean correlation of 0.89. This is obviously very improbable under the null of random independent draws for each pixel value, so I’m currently working out the distribution of correlations under that null.
That should enable me to figure out what proportion of images have as high an autocorrelation or higher.
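For anyone who wants to try this at home, here is a minimal sketch of that computation in pure Python (the helper names are mine, not Lizzie's, and her actual code isn't shown). The lag-1 autocorrelation of a row is just the Pearson correlation between the row and itself shifted by one pixel:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences.
    (Undefined if either sequence is constant, e.g. an all-black row.)"""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lag1_autocorr(row):
    """Lag-1 autocorrelation: correlate the row with itself shifted by one."""
    return pearson(row[:-1], row[1:])

def mean_lag1_autocorr(image):
    """Average the lag-1 autocorrelation over all rows and all columns
    of a 2D image given as a list of equal-length rows."""
    rows = [lag1_autocorr(r) for r in image]
    cols = [lag1_autocorr(c) for c in zip(*image)]
    return sum(rows + cols) / (len(rows) + len(cols))
```

A smooth ramp of pixel values gives an autocorrelation of 1; independent random pixels ("grey goo") give values near 0, which is why a mean of 0.89 is so far out in the tail of the null distribution.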
Can’t seem to make this post sit in its right place! I used to have a trick, but seem to have lost it.
Well, I got a z score of 1.4492e+03 for my image (mean Fisher-transformed autocorrelation under the null = -0.0014, standard deviation = 9.8607e-04).
I get precision overflow problems, though, when trying to compute the bits. I just get Inf.
Anybody able to calculate it?
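One way around the overflow: work in log space and never form the tail probability itself. For a z score in the thousands, the normal tail probability underflows double precision to zero (hence the Inf), but its logarithm is a perfectly ordinary number, via the standard large-z asymptotic log P(Z > z) ≈ -z²/2 - log z - ½ log 2π (the leading term of the Mills-ratio expansion). A sketch, assuming that approximation is acceptable:

```python
import math

def log_normal_sf(z):
    """Natural log of the upper-tail probability P(Z > z), using the
    leading term of the Mills-ratio asymptotic expansion. The error in
    the log is O(1/z^2), negligible for z above about 10."""
    return -0.5 * z * z - math.log(z) - 0.5 * math.log(2 * math.pi)

def bits_from_z(z):
    """Surprisal in bits, -log2(p), computed without ever forming p."""
    return -log_normal_sf(z) / math.log(2)
```

A z of about 26 comes out at roughly 500 bits (Dembski's universal bound), and a z in the thousands gives a finite answer on the order of a million bits rather than Inf.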
So they’ve found a match over at UD.
It’s so very, very telling that one tool they didn’t even attempt to use was the ‘explanatory filter’. No CSI calculation was performed; they used image-matching software.
What does that say about CSI, the EF and the ID argument in general?
The image looks to me like the surface of plywood that’s been weathering out in the open for a while. It’s possible to create simulated woodgrain images in Photoshop, in a procedure that starts with Photoshop’s RNG-dependent ‘Clouds’ filter; it might be interesting to see how much CSI exists in such an image at each stage of its creation.
So does a perfectly uniform (and very compressible) jet-black image. Joe, failing at the basics again. Count the letters of the recipe of teh_caek!
Joe is living proof that the Big Tent will accept anyone.
That kairosfocus character thinks taking logarithms of Np makes CSI more sophisticated.
That comment pretty much tells it all. What more needs to be said?
Oh, I’m pretty sure Joe has whole posts about you on his blog! 😉
That’s very impressive! And isn’t it awesome? It’s the Skeiðarárjökull Glacier in Iceland, showing evidence of successive eruptions of Grímsvötn as bands of black ash.
Anyway, it turns out that the z score for a p value of 1/(10^150) is only -26, so my picture (z score of about -4000) is way past the threshold, under the null of random independent draws.
So it’s got CSI, unless we adjust p(T|H) to take account of “Darwinian and other material processes”.
How do we do that, ID proponents?
That, it seems to me, is the key question 🙂
Joe G makes a key point:
Joe should realise that VJT’s in a cleft stick here. IIRC, he had a go at calculating CSI before and after a gene duplication event, and had to concede that CSI was added (although he later tried to handwave it away).
So unless one tries to say (as Joe sometimes does) that gene duplications and by extension other genome-altering events are themselves designed – which is not supported by any evidence (no, not even in Spetner’s book) – one has to accept that CSI measurement is not useful in detecting design when there’s no design history.
And a key plank of the ID structure rots away. Maybe it’s not Darwinian evolution after all that’s the “tottering edifice”.
As a side note, I see that kairos “chi-squared” focus has resolutely refrained from addressing the growing body of work indicating that “protein space” is much more readily explorable than he claims.
I’d just like to put to rest one objection to my challenge – that because it’s a pixelated photograph it is by definition designed!
Well, duh. Pixelating the glacier simplifies the thing; it doesn’t complexify it! The thing itself is perfectly “digital”: it consists of molecules arranged in a pattern, just as DNA is. But people don’t claim that DNA has CSI because somebody wrote it down with symbols for each nucleotide! Obviously that, the written representation, is “intelligently designed”. I just coded my glacier so that it became a tractable problem. You could do it on the real thing if you wanted, but you’d get a bigger value, not a smaller one, because there are a heck of a lot more molecules in that glacier than pixels in my photo!
The point is that it’s a pattern, and it’s a pattern found in nature, just as a string of amino acids is a pattern, or a DNA sequence.
And yes, you do have to consider the material processes that might have produced the pattern, in this case the iterative processes of glacier formation and volcanic eruption, in order to compute p(T|H), and thus its CSI.
That’s the bit I was waiting for someone to tell me how to do.
I have a stalker, er, ardent admirer? How cute!
I’ll put visiting Joe’s blog on my to do list, right after “experiencing the scent of hantavirus”.
KF is upset:
“F/N: I see that despite explicit use of the explanatory filter in inferring not-designed, some over at TSZ — RTH, this means you in particular — are unable to recognise it in action. Sadly but unsurprisingly revealing. KF”
No, let’s be honest. Someone banged it into image-recognition software. That is all. The end. If Bible verses were CSI calculations, UD would be a hotbed of science. They aren’t, and it isn’t.