Back in 2016, William Dembski officially ‘retired’ from ‘Intelligent Design’ theory & the IDM. He wrote that “the camaraderie I once experienced with colleagues and friends in the movement has largely dwindled.” https://billdembski.com/personal/official-retirement-from-intelligent-design/ This announcement may have come rather late, after Dembski’s star had already started to fade. Indeed, it was more than 10 years after the Dover trial debacle, and long after I personally heard another of the leaders of the IDM at the DI say, in 2003, that he no longer reads Dembski’s books. Yet no doubt Dr. Dembski was one of the leading voices of the IDM, if not the leading voice, for almost two decades. Here’s one UK IDist lamenting Dembski’s statement: https://designdisquisitions.wordpress.com/2017/02/19/william-dembski-moves-on-from-id-some-reflections/ Yet when a new paycheck from the Discovery Institute was offered via the Bradley Center, Dembski seems to have gotten right back on the ideological bandwagon in Seattle, reversing the dwindling of his IDist camaraderie.
Paul Davies, cosmologist, physicist and agnostic, together with Sara Imari Walker, proposed a theory that information, and not chemicals, is at the very foundation of life…Here
Why?
In my research, I have recently come across self-assembling proteins and molecular machines, so-called nano-machines, one of them being the bacterial flagellum…
Have you ever wondered what mechanism is involved in the self-assembly process?
I’m not even going to ask how the self-assembly process supposedly evolved, because the question would be offensive to engineers, who struggle to design assembly lines that require intelligence to assemble, operate and supervise… So far engineers can’t even dream of designing self-assembling machines… But when they do accomplish that one day, it will be used as proof that random, natural processes could have done it too… in life systems… lol
If you don’t know what I’m talking about, just watch this video:
The greatest story ever told by activists in the intelligent design (ID) socio-political movement was that William Dembski had proved the Law of Conservation of Information, where the information was of a kind called specified complexity. The fact of the matter is that Dembski did not supply a proof, but instead sketched an ostensible proof, in No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). He did not go on to publish the proof elsewhere, and the reason is obvious in hindsight: he never had a proof. In “Specification: The Pattern that Signifies Intelligence” (2005), Dembski instead radically altered his definition of specified complexity, and said nothing about conservation. In “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information” (2010; preprint 2008), Dembski and Marks attached the term Law of Conservation of Information to claims about a newly defined quantity, active information, and gave no indication that Dembski had used the term previously. In Introduction to Evolutionary Informatics, Marks, Dembski, and Ewert address specified complexity only in an isolated chapter, “Measuring Meaning: Algorithmic Specified Complexity,” and do not claim that it is conserved. From the vantage of 2018, it is plain to see that Dembski erred in his claims about conservation of specified complexity, and later neglected to explain that he had abandoned them.
Jonathan Wells, who is an embryologist and an ID advocate, has a very interesting paper and video on the issue of ontogeny (embryo development) and the origins of information needed in the process of cell differentiation…
Wells thinks that a major piece of information needed in the process of embryo development can’t be explained by DNA, and therefore may require an intervention of an outside source of information, such as ID/God…
If you don’t want to watch the whole video, starting at about the 40-minute mark is just as good, especially at 43 minutes.
On Uncommon Descent, poster gpuccio has been discussing “functional information”. Most of gpuccio’s argument is a conventional “islands of function” argument. Not being very knowledgeable about biochemistry, I’ll happily leave that argument to others.
But I have been intrigued by gpuccio’s use of Functional Information, in particular gpuccio’s assertion that if we observe 500 bits of it, this is a reliable indicator of Design, as stated here, at about the 11th sentence of point (a):
… the idea is that if we observe any object that exhibits complex functional information (for example, more than 500 bits of functional information ) for an explicitly defined function (whatever it is) we can safely infer design.
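For concreteness, here is a minimal sketch of the quantity being discussed, assuming the standard Hazen–Szostak definition of functional information (minus log base 2 of the fraction of sequences that achieve the defined function). The numbers below are invented for illustration, and gpuccio’s own usage may differ in detail.

```python
import math

def functional_information(n_functional: int, n_total: int) -> float:
    """FI = -log2(fraction of sequences that meet the functional threshold)."""
    return -math.log2(n_functional / n_total)

# Purely illustrative numbers: a 100-residue protein space of 20**100 sequences,
# of which a hypothetical 10**40 meet the threshold for the defined function.
print(f"{functional_information(10**40, 20**100):.0f} bits")  # about 299 bits
```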
I wonder how this general method works. As far as I can see, it doesn’t. There would seem to be three possible ways of arguing for it, and in the end two don’t work and one is just plain silly. Which of these is the basis for gpuccio’s statement? Let’s investigate …
- ‘Information’, ‘data’ and ‘media’ are distinct concepts. Media is the mechanical support for data and can be any material, including DNA and RNA in biology. Data are the symbols that carry information; they are stored and transmitted on the media. The ACGT nucleotides forming strands of DNA are biological data. Information is an entity that answers a question and is represented by data encoded on a particular medium. Information is always created by an intelligent agent and used by the same or another intelligent agent. Interpreting the data to extract the information requires a deciphering key, such as a language. For example, proteins are made of amino acids selected from nucleotide triplets according to a translation table (the deciphering key).
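A minimal sketch of that last point, purely as illustration: a DNA string (the data) is read in triplets and mapped to amino acids via the genetic code (the deciphering key). The table below is only a small excerpt of the standard code.

```python
# Fragment of the standard genetic code (DNA coding-strand codons -> amino acids).
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "TTT": "Phe", "TTC": "Phe",
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read the DNA string three letters at a time and look each codon up."""
    return [CODON_TABLE.get(dna[i:i + 3], "?") for i in range(0, len(dna) - 2, 3)]

print(translate("ATGGGTTGGTAA"))  # ['Met', 'Gly', 'Trp', 'STOP']
```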
As Tom English and others have discussed previously, there was a book published last year called Introduction to Evolutionary Informatics, the authors of which are Marks, Dembski, and Ewert.
The main point of the book is stated as:
Indeed, all current models of evolution require information from an external designer to work.
(The “external designer” they are talking about is the modeler who created the model.)
Another way they state their position:
We show repeatedly that the proposed models all require inclusion of significant knowledge about the problem being solved.
Somehow, they think it needs to be shown that modelers put information and knowledge into their models. This displays a fundamental misunderstanding of models and modeling.
It is a simple fact that a model of any kind, in its entirety, comes from a modeler. Any information in the model, however one defines information, is put in the model by the modeler. All structures and behaviors of any model are results of modeling decisions made by the modeler. Models are the modelers’ conceptions of reality. It is expected that modelers will add the best information they think they have in order to make their models realistic. Why wouldn’t they? For people who actually build and use models, like engineers and scientists, the main issue is realism.
To see a good presentation on the fundamentals of modeling, I recommend the videos and handbooks available free online from the Society for Industrial and Applied Mathematics (SIAM): [Link].
For a good discussion of what it really means for a model to “work,” I recommend a paper called “Concepts of Model Verification and Validation”, which was put out by Los Alamos National Laboratory.
Yesterday, I looked again through “Introduction to Evolutionary Informatics” and spotted the Cracker Barrel puzzle in the section “Endogenous information of the Cracker Barrel puzzle” (p. 128). The rules of this variant of triangular peg solitaire are described in the text (or can be found in Wikipedia’s article on the subject).
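For readers who want to play with the idea: endogenous information is minus log base 2 of the probability that a blind search solves the problem. Below is a minimal Monte Carlo sketch (not the book’s exact calculation), assuming the standard 15-hole board with the top hole empty, uniformly random legal moves, and “success” meaning random play ends with a single peg.

```python
import math
import random

# Hole numbering for the 15-hole triangular board:
#          0
#         1 2
#        3 4 5
#       6 7 8 9
#     10 11 12 13 14

# (from, over, to) jump triples; each is also legal in reverse.
TRIPLES = [
    (0, 1, 3), (0, 2, 5), (1, 3, 6), (1, 4, 8), (2, 4, 7), (2, 5, 9),
    (3, 4, 5), (3, 6, 10), (3, 7, 12), (4, 7, 11), (4, 8, 13),
    (5, 8, 12), (5, 9, 14), (6, 7, 8), (7, 8, 9),
    (10, 11, 12), (11, 12, 13), (12, 13, 14),
]
JUMPS = TRIPLES + [(t, o, f) for (f, o, t) in TRIPLES]

def random_game(empty_hole=0):
    """Play one game with uniformly random legal moves; return pegs left."""
    pegs = {i for i in range(15) if i != empty_hole}
    while True:
        legal = [(f, o, t) for (f, o, t) in JUMPS
                 if f in pegs and o in pegs and t not in pegs]
        if not legal:
            return len(pegs)
        f, o, t = random.choice(legal)
        pegs.remove(f)
        pegs.remove(o)
        pegs.add(t)

def endogenous_information(trials=200_000):
    """Estimate -log2 of the chance that random play leaves one peg."""
    wins = sum(random_game() == 1 for _ in range(trials))
    p = wins / trials
    return p, (-math.log2(p) if p > 0 else float("inf"))

if __name__ == "__main__":
    p, bits = endogenous_information()
    print(f"P(random play wins) ~ {p:.5f}  =>  ~{bits:.1f} bits")
```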
I did a talk recently on a new way of understanding Irreducible Complexity using computability theory. I’m curious to see what you all think about it.
Here, one of my brilliant MD PhD students and I study one of the “information” arguments against evolution. What do you think of our study?
I recently put this preprint on bioRxiv. To be clear, this study is not yet peer-reviewed, and I do not want anyone to miss this point. This is an “experiment” too: I’m curious to see whether these types of studies are publishable. If they are, you might see more from me. Currently it is under review at a very good journal, so it might actually turn the corner and get out there. And a parallel question: do you think this type of work should be published?
I’m curious what the community thinks. I hope it is clear enough for non-experts to follow too. We went to great lengths to make the source code for the simulations available in an easy-to-read and annotated format. My hope is that a college-level student could follow the details. And even if you can’t, you can weigh in on whether the scientific community should publish this type of work.
“Functional Information”—estimated from the mutual information of protein sequence alignments—has been proposed as a reliable way of estimating the number of proteins with a specified function and the consequent difficulty of evolving a new function. The fantastic rarity of functional proteins computed by this approach emboldens some to argue that evolution is impossible. Random searches, it seems, would have no hope of finding new functions. Here, we use simulations to demonstrate that sequence alignments are a poor estimate of functional information. The mutual information of sequence alignments fantastically underestimates the true number of functional proteins. In addition to functional constraints, mutual information is also strongly influenced by a family’s history, mutational bias, and selection. Regardless, even if functional information could be reliably calculated, it tells us nothing about the difficulty of evolving new functions, because it does not estimate the distance between a new function and existing functions. Moreover, the pervasive observation of multifunctional proteins suggests that functions are actually very close to one another and abundant. Multifunctional proteins would be impossible if the FI argument against evolution were true.
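To make the alignment-based estimate concrete, here is a minimal sketch of the column-entropy style calculation that the preprint critiques (roughly in the spirit of Durston et al.’s functional sequence complexity). The toy alignment is made up, and real pipelines differ in many details.

```python
import math
from collections import Counter

# Toy alignment: rows are homologous sequences, columns are aligned positions.
ALIGNMENT = [
    "MKVLA",
    "MKILA",
    "MRVLG",
    "MKVLA",
    "MKVMA",
    "MRVLA",
]

def column_entropy(column: str) -> float:
    """Shannon entropy (bits) of the residue distribution in one column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def alignment_fi(seqs: list[str]) -> float:
    """Sum over columns of (log2(20) - column entropy)."""
    length = len(seqs[0])
    columns = ["".join(s[i] for s in seqs) for i in range(length)]
    return sum(math.log2(20) - column_entropy(col) for col in columns)

print(f"Estimated FI: {alignment_fi(ALIGNMENT):.1f} bits")
```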
I am working on a series of tutorials to cover the basics of Intelligent Design, especially the mathematics of it. This is my tutorial on Specified Complexity, and I would appreciate any thoughtful criticism of it.
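This is not necessarily how the tutorial itself formulates it, but for readers who want a concrete handle, here is a rough sketch assuming the algorithmic specified complexity (ASC) variant from Ewert, Marks, and Dembski: surprisal minus description length, with the zlib-compressed size standing in as a computable upper bound on the description length (which makes the result a conservative lower bound).

```python
import math
import zlib

def asc_bits(s: str, p: float) -> float:
    """Crude ASC estimate: surprisal -log2(p) minus an overestimate of the
    description length, here the bit-length of the zlib-compressed string."""
    surprisal = -math.log2(p)
    description_bits = 8 * len(zlib.compress(s.encode()))
    return surprisal - description_bits

# A 100-character string of one repeated letter, modelled as 100 uniform draws
# from a 26-letter alphabet: highly improbable yet very simply described,
# so the estimate comes out large and positive.
s = "A" * 100
p = (1 / 26) ** 100
print(f"{asc_bits(s, p):.0f} bits")
```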
True or false? If p is the probability of an event, then the Shannon information of the event is −log₂ p bits.
I’m quite interested in knowing what you believe, and why you believe it, even if you cannot justify your belief formally.
Formal version. Let Ω be a discrete sample space with probability measure P, and let event E be an arbitrary subset of Ω. Is it the case that, in Shannon’s mathematical theory of communication, the self-information of the event E is equal to −log₂ P(E) bits?
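Just to pin down the quantity being asked about (without giving away the answer), here is a toy evaluation of −log₂ P(E) on a small discrete space; whether Shannon’s theory actually calls that number the self-information of an event is exactly what the question is probing.

```python
import math

# A small discrete probability space: a fair six-sided die.
P = {outcome: 1 / 6 for outcome in range(1, 7)}

# The event "roll an even number" and the quantity in question.
E = {2, 4, 6}
p_E = sum(P[x] for x in E)                 # 0.5
print(f"{-math.log2(p_E):.3f} bits")       # 1.000 bits
```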
Given the importance of information theory to some intelligent design arguments, I thought it might be nice to have a toolkit of basic functions for the sorts of calculations associated with information theory, regardless of which side of the debate one is on.
What would those functions consist of?
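As a starting point, here is a minimal sketch of what such a toolkit might include (these are my own illustrative implementations of the standard textbook quantities, not anything proposed in the original post):

```python
import math
from collections import Counter

def self_information(p: float) -> float:
    """Surprisal of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(probs) -> float:
    """Shannon entropy, in bits, of a distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_of_string(s: str) -> float:
    """Empirical per-symbol entropy of a string."""
    counts = Counter(s)
    return entropy(c / len(s) for c in counts.values())

def kl_divergence(p, q) -> float:
    """Relative entropy D(P || Q) in bits, for two aligned probability lists."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_information(joint) -> float:
    """I(X;Y) in bits, from a joint distribution given as a 2-D list."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(
        pxy * math.log2(pxy / (px[i] * py[j]))
        for i, row in enumerate(joint)
        for j, pxy in enumerate(row)
        if pxy > 0
    )

# Quick checks:
print(self_information(0.5))                          # 1.0 bit
print(entropy([0.5, 0.5]))                            # 1.0 bit
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # 1.0 bit (perfectly correlated)
```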
In the “Elon Musk” discussion, in the midst of a whole lotta epistemology goin’ on, commenter BruceS referred to the concept of a “Boltzmann Brain” and suggested that Boltzmann didn’t know about evolution. (In fact Boltzmann did know about evolution and thought Darwin’s work was hugely important). The Boltzmann Brain is a thought experiment about a conscious brain arising in a thermodynamic system which is at equilibrium. Such a thing is interesting but vastly improbable.
BruceS explained that he was thinking of a reddit post in which the commenter invoked evolution to explain why we don’t need extremely improbable events to explain the existence of our brains (the comment can be found here).
What needs to be added is that none of that happens in an isolated system at thermodynamic equilibrium, or at least it has a fantastically low probability of happening there. The earth-sun system is not at thermodynamic equilibrium. Energy flows outward from the sun at high temperature; some of it hits the earth, and some is taken up by plants, and then by animals, at lower temperatures.
TSZ has made much ado about P(T|H), a conditional probability based on a materialistic hypothesis. They don’t seem to realize that H pertains to their position, and that the fact that H cannot be had means their position is untestable. The only reason the conditional probability exists in the first place is that the claims of evolutionists cannot be directly tested in a lab. If their claims could be directly tested, then there wouldn’t be any need for a conditional probability.
If P(T|H) cannot be calculated, it is due to the failure of evolutionists to provide H and their failure to find experimental evidence to support their claims.
I know what the complaints are going to be ("It is Dembski's metric"), but it is in relation to your position, and it wouldn't exist if you actually had something that could be scientifically tested.
Michael Behe is best known for coining the phrase Irreducible Complexity, but I think his likening of biological systems to Rube Goldberg machines is a better way to frame the problem of evolving the black boxes and the other extravagances of the biological world.
On the left is a photograph of a real snowflake. Most people would agree that it was not created intentionally, except possibly in the rather esoteric sense of being the foreseen result of the properties of water molecules in an intentionally designed universe in which water molecules were designed to have those properties. But I think most people here, ID proponents and ID critics alike, would consider that the “design” (in the sense of “pattern”) of this snowflake is neither random nor teleological. Nor, however, is it predictable in detail. Famously “no two snowflakes are alike”, yet all snowflakes have six-fold rotational symmetry. They are, to put it another way, the products of both “law” (the natural law that governs the crystallisation of water molecules) and “chance” (stochastic variation in humidity and temperature that affects the rate of growth of each arm of the crystal as it grows). We need not, to continue in Dembski’s “Explanatory Filter” framework, infer “Design”.
I see that Mung, a long-time commenter at Uncommon Descent, asks in a thread entitled Backwards eye wiring? Lee Spetner comments:
How do you calculate the size of amino acid sequence space?
As this seems somewhat off-topic there, I thought I’d attempt to answer Mung’s question. I’ll try and be brief.
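As a hedged back-of-envelope sketch (my illustrative numbers, not Mung’s): for a protein of length L built from the 20 standard amino acids, the sequence space has 20^L members, which is also L·log₂(20) bits.

```python
import math

def sequence_space_size(length: int, alphabet: int = 20) -> int:
    """Number of distinct sequences of the given length over the alphabet."""
    return alphabet ** length

L = 100  # an illustrative protein length
size = sequence_space_size(L)
print(f"20^{L} ~ 10^{math.log10(size):.0f}  ({L * math.log2(20):.0f} bits)")
```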