- ‘Information’, ‘data’ and ‘media’ are distinct concepts. A medium is the mechanical support for data and can be any material, including DNA and RNA in biology. Data is the set of symbols that carries the information and is stored and transmitted on the medium; the A, C, G and T nucleotides forming strands of DNA are biological data. Information is an entity that answers a question and is represented by data encoded on a particular medium. Information is always created by an intelligent agent and used by the same or another intelligent agent. Interpreting the data to extract the information requires a deciphering key, such as a language. For example, proteins are assembled from amino acids selected according to a translation table (the deciphering key) applied to nucleotide triplets.
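For readers who want a concrete picture of a deciphering key, here is a minimal sketch in Python, with only a handful of entries from the standard genetic code shown (a real ribosome uses the full 64-codon table):

```python
# Toy illustration: a partial codon table acts as the deciphering key that
# turns raw nucleotide data into a different kind of data (amino acids).
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly",
    "AAA": "Lys", "TGG": "Trp", "TAA": "STOP",
}

def translate(dna: str) -> list:
    """Read the string three letters at a time and decode each codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(codon, "?") for codon in codons]

print(translate("ATGTTTGGCAAATAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'STOP']
```

Without the table the string is just symbols; with it, the same string answers the question “which amino acids come next?”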
- Information is entirely separate from matter. The same medium (matter) may carry data representing information for one or more users, or random noise if the same bits have been randomly configured. Furthermore, without a deciphering key, one user’s information is random noise to another (like bird songs to unrelated birds). Information can be encoded in different ways (like distinct languages), resulting in data sets of unequal size. The size of the data is, in practice, always larger than the information carried because of redundancy, which is necessary to maintain the integrity of the carried or stored information.
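As a minimal sketch of why redundancy makes the data larger than the information while protecting its integrity, consider a simple triple-repetition code (chosen purely for illustration; real systems use more efficient error-correcting codes):

```python
# Toy illustration: 3x repetition code. Nine bits of data carry only three
# bits of information, but a single flipped bit per triple can be corrected.
def encode(bits: str) -> str:
    return "".join(b * 3 for b in bits)

def decode(coded: str) -> str:
    triples = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return "".join("1" if t.count("1") >= 2 else "0" for t in triples)

message = "101"
sent = encode(message)              # '111000111'
received = "110000111"              # one bit corrupted in transit
print(decode(received) == message)  # True: the information survives the noise
```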
- The biological cellular system is strikingly similar to human-built autonomous information systems and unlike anything else observable in the inert universe. A medium can be anything, including any collection of atoms, and, without a decoding key, the same medium can support an infinity of data sets. For instance, a DNA chain encodes one set of data when read left to right, another when read in reverse, yet another when read pair by pair, and so on. But in living organisms DNA actually encodes specific information that is uniquely decoded with a key. Furthermore, the information in the DNA is redundantly encoded to ensure its long-term integrity. Aside from DNA and RNA, we can observe many other information systems in nature (with decoding keys such as pheromones, antigens, and hormones), but all are limited to the living.
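A minimal sketch of the “one medium, many possible data sets” point, using a made-up eight-letter strand and three arbitrary reading conventions:

```python
# Toy illustration: the same physical symbol sequence yields different data
# depending on the reading convention (the key) applied to it.
strand = "ACGTAGGC"

left_to_right = list(strand)                                        # ['A', 'C', 'G', 'T', 'A', 'G', 'G', 'C']
right_to_left = list(strand[::-1])                                  # ['C', 'G', 'G', 'A', 'T', 'G', 'C', 'A']
pair_by_pair = [strand[i:i + 2] for i in range(0, len(strand), 2)]  # ['AC', 'GT', 'AG', 'GC']

print(left_to_right)
print(right_to_left)
print(pair_by_pair)
```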
- DNA mutations are wrongly interpreted by some as spontaneous information generation; however, the limitations of DNA show that DNA is not ‘the code of life’ but only a configurable portion of ‘the code of life’. In addition, adaptive mutations appear limited in range, reversible when the stimulus is removed, and repeatable, indicating their non-random character (as in “the peppered moth”, “Darwin’s finches”, and antibiotic resistance). This is exactly how advanced human-designed computer systems behave: they have been built with adaptability in mind, so to the untrained eye these systems seem completely autonomous and infinitely self-reconfiguring (the “Artificial Intelligence” fallacy).
- Information cannot just pop into existence in the absence of an intelligent agent. That is why all noise-based attempts at generating information, including all “infinite monkey” experiments, have failed, and that is why “Artificial Intelligence” will never “rise”. Separating information from noise has been an important human activity for thousands of years, and success in this endeavor has always rested on two critical elements: a deciphering key and redundant encoding.
- Information can exist for a long time without an intelligent agent. Information can be stored, transmitted and downloaded into machines that perform certain operations whether or not the intelligent agent is still around. Given everything we know about information, not observing the intelligent agent at work should never lead to the absurd assumption that the information machine “arose without a designer”. It is no coincidence that teleological terms such as “function” and “design” appear frequently in the biological sciences.
- Data is everywhere (including the fossil record and marks of past events such as asteroid impacts), but that data becomes information only to intelligent agents like us (organisms) and only when we learn to interpret it and to make predictions (answer questions). When we look at sedimentation and erosion, we take that data and make information from it based on our knowledge. There is no information in the rocks, just data.
Summary:
- ‘Information’, ‘data’ and ‘media’ are distinct concepts
- Information is entirely separate from matter
- Biological cellular systems are strikingly similar to human-built autonomous information systems and unlike anything else observable in the inert universe
- DNA mutations are wrongly interpreted by some as spontaneous information generation
- Information cannot just pop into existence in the absence of an intelligent agent
- Information can exist for a long time without an intelligent agent
- Data is everywhere (including fossil record), but that data becomes information only to intelligent agents
Links:
https://schneider.ncifcrf.gov/
https://plato.stanford.edu/entries/information-biological/
https://evolutionnews.org/2014/08/biological_info_1/
http://www.worldscientific.com/worldscibooks/10.1142/8818#t=toc
https://discourse.biologos.org/t/information-entropy/35327/21
Notes:
Con: Information is just entropy.
Pro: Shannon never said “Information = Entropy”. Wikipedia quote: “Entropy is a measure of unpredictability of the state, or equivalently, of its average information content.” Hence entropy is just an attribute of information. In addition, information always requires a deciphering key and some redundancy, both of which reduce entropy. Information is meaningful only to the sender and receiver (and the spy). To all others it’s noise.
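For anyone who wants to see what “average information content per symbol” means operationally, here is a minimal sketch of the Shannon entropy calculation (the sequences are made up for illustration):

```python
import math
from collections import Counter

def shannon_entropy(sequence: str) -> float:
    """Average information content in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Four equally frequent nucleotides give the maximum of 2 bits per symbol;
# a biased (more predictable) sequence gives less.
print(shannon_entropy("ACGTACGTACGTACGT"))  # 2.0
print(shannon_entropy("AAAAAAAACGTAAAAA"))  # ~0.99
```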
Con: Random number generators can open any lock.
Pro: The human opens the lock, not the random generator. The random generator is just a tool to the human.
colewd,
Sorry Bill, but I could not find a non-offensive way to describe those standards.
Later.
Corneel,
Do you mean the argument that you can add 500 bits of information to the genome by having 500 single-bit adaptive mutations in 500 genes?
colewd,
Given Bill Cole’s track record, why do you grant any credence at all to Bill Cole’s evaluation of the evidence?
Not that I am aware of. The complete sequence length of the genome is known for a fair number of organisms nowadays, but there is large variation in the proportion that is functional (including, as you say, coding and regulatory sequences). I would guess you can use the summed length of the coding sequences or of the conserved sequences as a proxy. But I don’t know whether anybody has collected that type of data.
Yes, insects tend to have fewer genes than humans (for example, Drosophila melanogaster has ~13,500 genes, humans have about 25,000), so I think it is safe to assume that humans indeed have a greater amount of functional DNA than insects. However, the number of genes may be misleading as well, since it can rapidly increase through the expansion of certain gene families or through polyploidization events, resulting in greatly redundant genomes.
So I guess I don’t know. Sorry that I can’t give your questions a better answer. Perhaps some of our fellow TSZers are more savvy about this than I am.
The type that goes by the name of “functional specified information” (and a few others, I believe). Still trying to come to grips with it, but I believe that when unfavorable alleles are being displaced by sequences that are superior in fulfilling that function, then the functional specified information has increased, not decreased.
Please correct me if you think that is mistaken.
Please read the sentence you just quoted a bit more carefully: the miscarriage IS the process of purifying selection in action. Yes, that is the same process that allows healthy babies to be born. Are you really disputing this? Do I need to explain that “selection” means that some things get a pass and others don’t?
In fact: no they don’t. You can answer those questions whenever you want.
Not by having the mutations, but by having the frequencies of favorable alleles increase by natural selection. I was quoting that in order to prevent you from changing the subject from natural selection to the “origin of information”, i.e. mutations.
Just follow the link in the post you answered to. Joe wrote a nice OP explaining what he meant.
Your a priori beliefs and lack of curiosity cause you to short-circuit any meaningful consideration of the evidence.
You’d make a great detective, finding that each and every crime was due to design. It’s meaningless in forensics, and it’s meaningless in science. Whether intelligence was behind something can matter, however (it’s good to know if there was a crime), which is why we recognize the lack of intelligence affecting life vs. how it identifiably affects manufactured items.
Glen Davidson
One of the bizarre beliefs of creationists is that mutation is the “origin” of the information and that changes of gene frequency don’t count. Which flies in the face of Information Theory.
Yeah, what’s wrong with that argument anyway? 500 bits is 500 bits, whether it is added all over the genome or constrained to a single gene. The amount doesn’t change.
Your argument is circular and begs the question. How is it that you so easily spot the flaws in the arguments of others but can’t spot those same flaws in your own arguments?
So?
ok, thanks for that answer. I’m not sure it narrows things down that much, but let’s keep exploring. Joe mentioned that he wrote a paper in which he invented something called “adaptive information.” So we’re not talking about that? Because the talk of alleles being displaced by other alleles that are “superior” (and what a loaded term that is) makes me think you are talking about “adaptive information.”
Hey, I’m not dissing you. I think a lot of people here are talking about “information” and no two of them are talking about the same thing at the same time, lol.
Why would it be bizarre to think that the origin of new information is mutations?
Is it because new mutations don’t actually change the entropy of the source such that the average information per symbol never actually changes?
So how does a change in gene frequency change the average information per symbol?
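One way to make the question concrete, purely as a sketch: treat the allele frequencies at a single biallelic locus as the symbol probabilities of a Shannon source. On that (admittedly simplified) framing, a change in gene frequency does change the average information per symbol:

```python
import math

def locus_entropy(p: float) -> float:
    """Shannon entropy in bits of a biallelic locus with allele frequencies p and 1 - p."""
    return -sum(x * math.log2(x) for x in (p, 1.0 - p) if x > 0)

# Hypothetical frequencies of one allele as selection drives it toward fixation.
for p in (0.5, 0.7, 0.9, 0.99):
    print(f"p = {p:.2f}  H = {locus_entropy(p):.3f} bits per allele sampled")
```

Under this framing, selection moving p from 0.5 toward fixation lowers the per-symbol entropy at that locus; other framings (such as the functional information discussed later in the thread) give different answers.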
So here’s a quote from Shallit himself from the link provided earlier by Rumraket. Talk about equivocation and wiggle room:
So if you guys aren’t talking about “Shannon information” what are you talking about?
Would Shallit be above using one definition of information to show how creationists using a different definition of information are wrong?
Surely.
No, it just points out the implied contradiction in the idea that information is immaterial yet comes in physical form (DNA, tree rings, hard drives, electrical pulses in copper wire, vibrations in air, light, books, etc.)
What sense is there to the creationist argument that evolution can’t create information if it depends on a particular definition of information?
What is superior about the definition of information used by creationists who claim evolution can’t create information? Where does that definition of information have any application besides in creationist arguments?
And no creationist has managed to show that evolution can’t create information using Shannon information, by the way, so that’s not going to be a help for you either.
Corneel,
So your claim is that purifying selection is a process. Can you describe the process?
Are you also claiming that selection and purifying selection are the same?
Corneel,
Is this because natural selection and origin of information are completely different subjects?
How would you compare and contrast natural selection and DNA repair as mechanisms?
Joe explained what he meant by that:
We need some “function”, some yardstick, that quantifies to what extent any particular sequence or configuration carries functional information. For biological organisms, I think it makes a lot of sense for that function to be fitness. But if you disagree, then that’s fine. Just tell me what you think that function is. This is the question I put to Bill Cole as well, twice. So far, no answer.
No, that’s fine. Information theory is definitely not my field of expertise, so now is your chance to bluff yourself in. 🙂
Which 1978 paper was he referring to, do you know?
ETA:
But 30 years before Hazen and Szostak.
I’m not an expert either, but I don’t have to be to be able to spot creationist bullshit.
The Evolution-Lobby’s Useless Definition of Biological Information
You love to bury your nose in it. Admit it.
LOL. Are you asking me to describe natural selection? Here goes.
1) There is heritable phenotypic variation for certain traits among individuals in a population
2) There is variation in survival and reproductive success
3) Some of the variation in survival and reproductive success is the result of phenotypic trait variation
Presto, there will be a change in the genetic composition of the population. In this particular case, that change is the selective removal of harmful genetic variants. Usually these get replenished by recurrent mutations and we’ll have ourselves mutation-selection balance (a toy simulation sketch follows below).
Did I pass?
Yes, purifying, or negative, selection is a type of natural selection.
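Here is the toy simulation mentioned above: a minimal deterministic sketch of mutation-selection balance at a single haploid locus, with made-up values for the selection coefficient and mutation rate (infinite population, no drift).

```python
# Toy deterministic model of mutation-selection balance at one haploid locus.
# s is the hypothetical fitness cost of the harmful allele; u is the rate at
# which recurrent mutation keeps producing new copies of it.
s, u = 0.01, 1e-4
q = 0.5  # starting frequency of the harmful allele

for generation in range(2000):
    q = q * (1 - s) / (1 - s * q)  # purifying selection removes the harmful allele
    q = q + u * (1 - q)            # recurrent mutation replenishes it

print(f"equilibrium frequency ~ {q:.4f} (theory: u/s = {u / s:.4f})")
```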
Mung,
I was going to interact with you, then I realised my mistake.
ktnxby
Yes [assuming you mean mutations by “origin of information”]
I wouldn’t. DNA repair deals with damage before it ends up in the germline. Natural selection needs genetic variation to be expressed.
Are you going to answer my questions too?
ETA: clarification
Yes it’s “1978. Macroevolution in a model ecosystem. American Naturalist 112 (983): 177-195. (JSTOR)”
Didn’t watch the video, did you? 😉
But that is because you have become quite the expert in spotting creationist bullshit 😉
Corneel,
Are you assuming that DNA repair does not exist in the germline?
Can you put a little more thought into this?
Sure. I am thinking about the relationship between fitness and function, where one describes something very specific and the other describes something more general.
Can we describe biological fitness as a measurement (if that is possible) of the output of a large set of interdependent biological functions?
For the sake of argument, can we at this point limit a biological function to an individual protein-coding gene, as that would merge with gpuccio’s definition of function?
Corneel,
When he learns to spot evolutionist bullshit he will be the star of the show.
Human oocytes are arrested in meiosis during fetal development, so no DNA repair is happening on that front. It isn’t uncommon, in my experience, for conversations about mutations to flip, without regard to the distinction, between germline and somatic cells despite their obvious differences and life-stage ramifications.
That article argues that Shannon information is a poor tool for determining functional information content, because we need to consider the functionality of the information: it is essentially only information if it is involved in some biological function.
I would actually agree with that. But nowhere in that article is it shown that evolution can’t produce new functional information. Just that Shannon’s theory of information is a poor theory for doing that.
So what theory of information should we use to calculate information content, Mung?
If I give you the DNA sequence for a protein-coding gene that is known to be functional in some organism, will you do us a favor and calculate how much functional information that gene constitutes? Then we can proceed to analyze whether mutations that affect that function have an effect on the amount of information in the gene. Deal?
I admit it. In fact I’m proud that I dedicate such a large fraction of my free time to debunking creationist ignorance and stupidity. Somebody has to do it and some formerly creationist people have told me I do it well. I will probably be involved in it for the rest of my life. 🙂
Rumraket,
Do you mind if I post this at UD?
Thanks for the favor, I owe you one!
He’s an expert at that too.
Exactly this. It’s amazing how claims about “gain” or “loss” can be made without putting actual values out there. If you can’t say by how much it went down, how do you know it went down at all? Perhaps it went up!
colewd,
Perhaps you can ask at UD for a list of biological (or otherwise) objects that have had the amount of functional information calculated for them by ID supporters and what those values were?
I’d be interested to know.
Not at all. It’s been asked many times before so be prepared to see your friends dodge the challenge and start bringing up irrelevancies. I trust you will take them to task for trying to dodge such a simple request?
That’s what that whole MathGrrl debacle was about. Anyone remember that? Somebody pretending to be a girl went to UD and started asking how to calculate information content before and after gene duplication (for example), using Dembski’s “CSI” and never got an answer. Squirming, flailing, and dodging ensued.
The IDcreationists over there were all quite sure that mere duplication doesn’t increase information, but when it came time to demonstrate this with a simple calculation using Dembski’s CSI, nobody was able to do it. They didn’t even know how to calculate the information content of one gene. How many bits of information are there in sequence X? No answer, just shitty excuses for not wanting to even give it a shot.
Fantastic really.
You’ll need to explain what you mean by “information content” before that question makes any sense.
Amount. Quantity. How much information there is. Like the “caloric content” of some food: it has so many joules per gram.
I’d like to see a similar thing done for functional DNA, like a protein coding gene of some length. How much functional information is that? I’m assuming we measure it in bits.
Creationists like Gpuccio say evolution couldn’t possibly make more than 500 bits of it. So I’d like to know how to calculate situations before and after mutation, to see how and if that has affected the amount.
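For what it’s worth, the Hazen/Szostak measure (linked a few comments below) defines functional information as I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible sequences that meet or exceed the function threshold Ex. Here is a minimal sketch with made-up numbers, since the real functional fraction is unknown for most genes and estimating it is exactly the hard part:

```python
import math

def functional_information(n_functional: int, n_total: int) -> float:
    """Functional information in bits: I = -log2(fraction of sequences that work)."""
    return -math.log2(n_functional / n_total)

# Hypothetical example: a 10-nucleotide stretch has 4**10 = 1,048,576 possible
# sequences; we *assume* 1,024 of them meet the function threshold.
total = 4 ** 10
print(functional_information(1024, total))  # 10.0 bits

# If mutations change how many sequences still satisfy the function,
# the functional fraction changes and so does the bit count.
print(functional_information(2048, total))  # 9.0 bits
```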
No, and I can’t read the paper either.
ok, sure, but it doesn’t seem relevant.
https://hazen.carnegiescience.edu/research/complexity
Does anyone here know what they are talking about?
Are you giving those two references because you think those are the definitions of information we should be using to calculate the quantity of information in biological polymers like DNA?
No. They were given to show that no one here knows what they are talking about.
How do they accomplish that? And presumably that includes Nonlin who made the OP as also not knowing what he’s talking about?