Lots of heat surrounding this question.
My take is that a code must be a system for conveying meaning.
In my view, an essential feature of a code is that it must be abstract and able to convey novel messages.
DNA fails at the level of abstraction. Whatever “meaning” it conveys cannot be translated into any medium other than chemistry. And not just any abstract chemistry, but the chemistry of this universe.
Without implementing it in chemistry, it is impossible to read a DNA message. One cannot predict what a novel DNA string will do.
DNA is a template, not a code.
Go to it.
You mean against UB’s bare logical possibility?
Glen Davidson
UB has merely expressed a claim of incredulity along the lines of Behe’s claim that bacterial flagella could not have evolved, with the addition of some irrelevant “semiotic” blather.
Ironically, that is close to my view.
See:
What, the argument hidden behind a password only website?
William J. Murray,
The ‘semiotic’ thing is beside the point I was getting at. He was talking about protein AARSs. There is a perfectly respectable logical possibility that there were precursor enzymes with AARS capability that were not made of protein. That is the answer to that particular pseudo-logical conundrum – how could protein AARSs make themselves?
But on semiosis, the fact that the only known source of codes is intelligence is not compelling. The only known source of codes is people. Of course that’s not what you want to argue – it’s another chicken-and-egg situation. So we have a couple of logical possibilities:
1) Intelligence is not restricted to people, nor even biological entities.
2) It is not that kind of code – the kind that needs intelligence to produce it.
I agree that we can follow the rules by simulating algorithms (how many programs have you debugged that way?), but I suspect that we only do so because we already understand natural language which is where the core of the issue lies. (That reminds me of a point Walto made to me some time ago on formal languages being grounded in natural ones which I never really followed; perhaps that is what he meant).
I understand the rule following issue as about avoiding an infinite regress (to follow a rule, you need a rule for following rules correctly, …). Kripke sees appealing to the community as the way out; Wittgenstein wants to dissolve the issue as a philosophical confusion.
I’m not sure if right and wrong have to do with truth.
I guess it depends on whether you mean truth in terms of the causal structure of the world (in which case I’d say no it is not that kind of truth) or in terms of the correspondence to a reality of norms (in which case I’d say … too complicated, maybe later).
Did Kripke have a theory of meaning? I know about the causal theory of reference, and according to SEP others (Devitt) have tried to extend that to meaning. What did you have in mind?
Now, back to the Mung coding wars.
This is the take of a friend of mine who has some knowledge in this area:
According to Shannon:
– Information informs by reducing uncertainty in the receiver of a message.
– Uncertainty can be considered a question. Information answers a question.
– There are a finite number (N) of possible answers to a question.
– Each answer to a question can be assigned a symbol.
– The set of N possible symbols that can answer a given question is a Code.
– The Information content of a symbol can be quantified by how much it reduces the uncertainty for the answer to the question on its receipt.
– The probability that one of the N symbols is sent by the sender to answer the question is 1.0 (some answer is always sent).
– If all answers are equally probable, the probability of a given answer is 1/N. That means the information content of a given symbol is log2(N) bits, as it reduces the uncertainty about the answer from 1/N to certainty on its receipt.
– If all answers are not equally probable, then some symbols reduce the uncertainty more than others. The more probable a given symbol is, the less information it conveys on its receipt.
– The number of bits needed to represent the full range of N symbols in a code is log2(N), rounded up (equivalently, b bits can distinguish 2^b symbols).
– The sending of a symbol from sender to receiver requires that bits are conveyed in some material fashion, such as pulses of energy or movement of matter.
– Stored information can be treated the same way, as it can be considered a delayed message.
. . .
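The bullet points above can be sketched numerically. This is a minimal illustration of Shannon self-information and average information (entropy); the function names are mine, not the commenter’s:

```python
import math

def self_information(p):
    """Self-information (surprisal) of a symbol with probability p, in bits."""
    return -math.log2(p)

def entropy(probs):
    """Average information per symbol (Shannon entropy) of a code, in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A question with N = 4 equally likely answers:
N = 4
uniform = [1 / N] * N
print(self_information(1 / N))  # 2.0 bits: each symbol resolves log2(N) bits
print(entropy(uniform))         # 2.0 bits on average

# If answers are not equally probable, likelier symbols carry less information:
skewed = [0.7, 0.1, 0.1, 0.1]
print(self_information(0.7))    # ~0.51 bits: the probable symbol informs little
print(entropy(skewed))          # ~1.36 bits on average, less than the uniform 2.0
```

Note how the skewed distribution has lower entropy than the uniform one – exactly the point that more probable symbols carry less information on receipt.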
And so in the case of DNA, if you take out the redundancy, the codons in a gene form a Code of 21 symbols. N = 21: one symbol for each of the 20 amino acids, plus a stop symbol marking the end of the protein under construction (the start is signalled by AUG, which also codes for methionine). The receiver of the information is the cell chemistry that performs the transcription/translation of the codons to produce amino acids.
. . .
You could find other messages in DNA, such as the non-coding genes that regulate the expression of the coding genes. That is more complicated as it forms a very complex combinatorial network. The 20 valued code is an easy one, though.
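Applying the friend’s Shannon framing to the codon example gives a quick back-of-the-envelope figure. This sketch assumes the standard genetic code (64 codons, 20 amino acids plus a stop signal) and equiprobable symbols, which real genomes are not:

```python
import math

# Raw codon alphabet: 4 bases in 3 positions -> 64 possible codons.
codons = 4 ** 3
print(math.log2(codons))    # 6.0 bits per codon before removing redundancy

# Collapsing synonymous codons: 20 amino acids + 1 stop signal -> 21 symbols.
symbols = 21
print(math.log2(symbols))   # ~4.39 bits per symbol actually conveyed

# The gap is the redundancy of the genetic code (under the equiprobable assumption):
print(math.log2(codons) - math.log2(symbols))  # ~1.61 bits per codon
```

Since amino-acid frequencies are not uniform in practice, the true average information per symbol would be somewhat lower than log2(21).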