What Is A Code?

Lots of heat surrounding this question.

My take is that a code must be a system for conveying meaning.

In my view, an essential feature of a code is that it must be abstract and able to convey novel messages.

DNA fails at the level of abstraction. Whatever “meaning” it conveys cannot be translated into any medium other than chemistry. And not just any abstract chemistry, but the chemistry of this universe.

Without implementing it in chemistry, it is impossible to read a DNA message. One cannot predict what a novel DNA string will do.

DNA is a template, not a code.

Go to it.

207 thoughts on “What Is A Code?”

  1. Allan Miller,

    Well, I do dispute that it is, internally, a ‘mapping’. It can be represented as such in a matrix. But it’s simply one AARS having specificity for a set of tRNAs, another for another, etc., which creates a pool of charged tRNAs covering most of the codon set. Those tRNAs have physical affinity for their complementary codon. One could analogise to a set of paint brushes with screwdriver ends – say Flat, Phillips and Hexagon. Three charging systems grope for their specific screwdriver ends and put paint on the other end – each only has one kind of paint, and can only accommodate one bottom end.

    Then these charged brushes dock into recesses in a pattern of screws and daub the paint on the other end onto a sheet of paper. Because of the physical limitation on docking, a given set of screws will always produce the same pattern on the sheet. But I don’t see the shape, either that on the brush end or its counterpart, as ‘representing’ the colour it ends up associated with, any more than the paint colour represents the shape.

    We can easily create a mapping from this. We can draw the shapes – we can even represent them symbolically in 2 dimensions – but that does not make the represented association a mapping.

    This is, I realise, simply a semantic exercise.

    I think it’s a bit more substantive than that. What you are so clearly elucidating is that the mapping is, in fact, a map. It is a mental model, not the actual biochemical territory. Confusing the map with the territory appears to be a common variant of IDCist equivocation, as several people have pointed out over the past too many years since the Wedge Document.
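
    To make the point concrete, the “map” we draw of the genetic code is literally a lookup table. Here is a minimal, purely illustrative Python sketch (abridged to a handful of the 64 codons; the codon assignments are the standard textbook ones, and everything else is just scaffolding for the illustration):

        # The "map": a lookup table we construct to describe the code.
        # The cell contains nothing like this; it only has charged tRNAs
        # with physical affinities (Allan's paint brushes, above).
        CODON_TABLE = {
            "AUG": "Met", "UUU": "Phe", "UUC": "Phe",
            "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
            "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
            # ... remaining 54 codons omitted
        }

        def translate(mrna):
            """Read an mRNA string three bases at a time until a stop codon."""
            peptide = []
            for i in range(0, len(mrna) - 2, 3):
                residue = CODON_TABLE.get(mrna[i:i + 3], "???")
                if residue == "STOP":
                    break
                peptide.append(residue)
            return "-".join(peptide)

        print(translate("AUGUUUGGCUAA"))  # Met-Phe-Gly

    The dictionary is our mental model; the ribosome consults no such table. That, I take it, is the map/territory distinction in one screenful.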

  2. BruceS: So a taxonomy of ID arguments on finding a complex structure:
    1. It is irreducibly complex: it could not evolve by known mechanisms.
    2. It has a function, and a function requires a designer.
    3. It has a meaning like a language, and a meaning requires an intelligent agent. Further, the meaning is there; it was not added by us.

    Yes, that seems to cover the waterfront. All thrown against the wall like pasta, with the devout prayer for soundness somewhere.

  3. Patrick: I think it’s a bit more substantive than that. What you are so clearly elucidating is that the mapping is, in fact, a map. It is a mental model, not the actual biochemical territory. Confusing the map with the territory appears to be a common variant of IDCist equivocation, as several people have pointed out over the past too many years since the Wedge Document.

    This is my objection to calling DNA a code.

    IDists attempt to apply a syllogism where none is justified.

    If DNA is a code, and codes are designed, then DNA is designed.

    Not unlike the following:

    The Old Man of the Mountain is a stone face; [all] stone faces are carved by humans; therefore the Old Man of the Mountain was carved by humans.

    The conclusion is reached by assuming [all].

    This is beyond silly. It is holding-your-breath-till-you-turn-blue infantile.

    The [all] is the very thing you are trying to prove. They are slipping the conclusion into the assumptions.

    It is not unlike Behe’s formulation of the Edge of Evolution, or Dembski’s explanatory filter.

    All of them assume their conclusion.

    I don’t really care whether you call DNA a code. If you call a tail a leg, how many legs does a dog have? Enough word games.

  4. petrushka,

    When pressed, the IDist will resort to the weaker argument:

    (1) We know from past experience that codes are designed.
    (2) The genetic code is a real code.
    (3) Therefore, it seems likely that the genetic code was designed.

    The problem with this version of the argument is that (3) is no more than a hypothesis to be tested. The correct move is to then ask, “how can we figure out whether or not the genetic code was designed?” But instead of doing this, IDists assert that the burden of proof falls on the “naturalists” to show that non-intelligent processes can generate codes. Hence the endless debate about which side is shirking the burden of proof — IDists insist that we are, we insist that they are, on and on and on.

  5. Kantian Naturalist: But instead of doing this, IDists assert that the burden of proof falls on the “naturalists” to show that non-intelligent processes can generate codes.

    But that’s the goal of OOL researchers. They have taken up that burden.

    I once pointed out to Kairosfocus that several hundred years elapsed between Galileo’s discussion of gravity and Einstein’s general relativity (not to mention that Einstein is not the last word).

    I was instantly banned.

    But Newton and Einstein are still more useful descriptions of planetary motion than angels pushing.

    As evolution is a more useful notion than design.

    Evolution is useful, even in the absence of a pathetically detailed history of OOL.

  6. petrushka: But that’s the goal of OOL researchers. They have taken up that burden.

    Yes, and some of the suggestions are quite promising.

    ID is, at bottom, nothing more than a bet — a bet that there is no viable theory of abiogenesis that does not posit an intelligent being at some point in the process.

    For quite a while I’ve maintained that the true rival to design theory is not Darwinism or any version of post-Darwinian evolutionary theory, but complexity theory, or the theory of self-organizing systems.

    If one situates Varela and Maturana’s autopoiesis theory of life against the background of Prigogine’s physics of dissipative structures, the conceptual barriers to abiogenesis dissolve. Then it becomes a matter of simply understanding that all one needs for a living system is an autocatalytic network contained within a semipermeable membrane.

    That’s not to say that we actually know how that came about, or that we will ever know. Maybe we will, and maybe we won’t.

    But it is to say that neither the emergence of autocatalytic networks in an environment ‘friendly’ to dissipative structures, nor the emergence of the organism-environment relation based on containing the autocatalytic network within a semi-permeable membrane, seems to call out for the intervention of an intelligent being.

    In that light, design theory is a bad bet.

  7. Kantian Naturalist:

    For quite a while I’ve maintained that the true rival to design theory is not Darwinism or any version of post-Darwinian evolutionary theory, but complexity theory, or the theory of self-organizing systems.

    If one situates Varela and Maturana’s autopoiesis theory of life against the background of Prigogine’s physics of dissipative structures, the conceptual barriers to abiogenesis dissolve. Then it becomes a matter of simply understanding that all one needs for a living system is an autocatalytic network contained within a semipermeable membrane.

    My concern would be this: OOL research as discussed in Lane’s books linked above is a scientific research program with deep attention paid to the biochemistry and thermodynamics of early earth as well as the implications of the physical fossils and their locations. It uses many evolutionary concepts.

    But, as best I can tell, the Varela stuff is better considered as a philosophical viewpoint which incorporates some high-level generalities from mathematics, but few specific, testable models. If there is an active scientific research program, it is relatively minor.

    Is that fair?

  8. I will simply explore the limb I have climbed out on.

    There is no analog to chemistry. There is no layer of abstraction whereby one can simulate novel properties of biological molecules.

    Unless this changes, design is cut and try. Evolution.

    I find it interesting that cut and try is becoming useful in design of complex electronics, things like circuit board layout.

    And management of power grids.
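
    For what it’s worth, “cut and try” in its barest form is just a mutate-test-keep loop. A toy sketch, entirely my own illustration; score() and mutate() here are stand-ins for whatever test and variation the real system uses (a circuit simulation, a grid-load model, a fitness assay):

        import random

        def cut_and_try(candidate, score, mutate, steps=10_000):
            """Keep a proposed change only if testing shows it is no worse."""
            best, best_score = candidate, score(candidate)
            for _ in range(steps):
                trial = mutate(best)
                trial_score = score(trial)
                if trial_score >= best_score:
                    best, best_score = trial, trial_score
            return best

        # Toy instance: evolve a bit-string toward all ones.
        n = 40
        score = lambda bits: sum(bits)
        mutate = lambda bits: [b ^ (random.random() < 1 / n) for b in bits]
        print(sum(cut_and_try([0] * n, score, mutate)))  # typically reaches 40

    No layer of abstraction, no foresight; just variation and a test.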

  9. BruceS: But, as best I can tell, the Varela stuff is better considered as a philosophical viewpoint which incorporates some high-level generalities from mathematics, but few specific, testable models. If there is an active scientific research program, it is relatively minor.

    Is that fair?

    From what I can tell, yes. I was only talking at a conceptual level — that autopoiesis + complexity theory helps remove the conceptual barrier to abiogenesis.

    Whether we’ll figure out the empirical details of the causal process is another question. I haven’t read Lane’s book, or read any recent work in abiogenesis. If you recommend it I’ll put it on my reading list for a holiday.

  10. BruceS: I don’t accept the god-of-the-gaps argument. Why is your argument different?
    (One answer might be to show the problem is NP-complete. Perhaps it has been shown to be that?)

    Out of curiosity I googled protein folding and NP completeness.

    All the results I found assume that protein folding is NP complete. Several papers assert this is proven.

    http://arxiv.org/pdf/1306.1372v1.pdf

  11. BruceS,

    I hope I am not seeming to argue for excessive reductionism. The point about the genetic code is that it operates at a level where electrostatic interactions are very much a dominant force. That is not true for bee dances or many of our interactions, even if there is some level of grounding in the ‘electrostatic’ world. Such things have a latitude that molecular interactions do not.

  12. Kantian Naturalist,

    I haven’t read Lane’s book, or read any recent work in abiogenesis. If you recommend it I’ll put it on my reading list for a holiday.

    Lane’s an enviably good writer. Makes biochemistry seem interesting … ! 🙂 Steers a good course between accessibility and avoiding dumbing-down. I’d recommend Oxygen and Power, Sex, Suicide (which interconnect, though stand alone too).

  13. Erik,

    Fair enough. I had in mind some exception for analytic statements, but of course even those must be parsed.

  14. Kantian Naturalist: From what I can tell, yes. I was only talking at a conceptual level — that autopoiesis + complexity theory helps remove the conceptual barrier to abiogenesis.

    Whether we’ll figure out the empirical details of the causal process is another question. I haven’t read Lane’s book, or read any recent work in abiogenesis. If you recommend it I’ll put it on my reading list for a holiday.

    Don’t get me wrong: dissipative systems, far-from-equilibrium thermodynamics, specific DST models, and self-organizing systems are all part of OOL, at least in how Lane explains the research program in his latest book The Vital Question. Those ideas are mainstream science, I think, not a new paradigm.

    What is more controversial is whether DST is an alternative to representationalism as updated by models like the predictive approach Clark favors. I suspect the two are complementary, not in competition, with the right kind of computational and representational models showing implementation and explanatory mechanisms which underlie the success of specific DST models, which I think tend to be more like predictive and descriptive models.

    This interview with Habermas may interest you.

  15. Kantian Naturalist

    I haven’t read Lane’s book, or read any recent work in abiogenesis. If you recommend it I’ll put it on my reading list for a holiday.

    Yes, I (and all the reviewers I have read) recommend Lane’s latest, The Vital Question, although as I have said in other posts I found some of the details tough going. His earlier Ten Inventions of Evolution is also good and an easier read, but has a much less detailed view of the OOL stuff.

    One point I found fascinating: Based on his models, Lane thinks the conditions to produce simple life are likely to be widespread in the universe, but complex life, i.e. eukaryotes, arose by a fluke and may hence be very, very rare.

  16. BruceS: One point I found fascinating: Based on his models, Lane thinks the conditions to produce simple life are likely to be widespread in the universe, but complex life, i.e. eukaryotes, arose by a fluke and may hence be very, very rare.

    That will make conquest easier.

    Or perhaps harder.

  17. petrushka: Out of curiosity I googled protein folding and NP completeness.

    All the results I found assume that protein folding is NP complete. Several papers assert this is proven.

    http://arxiv.org/pdf/1306.1372v1.pdf

    I only looked at the abstract of the linked one and the last paragraph, which I read as saying that current proofs (as of that 2013 paper) failed because they were not biologically correct. However, the authors did say they believed the problem is NP-complete; it was just that a biologically valid proof was still lacking.

    My go-to guy for NP stuff is Scott Aaronson, but the only reference I could find of his was this one, where he mentions that perhaps protein folding might somehow be used to solve NP-complete problems, which I believe implies that it too must be an NP-complete problem. But he concludes that it would be impossible in practice to use protein folding that way.
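
    For anyone curious what the computational problem even looks like: the hardness results are generally stated for simplified lattice models rather than real biophysics, which seems to be the objection that paper raises. A toy brute-force sketch of folding in the 2D HP model (my own illustration, not taken from the linked paper): the search space grows roughly as 4^(n-1) conformations, which is why nobody folds proteins by enumeration.

        from itertools import product

        def best_fold(seq):
            """Enumerate self-avoiding walks on a square lattice and return
            the maximum number of non-adjacent H-H contacts (the HP 'energy')."""
            moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
            best, n = -1, len(seq)
            for choice in product(range(4), repeat=n - 1):
                pos = [(0, 0)]
                for m in choice:
                    nxt = (pos[-1][0] + moves[m][0], pos[-1][1] + moves[m][1])
                    if nxt in pos:        # the walk must be self-avoiding
                        break
                    pos.append(nxt)
                else:
                    contacts = sum(
                        1
                        for i in range(n) for j in range(i + 2, n)
                        if seq[i] == seq[j] == "H"
                        and abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1]) == 1
                    )
                    best = max(best, contacts)
            return best

        print(best_fold("HPHPPHHPH"))  # fine for 9 residues; hopeless for 100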

  18. BruceS: Don’t get me wrong: dissipative systems, far-from-equilibrium thermodynamics, specific DST models, and self-organizing systems are all part of OOL, at least in how Lane explains the research program in his latest book The Vital Question. Those ideas are mainstream science, I think, not a new paradigm.

    Ooh, now I’m really interested in Lane’s book!

    What is more controversial is whether DST is an alternative to representationalism as updated by models like the predictive approach Clark favors. I suspect the two are complementary, not in competition, with the right kind of computational and representational models showing implementation and explanatory mechanisms which underlie the success of specific DST models, which I think tend to be more like predictive and descriptive models.

    I think that Clark (and also Michael Wheeler) would say that embodied-embedded dynamicism and neurocomputationalism are complementary. I myself have come to think that the anti-representationalism of enactive cog sci is actually not well-grounded. Mark Rowlands (in The New Science of the Mind) claims that Gibsonian affordances could be representational. I haven’t yet read Gibson, so I can’t respond to that claim directly, but it’s certainly very interesting.

    We can take these issues once I’ve finished Clark’s Surfing Uncertainty — hopefully by the end of next week. I finished Retrieving Reality today.

    This interview with Habermas may interest you.

    Definitely! Thank you for that!

  19. BruceS: One point I found fascinating: Based on his models, Lane thinks the conditions to produce simple life are likely to be widespread in the universe, but complex life, i.e. eukaryotes, arose by a fluke and may hence be very, very rare.

    That certainly fits with my intuitions. I suspect that intelligence and consciousness are adaptations that are exceedingly rare in the Universe — if indeed they exist anywhere besides Earth.

  20. I see Upright Biped makes an appearance at Uncommon Descent in Eric Anderson’s thread on David Reznick’s work on guppies. (Dawkins highlighted his work in The Greatest Show on Earth.)

    In this comment UB remarks:

    In a genuine translation system (like protein synthesis) the product of the system is not determined by the physical properties of the representation being translated.

    Templates, Upright Biped! You need to grasp the idea of templates and how biological molecules interact in an aqueous medium. They don’t talk to each other; they bind to each other depending on rules of physics and chemistry, not semiotics.

    PS what’s happening to the web site? It must be four years now and still under construction?

  21. Steve: DNA is a library, not a language, not a code.

    With its “overlapping multi level codes” (™ BA77) what sort of library would that be then? One where the books are connected together randomly and you can only check one out by checking all associated books out? Or a library where reading one book changes the meaning of all the others?

    Do go on.

  22. Alan Fox: PS what’s happening to the web site? It must be four years now and still under construction?

    Perhaps the world is just not ready for the undiluted genius that is UB.

  23. BruceS,

    One point I found fascinating: Based on his models, Lane thinks the conditions to produce simple life are likely to be widespread in the universe, but complex life, i.e. eukaryotes, arose by a fluke and may hence be very, very rare.

    Yes, that’s my take too. Life was prokaryotic for 2 billion years, and had no compelling need to be anything else. Certain contingencies allowed for two kinds of genome-fusion: endosymbiosis and sex (haploid-diploid alternation). Many of the complexities of ‘big life’ on earth are a consequence of the change in dynamics that these two modes of genome-mingling wrought.

  24. Alan Fox:
    UB:
    In a genuine translation system (like protein synthesis) the product of the system is not determined by the physical properties of the representation being translated.

    AF: Templates, Upright Biped! You need to grasp the idea of templates and how biological molecules interact in an aqueous medium

    I could interpret UB’s statement as saying a general translation system involves following rules whereas DNA biochemistry on its own is a pure causal system which just unfolds according to laws of nature.

    Following rules is special because it means it is possible to make a mistake even though the mistake itself was a product of a causal process. See my exchange with Allan Miller starting here.

    Just to show how important “following rules” is, note that people have made philosophical careers writing about what Wittgenstein meant by the phrase and whether Kripke got it right.

  25. Kantian Naturalist,

    That certainly fits with my intuitions. I suspect that intelligence and consciousness are adaptations that are exceedingly rare in the Universe — if indeed they exist anywhere besides Earth.

    That’s interesting. My intuition is exactly the opposite, based primarily on the principle of mediocrity. Well, that and a passion for science fiction.

  26. Alan Fox,

    UB remarks:

    In a genuine translation system (like protein synthesis) the product of the system is not determined by the physical properties of the representation being translated.

    I no longer give UD traffic. Does Upright Biped explain what part of protein synthesis is not determined by the physical properties of the system?

  27. BruceS: Just to show how important “following rules” is, note that people have made philosophical careers writing about what Wittgenstein meant by the phrase and whether Kripke got it right.

    A note on my take on “following a rule.”

    I distinguish between mechanistic rule following and purposeful rule following. I take Wittgenstein to have been talking about purposeful rule following. Rule following in mathematics is mostly mechanistic rule following, and clearly such rule following is possible.

    When teaching calculus, we teach students to decide when to use integration by parts. But the rules for that involve purposeful rule following. Deciding which “trick” to use isn’t something that can be easily mechanized. And students have trouble with learning those rules, while they can generally manage the mechanistic rules of doing addition.

    Oh, and it is purposeful rule following that requires intelligence. The mechanistic rule following done by computers is not anything that I would consider intelligent.
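
    To illustrate the split with a standard example (my example, not part of Neil’s comment): the rule itself is the mechanical part,

        \int u \, dv = uv - \int v \, du ,

    and applying it is mechanical once u and dv are chosen. The purposeful part is the choice. For \int x e^x \, dx, picking u = x and dv = e^x dx gives x e^x - \int e^x \, dx = x e^x - e^x + C, and we are done; picking them the other way round yields \int (x^2/2) e^x \, dx, which is worse than the integral we started with. A computer can apply the formula either way; deciding which split (or which trick) to try is the part students struggle with.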

  28. Steve:
    DNA is a library, not a language, not a code.
    Problem solved.

    Yes. As Wagner says, it’s The Library of Babel.

  29. Neil Rickert: Oh, and it is purposeful rule following that requires intelligence. The mechanistic rule following done by computers is not anything that I would consider intelligent.

    I would argue that purposeful rule following is just following evolved rules rather than formal rules.

    At one level this is a distinction without a difference, but the distinction I wish to make is that purpose cannot (easily) be analyzed or reduced.

    Earlier, I asked if we could design a molecule that would modify birds’ migratory behavior.

    There are evolved things whose mechanisms and behaviors do not make sense in traditional engineering terms. We have some examples of evolved electronic circuits that could not have been designed using textbook rules.

    Our past understanding of causation was modeled as billiard ball interactions.

    Evolution produces networks of causes. Penrose seems to think this involves quantum entanglement, but I think network causation is just too complex to analyze. It’s an emergent form of causation.

  30. I googled network causation and found that it is an established field. The term is new to me.

  31. Patrick: I no longer give UD traffic.

    I’ve never managed the 12 steps. Though I do note that Alexa reports interesting statistics in comparing TSZ to UD.

    Does Upright Biped explain what part of protein synthesis is not determined by the physical properties of the system?

    Well, he makes the point that aminoacyl-tRNA synthetases that charge the tRNA molecules with their appropriate amino acid residues have an arbitrary relationship to amino acids and their codons. One could theoretically use a whole new swathe of aa tRNA synthetases, and one would then need a different codon sequence to produce the same protein sequence. As far as it goes, it’s a fair point.

    ETA but it’s still templates all the way down!

  32. Kantian Naturalist:

    (3) Therefore, it seems likely that the genetic code was designed.

    The problem with this version of the argument is that (3) is no more than a hypothesis to be tested. The correct move is to then ask, “how can we figure out whether or not the genetic code was designed?” But instead of doing this, IDists assert that the burden of proof falls on the “naturalists” to show that non-intelligent processes can generate codes. Hence the endless debate about which side is shirking the burden of proof — IDists insist that we are, we insist that they are, on and on and on.

    Agreed.

    What can be formally falsified is whether such a system is exceptional or not. Short of the Designer actually showing himself (like seeing factory workers at work) it is only a circumstantial claim and educated guess. Trying to argue the inference is immutable and infallible — that only weakens the force of the argument.

    Some discovery might demonstrate that a not-so-remote special circumstance can suddenly invalidate our assumptions of how things can form spontaneously and naturally.

    If it is agreed that the phenomenon is indeed exceptional, then it raises the question whether any phenomenon could be exceptional enough to warrant a design inference or miracle or whatever. That was the issue raised here:

    Miracle or Privileged Observation?

    I argued that no one can possibly have a formally right or wrong answer; they can only make their best guess.

    My ID colleagues take exception to me saying this, but I respond, “you guys insist you are right when you have no direct observation of the designer, no experiments that give a direct observation of the designer, you can’t possibly raise it to the same level of intuitive believability of man-made designed objects. You don’t help your case by insisting it is obvious. It may be true, but let’s not pretend the case is obvious and that there isn’t a small element of faith at the core of the claim.”

  33. Alan Fox,

    Does Upright Biped explain what part of protein synthesis is not determined by the physical properties of the system?

    Well, he makes the point that aminoacyl-tRNA synthetases that charge the tRNA molecules with their appropriate amino acid residues have an arbitrary relationship to amino acids and their codons. One could theoretically use a whole new swathe of aa tRNA synthetases, and one would then need a different codon sequence to produce the same protein sequence. As far as it goes, it’s a fair point.

    ETA but it’s still templates all the way down!

    If there were a different set of aa tRNA synthetases would he just use that as “evidence” of design? Every step in the pathway is still just chemistry. It seems he’s missing the step that comes before the conclusion of “Therefore, it was designed.”

  34. Patrick,

    If I were an ID strategist, I’d point my big guns at OOL. It’s still “God of the gaps”, but there are many open questions and too few answers. Of course, a much better strategy would be to come up with alternative testable hypotheses. All-in-all, I’m happy not to be an ID strategist.

  35. Being able to produce a consistent phenotype is likely to be reproductively advantageous – no point in duplicating your DNA effectively but not your phenotype – so even if the initial mapping was highly ambiguous, the more specific each codon was to an amino acid, the more reliable the phenotype would tend to be – so there’s fairly clear selective pressure for a set of tRNA molecules that do not specify more than one amino acid for one codon.

    There’s no selective pressure for a set in which more than one codon specifies one amino acid, though – that could make the system more robust.

    But the redundancy needs to be that way round for reproductive reliability.

    So the arbitrariness of the mapping doesn’t impress me – a first-past-the-post system would tend to generate an arbitrary mapping, because if having two tRNAs that map one codon to two different amino acids generates unreliable offspring, then an offspring that only produces one of them – it doesn’t matter which – will tend to have a more reliable lineage.

    Back to the evolution of evolvability! Selection can act on mutation rates themselves, but also on phenotypic fidelity.
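
    The “first-past-the-post” intuition is easy to caricature in code. A toy sketch (entirely my own construction, with made-up parameters, not a model of anything in the literature): a codon is initially read ambiguously as amino acid A or B; lineages that happen to lose one of the two competing tRNAs translate more reliably and out-reproduce the ambiguous ones, and which acid ends up dominating is essentially a coin flip.

        import random

        def simulate(pop_size=500, generations=100, protein_len=100):
            # Each lineage is the set of amino acids its tRNAs assign to the codon.
            pop = [{"A", "B"} for _ in range(pop_size)]
            for _ in range(generations):
                offspring = []
                for lineage in pop:
                    # An ambiguous codon is mistranslated half the time per site.
                    p_correct = 1.0 if len(lineage) == 1 else 0.5
                    copies = 2 if random.random() < p_correct ** protein_len else 1
                    for _ in range(copies):
                        child = set(lineage)
                        # Rare loss of one tRNA resolves the ambiguity, either way.
                        if len(child) == 2 and random.random() < 0.01:
                            child.discard(random.choice(sorted(child)))
                        offspring.append(child)
                pop = random.sample(offspring, pop_size)
            resolved = sum(len(lineage) == 1 for lineage in pop)
            a_fixed = sum(lineage == {"A"} for lineage in pop)
            return resolved, a_fixed

        print(simulate())  # ambiguity disappears; how it resolves is historical accident

    Nothing hangs on the numbers; the point is only that a selective premium on reliable translation, plus an arbitrary symmetry-breaking event, gets you a frozen but arbitrary-looking assignment.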

  36. Elizabeth: …there’s fairly clear selective pressure for a set of tRNA molecules that do not specify more than one amino acid for one codon.

    Indeed, and as Allan Miller has often pointed out, one could envisage early organisms operating on a smaller set of amino acids.

  37. stcordova: …when you have no direct observation of the designer, no experiments that give a direct observation of the designer, you can’t possibly raise it to the same level of intuitive believability of man-made designed objects. You don’t help your case by insisting it is obvious. It may be true, but let’s not pretend the case is obvious and that there isn’t a small element of faith at the core of the claim.

    It was a poorly-thought-out strategy when Of Pandas and People was repackaged after Edwards v. Aguillard, and a poorly-thought-out strategy to pursue at Dover (though William Dembski, Stephen Meyer and John Campbell wisely withdrew). Why anyone persists with the claim that ID really, really is science is beyond my comprehension. On the other hand, I have no problem with the unfalsifiable claim of a creator god. In itself, the idea is harmless. The extension of that into a claim of moral authority is not so harmless.

  38. Alan Fox,

    Indeed, and as Allan Miller has often pointed out, one could envisage early organisms operating on a smaller set of amino acids.

    Huh! I was just about to point this out … again! 😉

    It all depends what early proteins were used for. People are bamboozled by lengthy multi-acid modern enzymes into thinking that’s the minimum spec for a useful protein.

  39. petrushka:
    I googled network causation and found that it is an established field. The term is new to me.

    You mean as in Bayesian networks? (e.g. if my lawn is wet, either the sprinkler was on or it rained, with attached probabilities?) Or as in multiple causes, like: the catastrophic forest fire was caused by the long dry spell and the lightning strike. Or something else?

    In philosophy, just causation is enough to get some people excited (ETA*: hot and bothered). Network stuff just makes it worse (or better, depending on your point of view).

    ———–
    * Maybe wrong kind of excitement before ETA. Maybe.
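
    Since the sprinkler case is the canonical toy example, here is what it looks like with the probabilities actually attached (numbers invented purely for illustration; only the structure matters). It is a sketch of inference by enumeration over the rain/sprinkler/wet-grass network:

        P_RAIN = 0.2
        P_SPRINKLER = {True: 0.01, False: 0.4}            # sprinkler less likely if raining
        P_WET = {(True, True): 0.99, (True, False): 0.9,  # keyed by (sprinkler, rain)
                 (False, True): 0.8, (False, False): 0.0}

        def p_rain_given_wet():
            """P(rain | wet grass): sum out the hidden sprinkler variable."""
            num = den = 0.0
            for rain in (True, False):
                for sprinkler in (True, False):
                    p = ((P_RAIN if rain else 1 - P_RAIN)
                         * (P_SPRINKLER[rain] if sprinkler else 1 - P_SPRINKLER[rain])
                         * P_WET[(sprinkler, rain)])
                    den += p
                    if rain:
                        num += p
            return num / den

        print(round(p_rain_given_wet(), 3))  # about 0.36 with these numbers

    Whether that sort of bookkeeping deserves to be called “network causation” rather than plain probabilistic inference is, I suppose, part of what gets people hot and bothered.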

  40. UB@UD.

    In a genuine translation system (like protein synthesis) the product of the system is not determined by the physical properties of the representation being translated.

    Is it a ‘genuine translation system’? What would a pseudo-translation system look like?

    The amino acid and the anticodon are physically separated, at opposite ends of the charged tRNA molecule, but also physically joined by it. The chemical form of each is independent of the other, and assignments could be swapped (with some difficulty, because you’d have to retool the active sites of two AARSs to take each other’s shape). But the fact that there is (probably) no necessary association between anticodon XXX and amino acid @ does not lead to any particularly compelling conclusions regarding the evolution of that association. It’s like finding that mushrooms are always next to tomatoes on a kebab, with the only possible explanation being intentional choice. A little imagination could come up with others.

    There is no necessary (ie chemically compelled) association between a promoter/repressor and its gene either. But then again, I suppose that would be ruled ‘Programming therefore Design’ in the Court of Bad Analogy.

    As a side note, there are two classes of AARS, splitting the set between them. They appear to have evolved independently, but within each class, all appear related. One of the ‘chicken-and-egg’ difficulties often presented is that AARS enzymes are protein, but it certainly appears that AARSs have evolved. All that is logically required is a primitive non-protein AARS whose function was taken over by first one then another protein version as the acid set and catalytic repertoire expanded. If one regards the commonalities within one AARS type as due to Common Design, one might wonder why there are two different designs for doing precisely the same thing.

  41. Allan Miller:

    Is it a ‘genuine translation system’? What would a pseudo-translation system look like?

    I think it is the same issue you and I had an exchange about.

    A natural language has meaning independent of the physical medium it appears in: spoken, written, bits on the internet, and so on.

    But a purely causal physical system does not. We cannot talk about how a system should behave under gravity, only how it does behave given the physical parameters.

    So is the genetic code
    (1) a language which happens to have a DNA implementation but with a meaning independent of that, or
    (2) a biochemical physical system whose current operation and past development are only causal interactions depending on physical parameters?

    I’ll take door (2), Monty.

  42. BruceS,

    Biped has been pursuing this representational argument for years, but he does nothing to show that this actually is a ‘genuine translation system’. Of course it is routinely called translation, a convention I am more than happy to adopt (and could hardly be understood if I picked my own private convention).

    His answer to the logical possibility that the system had simpler precursors and non-protein versions of components is ‘demonstrate it’ – a dodge, in my view, since he advances a purely logical argument but then demands an empirical demonstration of the counter-argument.

  43. Neil Rickert

    Oh, and it is purposeful rule following that requires intelligence. The mechanistic rule following done by computers is not anything that I would consider intelligent.

    I’m not sure if you are using rule-following in the same way Wittgenstein (according to Kripke) means it, and so I am not sure how the purposeful versus mechanistic answer applies.

    I did say no more philosophy earlier in the thread, but it’s the weekend so I am going to just ignore that earlier commitment.

    As I understand the philosophical issue
    1. Rule following (as opposed to rule obeying) means there is a possibility of error. (This relates to language since if meaning relates to how an expression should be used, there must be rule following).

    2. But nothing inside of us, eg in our mental models, can determine the rule to follow for future cases. For all we have is past behavior and this is finite. Being finite, it under-determines what we should do. (This issue is not about what we actually do in any given case).*

    3. Kripke’s answer: It is correct that rule following is not something that comes from within us. Instead, we learn what we are warranted in asserting from growing up in a language community that corrects wrong usage. But that explanation has issues: What about a community determines “should” for it? What if there are different sub-communities where different justifications apply?

    4. What Wittgenstein thought was the right response is controversial (surprise!). One line of thinking is that he thought the whole question was another example of philosophical confusion. Rules are not like signposts which require interpretation. Minds can reach forward to make judgements about future cases. That is what minds do. But he does not try to explain how they do it.

    One way to address the how-minds-do-it issue is to try a naturalist answer: basically a variation of the teleosemantic approach that has come up in other parts of the thread. So that would involve evolution.

    I don’t know how that line of thinking relates to your post.

    ————————–
    * As an IT person, when I first read Kripke I thought the answer is obvious: we don’t apply a finite set of rules, we follow an algorithm. But that does not help. It just moves the goal posts without addressing the core issue. For following an algorithm itself requires following rules.

  44. BruceS: I’m not sure if you are using rule-following in the same way Wittgenstein (according to Kripke) means it, and so I am not sure how the purposeful versus mechanistic answer applies.

    I think I am making roughly the same distinction as Kripke, though I don’t like his way of putting it.

    A mechanism does as the mechanism does. So, in this sense, it cannot be wrong. However, I used “mechanistic” rather than “mechanical” for a reason.

    We can and do follow rules in mathematics. Basically, we do this by defining a logical mechanism which we emulate. There does not seem to be a difficulty with following rules in that way. However, it is possible to make a mistake (be in error) when doing that.

    Most rules in ordinary life are such that we perceive what we are doing, and use our perception to monitor whether we are following what we take to be the rule. That’s what I am calling “purposeful rule following.” And that’s where there is uncertainty as to what the rule really means (which I take to be Wittgenstein’s point). The trouble with couching it in terms of error is that this presupposes some sort of background of truth. And I’d prefer not to depend on that. After all, determining what counts as true is something like the kind of rule following that is at issue.

    I’m also not a fan of Kripke’s view of meaning (again, because it is tied to truth).

  45. Allan said:

    His answer to the logical possibility that the system had simpler precursors and non-protein versions of components is ‘demonstrate it’ – a dodge, in my view, since he advances a purely logical argument but then demands an empirical demonstration of the counter-argument.

    Your reasoning is flawed. There is no reason to argue against something offered as a bare logical possibility. Currently, the only commodity we know of that can construct semiotic systems is intelligence. We have zero information indicating anything else can construct such a system.

    Offering a bare logical possibility is not a rebuttal to the argument UB presents.
