Upright BiPed’s Semiotic Argument for Design

In a post here at Uncommon Descent, Upright Biped makes what he calls his Semiotic Argument for Design, which he has been challenging me to refute for some time now, but which I have been struggling to understand.  So it was good to see it summarised in one place, and I’d like to take a look at it piece by piece, and with your help, try to figure out what he’s getting at (I’m assuming he’s a he, which I don’t normally do, but I think he said something once that implied he was).

It’s a response to Larry Moran who dropped by from his Sandwalk blog to talk about onion genomes, but we don’t have to worry about onions too much, I don’t think, as UBP is making a more fundamental claim.

He says to Larry:

In your comments you refer to the use of the term “information” within nucleic sequences as a useful analogy, and you say that there is no expectations that it should “conform to the meanings of “information” in other disciplines.” I certainly agree with you that it conforming to other meanings would be a telling turn of events. And I assume your comment suggests that the nucleotide sequence isn’t expected to share any of the same physical characteristics as other forms of information – given that we live in a physical universe where information has physical effects. Ones which we can observe.

So UBP’s starting point seems to be that the “information” we say a genome contains is not different from “information” in other senses.

He says:

I think it makes an interesting comparison; the comparison between the physical characteristics of information transfer in the genome, versus information transfer in other forms. Just recently on this forum we were having a conversation about recorded information, and a question arose if a music box cylinder ‘contained information’. Speaking to its physical characteristics, the answer I gave was “yes”. Just like any other form of recorded information, the pins on a music box are an arrangement of matter to act as a representation within a system. No differently than ink on paper, or the state of a microprocessor, or the lines left on a recording tape, or an ant’s pheromones, or the tone of vibrations we make when we speak; they are all matter/energy arranged in order to represent an effect within a system.

A musical box is an interesting example because, unlike some other information transfer systems (such as the pixels on the screen you are viewing right now, from which you are receiving information about the contents of Upright BiPed’s post on UD), the sequence of pins on a music box cylinder is actually instrumental in making something else, in this case a melody. It thus bears a closer homology to the sequence of base pairs on a DNA molecule which, by a series of physical operations, results in the making of a protein.

He goes on to say:

It was also pointed out that a physical arrangement of matter (like the pins on a music box cylinder) cannot by themselves convey information – they require a second coordinated physical object. This second object is easily referred to as a protocol, but physically its is a rule (a protocol) established in a material object. The necessity of this physical protocol is something easily understood; for one thing to represent another thing within a system, it must be separate from it, and if it is truly a separate thing, then there must be something to establish the relationship that exist between the representation and the effect it is to represent (within that system). That is what the second physical object accomplishes, it establishes the relationship between a representation and the effect it represents, which is a relationship that otherwise wouldn’t exist.

UBP claims the music box represents “recorded information”, which implies that the information started elsewhere and was “recorded” on to the music box. However, I think he is making the point that without a mechanism to get the “recorded” music box back into the form of music again, the information isn’t truly recorded, which seems fair enough. After all, if I translated my post into unbreakable code, it wouldn’t really be recorded information because there would be no way of getting the information back out again. So in the music box example, the music box is a way of “recording” a piece of music composed by someone, and getting that piece of music back out again, at a different time. A phonograph recording (the old wax cylinder kind) would be an even better example.
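
To make the representation/protocol/effect structure concrete, here is a minimal toy sketch (everything in it – the pin positions, the note names, the mapping – is invented purely for illustration): the cylinder’s pin pattern plays the role of the arrangement of matter, the comb is the “protocol” that maps pin positions to pitches, and the melody is the resulting effect.

    # Toy music box: the cylinder's pin pattern is the "representation",
    # the comb is the "protocol" mapping pin positions to pitches,
    # and the melody that comes out is the "effect".
    COMB = {0: "C4", 1: "D4", 2: "E4", 3: "G4"}   # hypothetical comb tuning

    cylinder = [0, 2, 2, 3, 1, 0]                 # pin positions on the drum

    def play(cylinder, comb):
        """Read the pin pattern through the comb and return the melody."""
        return [comb[pin] for pin in cylinder]

    print(play(cylinder, COMB))   # ['C4', 'E4', 'E4', 'G4', 'D4', 'C4']

Swap in a differently tuned comb and the same pin pattern yields a different melody, which is the sense in which the pin pattern alone, without the protocol, fixes no particular effect.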

Next he says:

There have been examples of this dynamic given in previous conversations. For instance, an apple is an apple, but the word “apple” is a separate thing altogether. Being a separate thing from the apple, there must be something that establishes the relationship between the two. In the case of the word “apple” we as humans have learned the protocols of our individual languages, and they physically exist as neural patterns within our brains. These neural patterns are material things, and they establish the immaterial relationship between a physical representation and its physical effect.

Hmmm. OK, we have an apple – is the apple the information that is recorded? Not really. When we say “apple” we are not “recording” an apple, although we may be “recording” the fact that we are thinking about an apple. Let’s say that’s it – I am the original source of the information that I have seen an apple. I wish to transmit that information, so I record it in the sound pattern: apple. That, presumably, is the equivalent of making a cylinder with a sequence of pins. But the sound is useless in conveying the information that I have seen an apple unless my listener has some way of turning my thought “there’s an apple” into her thought “Lizzie has seen an apple”. UBP proposes that the mechanism by which that translation occurs is analogous to the music-box mechanism – the sequence of sounds, like the sequence of pins on the music box cylinder, impinges on the hearer’s ears and is translated into neural impulses which, after a great deal of processing, evoke the thought “Lizzie has seen an apple”.

No, I don’t think that works. For a start, “Lizzie has seen an apple” is not a recording of Lizzie’s thought, it is an inference about what Lizzie was thinking. Lizzie’s hearer has indeed received information from Lizzie, but not exactly the information that Lizzie sent. So it seems to me that language is a very different kind of information transfer system from a phonograph, or a musical box, or, indeed, a reproducing organism. In these three examples, we start with a physical pattern of some kind (a performance, a melody, an organism) and we end with a recreation of that physical pattern (a rendering of that performance, the sound of that melody, a second organism). In the case of language we do not. There is no protocol that can create an apple from the word “apple”, although uttering the word may induce someone else to go fetch one. This, it seems to me, is because the word “apple” is a symbol, or, in Saussure’s terms, a signifier that is linked to a signified (aka referent), in this case a specific kind of fruit. UBP appears to want to say that this linkage between signifier (word) and signified (fruit) is equivalent to the link between the sequence on a music box cylinder and the melody that emerges, and that therefore the music box cylinder (and therefore a base sequence in a polynucleotide too) is a symbol for the sound pattern that emerges from the music box in the same way as the word “apple” is a symbol for an actual apple.

But it clearly is not. When I say the word “apple”, and you hear it, no apple is created, though you may reproduce my image of an apple in your own inner eye. But the referent for the signifier “apple” is not “the mental image of an apple” but an actual apple. So the linkage between signifier and signified in language (the relationship of a “sign”) is qualitatively different from the relationship between a recorded physical object or pattern and its reproduction.

UBP then says:

This same dynamic is found in all other cases of recorded information. I have previously used the example of a bee’s dance; a bee dancing in a particular way during flight is a separate thing than having the other bees fly off in a particular direction, and the relationship between the two is brought about by a protocol which physically exist in the sensory system of the bee.

In the dynamics of information transfer, the operative observation is that each of these physical things (the representations, the protocols, and their resulting effects) always remains discrete. This is one of the key observations that allows information to exist at all. The input of information is always discrete from the output effect, and the protocol that establishes the relationship between the two, remains discrete as well. They are three completely independent physical realities which share a relationship, with the protocol establishing the relationship between the representation and its effect within the system. In no case does the representation (or the protocol) ever become the effect.

Well, this seems a little circular. It seems to me that UBP is defining recorded information as something that requires a discrete protocol, and then regarding it as noteworthy that all instances of recorded information require a discrete protocol. When we lived in Canada we had a deck with a table that had an umbrella hole in the middle. One day it snowed heavily, and soon there was an inch of snow on the table, with a hole in the middle. Half an hour later, there was a foot of snow, with a depression in the middle. Someone came in and said – “what’s that dimple in the middle of the snow?” and then said – “hey, it’s the umbrella hole”. In other words, the table – an object with pattern – was being replicated with each layer of snow, with sufficient fidelity that an observer could extract from the layer of snow the information that the table had an umbrella hole. By evening there was about 4 feet of snow on the table, but there was still a dimple in the middle, indicating that the information that beneath the snow was a table with an umbrella hole had been faithfully recorded and transferred from snow-layer to snow-layer all afternoon. Yet in this case, the “representation” was also the “effect”.

However, that seems to me to be the least of the problems with UBP’s case. The far bigger problem is that there is a qualitative difference between two things. On the one hand there is a sign (in the Saussurian sense): a linked signifier–signified pair, where the signified can be a physical object and the signifier a symbol potentially renderable in a number of media, and where the transfer of information using the signifier does not result in the physical creation of the referent. On the other hand there is the information transfer in a musical box or in a reproducing organism, whereby a physical pattern is recorded in such a way that it can be reproduced – which, at its simplest, can be layers of snow on a table.

This same dynamic is found in all forms of recorded information; including those used in the information processing systems created by intelligence. As an example, the first automated fabric looms used an arrangement of holes punched into paper cards (which acted as physical representations of the resulting effects within the fabric). Sensors and pins within the machine would sense where the holes were punched, and it would use that information to change and control the colors of threads being woven. In this instance, the configuration of holes served as the representation, and the configuration of sensors served as the protocol, leading to the specified effects. Each of these is physically discrete, while sharing the immaterial relationship established by the protocol.

Well, yes, but the discreteness is, as I’ve said, only arguably intrinsic to the concept of “recorded information” and, in any case, does not render it semiotic. At least not in the Saussurian sense. I’m not sure what other sense there might be.

So here we have a series of observations regarding the physicality of recorded information which repeat themselves throughout every form – no matter whether that information is bound to humans, or human intelligence, or other living things, or non-living machines. There is a list of physical entailments of recorded information that can therefore be generalized and compiled without regard to the source of the information. In other words, the list is only about the physical entailments of the information, not its source. I am using the word “entailment” in the standard sense – to impose as a necessary result (Merriam-Webster). These physical entailments are a necessary result of the existence of recorded information transfer. And they are observable. That list includes the four material observations as discussed in the previous paragraphs: a) the existence of an arrangement of matter acting as a physical representation, b) the existence of an arrangement of matter to establish the relationship between a representation and the effect it represents within a system (the protocol), c) the existence of physical effects being driven by the input of the representations, and d) the dynamic property that they each remain discrete. Observations of systems that satisfy these four requirements confirms the existence of actual (not analogous) information transfer.

OK, let’s take these in turn:

a) the existence of an arrangement of matter acting as a physical representation

Well, maybe, though it’s a bit imprecise.  But sure, information transfer is going to entail physical arrangements of matter.  And let’s allow “representation” to be the thing-that-is-read, like DNA, or the cylinder of the musical box, or even the pattern of sounds making the word “apple” and let that representation be of something (a whole organism; a melody; an apple).

b) the existence of an arrangement of matter to establish the relationship between a representation and the effect it represents within a system (the protocol)

Well, no. In the case of the linkage between the signifier “apple” and its referent, the piece of fruit, there is no “arrangement of matter”. There is some kind of “arrangement of matter” that links the signifier “apple” to the evocation of the idea of an apple in a hearer, but the “idea of an apple” is not the referent of the signifier “apple”. What links the word “apple” to apples is shared agreement among a community of speakers that “apple” means apple. And that agreement is transmitted culturally by various means, not by a single protocol: largely by usage, and sometimes by trial and error (for years I thought the referent for “caution” was “luggage” because on the back of the seats of the bus I used to catch to school it said “caution racks overhead”, and I also knew there were “caution lorries”). And even if we allowed this as the “protocol” UBP refers to, no amount of cultural agreement that “apple” means apple will make an apple assemble itself when someone says the word “apple”.

c) the existence of physical effects being driven by the input of the representations

Well, I guess I would agree, not least because, as a “materialist” neuroscientist, I would say that the uttering of the word “apple” – or even the activation, at sub-execution threshold, of the motor program that would result in the uttering of the word – is driven by input of some sort, even if there is no-one except me to “hear” the word/thought.

d) the dynamic property that they each remain discrete

Well, I’m not sure what is premise and what is conclusion here. But let it pass for now.

These same entailments are is found in the transfer of information from a nucleic sequence. During protein synthesis a selected sequence of nucleotides are copied, and the representations contained within that copy are fed into a ribosome. The output of that ribosome is a chain of amino acids which will then become the protein being prescribed by the input sequence. The input of information is therefore driving the output production. But the input and the output are physically discrete, as evidenced by the fact that the don’t directly interact, and that the material output is not assembled from the material input.

Well, no. The problem seems to be entailment b, as it always has been. A semiotic system relates a signifier to a signified so that two members of a shared linguistic community can communicate ideas – i.e. one member of the community can evoke in the mind of another member the idea s/he is currently entertaining. It is also key to thought itself. It is not a system that records a physical pattern and then reproduces it – or rather, in the only sense it is such a system (I can partially reproduce in your brain the thoughts I am currently having by means of this written symbol system), it doesn’t map on to either a musical box or protein synthesis. The referents of my signifiers are not my thoughts, but real-world objects, and abstract concepts. Those real-world objects and abstract concepts are not brought into actuality when I utter a word. Unfortunately.

The exchange of information (from input to output) is facilitated by a set of special physical objects – the protocols – tRNA and its entourage of aminoacyl synthetase. Acting together they make it possible for the input to alter the output, and they do so by allowing them to remain separate. The tRNA physically bridges the gap between the input and the output, acting as a passive carrier of the physical protocol. It accomplishes this by being charged with the correct amino acid by the synthetases (the only molecules in biology which actually hold the rules to the genetic code). The synthetases accomplish their tasks by being able to physically recognize both the tRNAs and the amino acids. They charge the tRNAs with their correct amino acids before they ever enter the ribosome. The actions of the synthetases are therefore completely isolated from both the input and output. In other words, the only molecules in biology that can set the rule that “this maps to that” are physically isolated from both the input and output, while the input and output remain isolated themselves.

Sure.  As in a musical box.  But not as in a semiotic system.
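
For readers who want the shape of that codon-to-amino-acid mapping spelled out, here is a minimal sketch of translation as a lookup table (only a handful of real codon assignments are included, and a dictionary lookup is of course not how tRNAs and synthetases physically do it; the point is only that the association between triplet and amino acid is carried by the “protocol”, not by any direct interaction between the input sequence and the output chain).

    # A few entries from the standard genetic code (mRNA codons).
    # The real table has 64 codons; this is a deliberately tiny subset.
    GENETIC_CODE = {
        "AUG": "Met",   # also the start codon
        "UUU": "Phe", "CUU": "Leu", "GCU": "Ala",
        "AAA": "Lys", "GGC": "Gly",
        "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
    }

    def translate(mrna):
        """Read an mRNA string three bases at a time and return the peptide."""
        peptide = []
        for i in range(0, len(mrna) - 2, 3):
            residue = GENETIC_CODE.get(mrna[i:i + 3], "???")
            if residue == "STOP":
                break
            peptide.append(residue)
        return peptide

    print(translate("AUGUUUGCUAAAUAA"))   # ['Met', 'Phe', 'Ala', 'Lys']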

These observations establish that the entailed objects (and dynamic relationships) exist the same in the translation of genetic information as they do in any other type of recorded information (in every example from human language, to computer and machine code, to a bee’s dance). These observations have been attacked as being as a misuse of the definition of words (a semantic word game, as you call it). But I have already produced the definitions of the words from a standard dictionary; I’ve restated the observations using those definitions in place of the words themselves; and I have asked the question: “If in one instance we have a thing that actually is a symbolic representation, and in another we have something that just acts like a symbolic representation – then someone can surely look at the physical evidence and point to the distinction between the two. There is also the simple fact that there is nothing about the attachment of cytosine to thymine to adenine that intrinsically means “bind leucine to a nearby polypeptide” as an inherent property of its matter. That is a quality beyond its mere materiality, one it takes on by being in a system with the correct protocol to cause that effect from that arrangement of matter.

But not as in human language.  Nor in computer code for exactly the same reason.

There has also been the profoundly illogical objection that because these things follow physical law (and can be understood), they cannot be considered symbols or symbolic representations. Not only does this deny the existence of any symbol in the extreme, it fails for the obvious reason that everything follows physical law. If something can’t be true because it follows the same laws as everything else, then we have entered the Twilight Zone.

The objection is not that because the information transfer follows physical laws it cannot be considered in terms of symbols or symbolic representation. I think symbols and symbolic representations are also the outcome of the operation of physical laws. The objection (or mine, at any rate) is that in a cell, or in a musical box, a physical pattern is recorded, and later reproduced, by means, as UBP says, of a physical protocol that links the original to the reproduction (I’m not bothered about whether there is a discrete intermediary or not; that doesn’t seem to me to be the crucial issue). Whereas in language, we have signs in which a vocal signifier/symbol is linked to a real-world signified/referent, enabling information about the referent to be exchanged between speakers; this does not result in the reproduction of the signified.

So going back to your comment, a fair reading suggests that the information transfer in the genome shouldn’t be expected to adhere to the qualities of other forms of information transfer. But as it turns out, it faithfully follows the same physical dynamics as any other form of recorded information. As for “disciplines”, you will notice that these observations are very much in the domain of semiotics. Demonstrating a system that satisfies the entailments (physical consequences) of recorded information, also confirms the existence of a semiotic state. It does so observationally. Yet, the descriptions of these entailments makes no reference to a mind. Certainly a living being with a mind can be tied to the observations of information transfer, but so can other living things and non-living machinery. It must be acknowledged, human beings did not invent iterative representative systems, or recorded information. We came along later and discovered they already existed.

And so we see where UBP makes his error, having started from a false premise, or at least a false analogy. A semiotic system does indeed require a mind, because the link between signified and signifier is one that enables information about the signified to be transferred between members of a linguistic community of minds. But the kind of information transfer UBP calls “recorded information”, and observes in musical boxes and cells, is a quite different kind of information transfer system, in which a physical original is reproduced at a later time, by means of a system of “protocols” that can be (IMO) as simple as a direct template (the reproduction in one iteration becomes the template for the next) or as complex as a self-replicating modern cell, in which the original, or “parent”, is reproduced as a “daughter”. Whether or not something like the genetic code is involved, or whether “discrete” physical intermediaries like a specific set of tRNA molecules are required for faithful reproduction, seems to me irrelevant to the question of whether a mind is required. A mind is certainly required for a semiotic system, but no matter how clever the mapping of codon to amino acid is, the result is not a Saussurian “sign” – there is no mapping of “signifier” to “signified” within a linguistic community of minds, simply a mapping of one molecule to another, the final result being the physical replication of the original, not the evocation of the original in another mind.

Therefore, the search for an answer to the rise of the recorded information in the genome needs to focus on mechanisms that can give rise to a semiotic state, since that is the way we find it.

Well, no.

We need a mechanism that can cause an arrangement of matter to serve as a physical representation. We need a mechanism that can establish within a physical object a relationship between two discrete things. To explain the existence of recorded information, we need a mechanism to satisfy the observed physical consequences of recorded information

Sure.  But that does not require positing a mind, because “recorded information” as defined by UBP does not require a semiotic system.

I’ll invite Upright BiPed over to discuss this, and I do hope he will be willing to discuss it here.  It’s a conversation that has looped over many threads at UD, and has been difficult to follow as a result (not helped by the new nested comments system, which is making me wonder whether I should abandon nested comments here….)

41 thoughts on “Upright BiPed’s Semiotic Argument for Design”

  1. “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.”

    It was difficult for me to follow the argument, so I’m not sure what all this is about. Assuming “semiotic” ~ “semantic,” I quoted Shannon (1948) in what I regard as a very unfortunate statement about the irrelevance of semantics to the fundamental problem of communication. I think Shannon simply wants to proceed w/o considering semantics because it’s a very difficult subject, as the exchange between UPD and Elizabeth illustrates. IOW Shannon doesn’t want to open that whole can-o-worms. I don’t blame him, but I also think its avoidable. I think the problem is implicit in his statement about what is the “fundamental problem of communication.” The first question I would ask of Shannon is why reproducing a message “either exactly or approximately” is a problem.
    My answer would be that the “fundamental problem of communication,” high-fidelity reproduction of information, is due to the fact that “frequently the messages have meaning,” i.e., “are correlated according to some system with certain physical or conceptual entities.” With the emphasis on “some system.” (A niggling point: On other grounds, namely Kolmogorov’s theory, we have reason to believe that the set of messages with any “semantic” content is a vanishingly small subset of all possible messages. IDers prick up your ears, because I know you are interested in vanishingly small probability arguments.)
    It is the very fact that messages sometimes have meaning – whether or not any particular message does, and whether we understand that meaning as trivial or profound – that makes fidelity the prior requirement, and hence the fundamental problem. I don’t think anyone really thinks that the faithful reproduction of meaningless information is problematic with respect to fidelity of reproduction. Meaningful information is a “difference that makes a difference.” Meaningless information makes no difference, so what difference does it make if such information is reproduced with any level of fidelity?
    IOW communications theory and information theory (and the probability theory upon which they are largely grounded) are all about semantics. The fundamental problem is due to the fact that we require our system of semantics – the correlations between physical and physical, physical and conceptual, and conceptual and conceptual – to be faithful, true, high-fidelity. So it’s all about semiotics, Upright Biped.

  2. Rock: It was difficult for me to follow the argument, so I’m not sure what all this is about. Assuming “semiotic” ~ “semantic,” I quoted Shannon (1948) in what I regard as a very unfortunate statement about the irrelevance of semantics to the fundamental problem of communication. I think Shannon simply wants to proceed w/o considering semantics because it’s a very difficult subject, as the exchange between UPD and Elizabeth illustrates. IOW Shannon doesn’t want to open that whole can-o-worms.

    I am commenting as a mathematician and computer scientist. When Shannon wrote about the irrelevance of semantics, I see him as actually making a point about the irrelevance of semantics (to communication). Shannon might well have agreed that semantics is a difficult subject. But his point was that he was solving an engineering problem, and the complexities of semantics played no role in that engineering problem.

    In our ordinary use, for example in speech, we communicate by forming a representation of the semantics in the form of a sequence of symbols. Those symbols could be phonemes (verbal communication) or letters (written communication). The problem for the engineer is one of accurately communicating the sequence of symbols. Neither the initial encoding of semantics into symbolic form, nor the later interpretation of the symbolic representation, plays any role. Those are outside of the engineering problem. It suffices for the engineers to accurately communicate the sequence of symbols.
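
    A toy illustration of that symbol-level framing (a sketch only; the message sets below are invented, and real sources are of course not equiprobable): identifying one message out of four costs two bits, whether or not the messages mean anything.

        import math

        # Shannon's framing: information is about selecting one message from a
        # set of possible messages, regardless of what (if anything) they mean.
        meaningful = ["attack at dawn", "retreat at dusk", "hold position", "send supplies"]
        gibberish = ["xqzt flub", "wopr ningle", "brap vorx", "klonk dreet"]

        def bits_to_select(messages):
            """Bits needed to identify one message from an equiprobable set."""
            return math.log2(len(messages))

        # Both sets cost exactly 2 bits per selection; the semantics are
        # invisible to the engineering problem.
        print(bits_to_select(meaningful), bits_to_select(gibberish))  # 2.0 2.0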

  3. Excellent topic Elizabeth – a good mix of the philosophical and the practical. Let me see if I’m grasping the gist of your argument, and please tell me if I’m missing any important detail.

    UBP’s argument is that all symbolic (semiotic) communication follows a similar protocol, in that the input maps to the output through a symbolic intermediary that is independent of either input or output, and that therefore the analogy between biological information transfer and language is accurate. Furthermore, something other than physical necessity (a mind) is required to create the symbolic representation.

    Lizzie points out that there is a real distinction to be made when what is communicated is not just a physical mapping from input to output (DNA to protein), but an abstract concept, like “apple”. A mind is required for the latter, but not for the former, because even though “apple” is a semantic and symbolic representation of a physical object, it is not a specific apple, but the concept of “a white fleshy fruit that grows on trees and can be red, green or yellow, and tastes like (this taste) and smells like (this smell)”, illustrating that even the semantics describing the apple are inadequate to convey the taste and smell that are invoked in my mind when I hear or read the word “apple”, and even the taste and smell that are evoked are not the same as those evoked in my mind by real sensory input.
    If so, I agree that it is an important distinction.

  4. Well, I think it’s a crucial distinction when it comes to UBP’s argument, if I am finally understanding it correctly (which I may not be).

    But he seems to be saying: cell-reproduction is information transfer, and information transfer is semiotic, and semiotics require minds, therefore cell-reproduction requires minds.

    But his analogies don’t map correctly:

    Signified: protein, melody, apple
    Signifier: DNA sequence, cylinder pins, word “apple”
    Protocol: tRNA and ribosome; cylinder pins pinging comb; the sound waveform of “apple” impinging on ear and brain.
    Readout: protein, melody, oops…..

    In the first two cases the readout is a reproduction of the signified.
    In the language case it isn’t. And that’s because the readout in the last case is something to do with a mind. In other words, the very thing that makes the semiotic case different is the very thing that Upright BiPed wants to generalise to the other two cases.

  5. Elizabeth: In the first two cases the readout is a reproduction of the signified.
    In the language case it isn’t. And that’s because the readout in the last case is something to do with a mind.

    But doesn’t the melody case also have something to do with a mind? How is a melody by itself, without being interpreted or experienced by a mind, really anything at all? It’s just a sequence of variations in air pressure.

  6. Well, I think that’s the part that misled Upright BiPed. Yes, of course, the music box is designed, and, because it’s designed (by something with a mind), the input (the “signified” in UBP’s analogy) is something mind-made (a melody) and the output is that melody.

    But we could take something else – a photocopier perhaps, or any machine that takes a physical object as input, and produces a simulacrum as output. And we could have the usual argument about whether such a machine could possibly have come about without a designer.

    But this doesn’t seem to be UBP’s argument. He is making a “Semiotic Argument for Design”. My point is that a replication system is not a semiotic system. And while I’d readily agree that a semiotic system necessarily requires a community of minds, because a living cell is not a semiotic system, it does not necessarily require a community of minds.

    So his argument fails. And we are left with a real issue, which is: how could the first self-replicators have come about? (Because of course cells are much more interesting than music boxes, which only reproduce the input; they do not reproduce themselves.)

  7. Elizabeth: And we are left with a real issue, which is: how could the first self-replicators have come about?

    Right. In fact, the whole notion of DNA as information doesn’t make any sense to me. Sure, if one (by “one” I mean a person with a mind) understands the mapping between codons and amino acids and proteins, then one can think of DNA as a kind of information, but it’s only information when viewed that way by us. In practice (i.e. during the processes we call life), there is no information processing going on; it’s not like enzymes are “interpreting” the DNA and shuffling off to create proteins. It’s all just necessary chemistry.

    On the other hand, I’m never certain what IDers mean when they use the word “information”. It’s a word that seems to mean a lot of things and (ironically) can create a lot of confusion.

  8. b) the existence of an arrangement of matter (or/and energy) to establish the relationship between a representation and the effect it represents within a system (the protocol)

    Re. your objections for b) I added (or/and energy)

    Patterns of matter and energy which establish relationship between sound “apple” and “yummy round red edible object” certainly exist in our brains. We have been trained in our childhood in establishing these patterns. We can consider these patterns as protocol.

    For example, before learning English the sound “apple” wouldn’t have meant much to me. I was using a different protocol then (Croatian), so the sound “jabuka” would trigger a search in my brain’s relational database and find “yummy round red edible object”.

  9. Hi Eugen,
    (and hi Elizabeth & everybody else – I’ve been a fond lurker of your blog until now and an ardent admirer of your stamina at UD; some people might recognize me as madbat089 from PT or molch from UD, where I mostly lurk also (PhD theses are huge time-suckers), but occasionally comment)

    … but I digress, back to Eugen, you said: “Patterns of matter and energy which establish relationship between sound “apple” and “yummy round red edible object” certainly exist in our brains. We have been trained in our childhood in establishing these patterns. We can consider these patterns as protocol.”

    Well, as Elizabeth pointed out earlier, the arrangement of matter and energy in our brains connecting the sound “apple” to any association can only be considered a protocol in the sense that it is a shared agreement upon the use of symbols among a group of language users. Hence, it is not as precise a protocol as the one required, e.g., in a music box or in protein biosynthesis. It is subject to large cultural and individual variation. And it explicitly encompasses a lot of this variation because it is a categorical symbol, as most symbols, i.e. words, in human-made languages are. This variation is a direct consequence of a symbolic, rather than a physical or chemical, relationship. And large variation, accommodating the sorting of objects into categories, is only sustainable BECAUSE the interaction in question has no direct physical or chemical consequence.

    If somebody hearing me say “apple” actually made the object that corresponds to the idea “apple” in the listener’s brain physically appear (say, a large, shiny, green, tart Granny Smith from the grocery store), it would most likely not be the same object I had in mind when I said “apple” (say, a red, mealy, dented fruit from the feral apple tree in my back yard, waiting to turn into cider). Both these objects fall into the agreed-upon symbolic category “apple”. But in the majority of cases, the speaker would want the kind of apple they had in mind when they were saying the word, if somebody else hearing them say it actually resulted in the physical creation of an apple. Thus, the use of the word “apple” would likely be largely abandoned in favor of precise, specific protocols.

    The music box, likewise, doesn’t produce just categorically “music”; it produces a specific, precise melody, according to the precise protocol inside. If the protocol isn’t precise, the melody will be distorted. In our symbolic information transfer, called language, the distorted melody will still fall into the category “music”, but it’s not the kind of music we want. The way we can tell it’s not the kind of music we want is by the actual physical effect created, by experiencing the faithfully produced melody. We’ll call the repairman to fix it. We know he fixed it successfully if the melody we listen to is exactly what we want it to be. The fact that I can tell you, and you can understand, according to a shared use of language, that the box plays “music” is not helpful in the least in determining whether the repair was successful. Thus, the “protocols” our brains create for the use of language have really nothing in common with the protocols in physical/chemical relationships between representations and effects.

  10. Sorry, “I don’t blame him, but I also think its avoidable.” I don’t think it’s “avoidable.” The problem of semantics is unavoidable. The problem, and at least part of the solution, is written all over Shannon’s theory.

    Why the requirement to fidelity?

    Fidelity to what?

  11. madbat089: an ardent admirer of your stamina at UD

    Ditto, and I really appreciate the tireless work of Elizabeth and Dr. Bot in confronting KF and his non-stop avalanche of bafflegab. Great work!

  12. Y’all are skeptics, right? And reasonable. I’m skeptical of Shannon’s ability to formulate a theory of communications sans semantics. I invite you to share my skepticism.

    So what is meaningful 2U?

    And how are you going to communicate that 2me?

    Very precisely? As precisely as the media of communications and the ingenuity it requires to convey any such information allows?

    Why so? Why is it so important to convey such information as nearly exactly as possible? Why is it an engineering problem? A problem in design. Is it problematic in any other case? In any other case where info is involved?

    What is meaningful?

  13. Madbat,
    thanks for your reply. I agree with most of your points except one.

    Database-type relationships in our brains could be considered software protocols, then, rather than the physical (hardware) protocols found in the music box example. A similar situation exists with electronic memory, where patterns are stored via arrangements of electrons in the matrix. Definitely a different substrate, but a similar principle.
    Regarding the precision of mapping a category of objects to sounds, again I’ll use an example from my first language (considering it a software protocol). On hearing the sound “apple” I would have completely missed the category in my younger days, before knowing English.
    So a wrong protocol, as part of an information system, will produce nonsensical or disastrous results, regardless of whether that protocol is implemented in hardware or software.
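
    To put that “software protocol” picture in miniature (a toy sketch only, not a claim about how brains actually store associations; the word/concept pairs are invented), the same heard sound either finds an association or doesn’t, depending entirely on which protocol the hearer carries:

        # Toy "protocols": two lookup tables linking sound patterns to concepts.
        english_protocol = {"apple": "yummy round red edible object",
                            "dog": "furry four-legged barker"}
        croatian_protocol = {"jabuka": "yummy round red edible object",
                             "pas": "furry four-legged barker"}

        def interpret(sound, protocol):
            """Look the heard sound up in whichever protocol the hearer carries."""
            return protocol.get(sound, "??? (no association)")

        print(interpret("apple", croatian_protocol))  # ??? (no association)
        print(interpret("apple", english_protocol))   # yummy round red edible object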

  14. O lny srmat poelpe can raed tihs.
    I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid, aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, t he olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rgh it pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Amzanig huh? yaeh and I awlyas tghuhot slpeling was ipmorantt! if you can raed tihs psas it on !!

  15. Troy:
    O lny srmat poelpe can raed tihs.
    I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid, aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, t he olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rgh it pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Amzanig huh? yaeh and I awlyas tghuhot slpeling was ipmorantt! if you can raed tihs psas it on !!

    I wouldn’t say I can read it without any problems – and my spell-checker is having a fit – but I can read it. Our ability to do so seems to operate broadly on the same principle that underlies memory, namely, pick out salient features and then make educated guesses on what fits in between.

  16. Cool stuff Troy. I have seen it before, it’s amazing. I had no problem reading it.

    I guess our brains apply a form of error correction, which reminds me of the forward error correction (FEC) method.
    FEC is normally used in one-way communications and data storage.
    The basic principle is easy to understand. http://en.wikipedia.org/wiki/Forward_error_correction

    Now the interesting part: there is a variant of FEC built into one very important cell process.
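
    A minimal sketch of the basic principle (the simplest possible FEC, a 3x repetition code; real codes such as Hamming or Reed–Solomon are far more efficient, and nothing here is meant to represent the cell process alluded to above):

        # Minimal forward error correction: a 3x repetition code.
        # Each bit is sent three times; the receiver takes a majority vote,
        # so one flipped copy per triplet is corrected with no retransmission.
        def encode(bits):
            return [b for b in bits for _ in range(3)]

        def decode(received):
            out = []
            for i in range(0, len(received), 3):
                triplet = received[i:i + 3]
                out.append(1 if sum(triplet) >= 2 else 0)
            return out

        message = [1, 0, 1, 1]
        sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
        sent[4] = 1                     # noise flips one copy of the second bit
        print(decode(sent) == message)  # True: the error has been corrected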

  17. Hi Eugen,
    thanks for your reply also.

    I am not sure what you mean by “database type relationships in our brains”. I agree that what goes on in brains when they receive sensory data, analyze it, and produce behavioural output is more similar to what goes on during data input, analysis, and output in a computer than to the relationships between input, protocol, and output in a music box or in protein biosynthesis. I am not sure though if calling what connects the sound apple in our brains to an eventual behavioural reaction a “software protocol” is particularly meaningful or helpful (let alone accurate – we don’t really understand our brains all that well yet). I guess it depends on where you are going with this line of reasoning?

    I completely agree with your addition to my point on information transfer via categorical symbols: if the associations between symbol and referent object are either wrong or absent, the information transfer fails (I completely sympathize with it too, as a non-native English speaker myself – waving from Germany to Croatia). The difference between a wrong protocol in a physical / chemical information transfer and a wrong association in a semiotic context is that the wrong protocol is immediately detectable by the wrong physical output, whereas the wrong association can remain hidden for a long time, until it is either corrected by a newly received association or reveals itself in a clearly wrong behavioural reaction.

  18. Madbat

    Brief descriptions of these ideas can be very unclear. It takes a whole paragraph, instead of a few words, to describe a concept fully, just like you did above.

    First, I agree with you that the brain (in my case, half a brain) 🙂 operates more like an electronic computer than a mechanical setup like a music box. It’s good to keep in mind that the basic principles are the same regardless of whether the domain is mechanical, electro-mechanical, electronic or biochemical. That is where I would like to go with my reasoning, more or less.

    Where confusion comes from is difficulty in recognizing and delineating components of the information processing setup. Two groups are present during the process: 1) logical flow and 2) real components (made of substrate from whatever domain we are analyzing).

    Logical flow can be studied, extracted and mapped. Logic will not always map directly to real components, even though the arrangement of real components enforces it. Therefore we will have two maps: logical – describing the process flow – and physical – describing the arrangement of real components.

    While substrate is needed, it’s just a playground for information.

  19. Eugen:

    Where confusion comes from is difficulty in recognizing and delineating components of the information processing setup. Two groups are present during the process: 1) logical flow and 2) real components (made of substrate from whatever domain we are analyzing).

    Logical flow can be studied, extracted and mapped. Logic will not always map directly to real components, even though the arrangement of real components enforces it. Therefore we will have two maps: logical – describing the process flow – and physical – describing the arrangement of real components.

    While substrate is needed, it’s just a playground for information.

    Well, I think confusion mostly comes in when you look at it from the direction you do. I think you are looking at it completely backwards when you say substrate is just the playground for information.

    I would say: substrate, i.e. physical entities just are what they are. Physical entities have physical and chemical interactions with each other. The particular interactions depend on the particular combinations and proximities they are found in. Those interactions will result in new physical entities, i.e. products (or “output”) of the initial entities (“input”) and the interactions between them (what you call “the protocol”). What you call logical flow seems to simply be a description by human observers trying to categorize and define what they observe.

    So it seems a lot more fitting to say that information is the playground the human mind uses to organize observations of physical interactions.

  20. Madbat

    The music box will turn out to be of more help here. A music box is a mechanical sequencer. Interestingly, a very popular device in the music industry is the electronic sequencer. This is a perfect example of where the real components from different domains can have the same or similar function when arranged logically. Another interesting point is that map of logic (process) flow would be the same or similar whether we are looking at mechanical or electronic sequencer.

    Substrate, domain, whatever you want to call it, is not in itself important; the key to understanding is in the arrangement of real components. Most interesting would be to find out what causes are capable of arranging components logically to be functional – basically, looking for a general principle. This principle is obviously applicable in our example of the sequencer/music box.

    I dare to imagine it is possible to create a chemical sequencer with certain arrangements of chemical units. Sequencers are also indispensable in the automation industry, controlling batch processes, appliances or traffic lights, for example.

    If you are interested in seeing a practical example of logic and real component maps I’ll be glad to show you.
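
    As a toy illustration of one logical map driving two different physical maps (everything here – the step pattern and both renderers – is invented purely for illustration):

        # One "logical map" (a step sequence) realized on two "substrates".
        STEP_SEQUENCE = ["C4", "E4", "G4", "E4"]   # the logical flow

        def render_music_box(steps):
            # Mechanical realization: each step is a pin striking a comb tooth.
            return ["pin strikes tooth tuned to " + note for note in steps]

        def render_synth(steps):
            # Electronic realization: each step gates an oscillator at a pitch.
            return ["oscillator gated at pitch " + note for note in steps]

        # Two different physical maps, one logical map.
        for line in render_music_box(STEP_SEQUENCE) + render_synth(STEP_SEQUENCE):
            print(line)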

  21. Eugen:

    you said: “Another interesting point is that map of logic (process) flow would be the same or similar whether we are looking at mechanical or electronic sequencer.”

    Of course it is – because the map of logic flow is a map that we as observers create according to the function we can discern (or the function we defined in advance): if the function is equivalent, the logic map is equivalent.

    “Most interesting would be to find out what causes are capable of arranging components logically to be functional”

    That question really only makes sense in cases where the function in question is pre-specified. In any case where we detect a function arising from the arrangement of components after the fact, the question is obviously not meaningful.

  22. Elizabeth,

    You said, “My point is that a replication system is not a semiotic system.”

    I think this is more accurate:
    Some, but not all semiotic systems are replication systems. And some, but not all replication systems are semiotic systems.

    So UBP is wrong to conclude that because DNA is a replication system, it must also be a semiotic system.

    However a semiotic system can also be a replication system. Just change the signified in your example from “apple” to “plant an apple tree”

  23. Upright BiPed’s response on UD:

    Dr Liddle,

    You asked me to respond to a post you made elsewhere. This is that response, but I make it here where you left our last conversation. Your counter-argument is based on a major false premise and several instances of a failure in conceptualization. These permeate your comments. I’ll deal with that false premise, but first a couple of observations about your response.

    So UBP’s starting point seems to be that the “information” we say a genome contains is not different from “information” in other senses.

    Different information systems are different by virtue of their implementation into different substrates, and also in the types of effects that result, but the physical objects involved in the transfer share the same dynamic relationships.

    A musical box is an interesting example because, unlike some other information transfer systems (such as the pixels on the screen you are viewing right now, from which you are receiving information about the contents of Upright BiPed’s post on UD), the sequence of pins on a music box cylinder is actually instrumental in making something else, in this case a melody. It thus bears a closer homology to the sequence of base pairs on a DNA molecule which, by a series of physical operations, results in the making of a protein.

    The pixels on your screen are “making something else” as well. This is evidenced by the simple fact you know the contents of my post. The substrates change, the effects change, but the dynamics of the transfer stay the same.

    UBP claims the music box represents “recorded information”, which implies that the information started elsewhere and was “recorded” on to the music box. However, I think he is making the point that without a mechanism to get the “recorded” music box back into the form of music again, the information isn’t truly recorded, which seems fair enough. After all, if I translated my post into unbreakable code, it wouldn’t really be recorded information because there would be no way of getting the information back out again. So in the music box example, the music box is a way of “recording” a piece of music composed by someone, and getting that piece of music back out again, at a different time. A phonograph recording (the old wax cylinder kind) would be an even better example.

    If you took the cylinder out of a music box and then lost the box, the representations would still be “truly recorded” even if the box was lost. If you never found the music box, or a replacement, then that would simply be a melody you’ll never hear. That doesn’t change the representation in the cylinder.

    Translating your post into an unbreakable code poses some problems. An unbreakable code, as a matter of principle (practicality may differ) is a code without rules, and a code without rules isn’t a code at all. Moreover, translating your post into a code with no rules has nothing to do with the word “translate”. How would you accomplish it? To say that “it wouldn’t really be recorded information because there would be no way of getting the information back out again” is to make an observation about non-existent entities that have nothing to do with the topic (recorded information transfer). It’s fluff.

    In my last post to you I made the point that you repeatedly try to smuggle a mind into the conversation. The reason for this is obvious; it primes the pump that the observations are anthropocentric, and therefore flawed. But it doesn’t work. The fact that humans are symbol makers is not a question; of course we are. But even if we weren’t, the dynamics of information transfer among humans (as in non-human transfer) wouldn’t change one iota. The anthropocentric flaw is not being able to remove yourself from the sample.

    No, I don’t think that works. For a start, “Lizzie has seen an apple” is not a recording of Lizzie’s thought, it is an inference about what Lizzie was thinking. Lizzie’s hearer has indeed received information from Lizzie, but not exactly the information that Lizzie sent. So it seems to me that language is a very different kind of information transfer system from a phonograph, or a musical box, or, indeed a reproducing organism.

    Lizzie is a symbol-maker saying “I’d like an apple” to another symbol-maker. That fact doesn’t change the dynamics of the transfer in any way; it only changes the effect of that information in the hands of the a free-agent receiver. What you describe as a “very different kind of information transfer” is only a very different kind of effect, coming as the result of a free agent being the receiver. But, the dynamics of the transfer haven’t changed. Like I said, you have to remove yourself from the sample.

    In these three examples, we start with a physical pattern of some kind (a performance, a melody, an organism) and we end with a recreation of that physical pattern (a rendering of that performance, the sound of that melody, a second organism). In the case of language we do not. There is no protocol that can create an apple from the word apple, although uttering the word may induce someone else to go fetch one.

    Okay, so maybe it didn’t occur to you that the effect of the sound pattern “apple” is not the sudden appearance of an apple coming from the pattern of the sound. The fact remains that the word “apple” has an effect, and the actualization of that effect (from the sound pattern of the word) follows the same dynamics as any other form of recorded information transfer.

    Again, remove yourself from the sample. Stop injecting issues that only pertain to you as a symbol-maker. Observations having to do with what a free agent can do with information does not change the physical dynamics observed in the transfer.

    This, it seems to me is because the word “apple” is a symbol, or, in Saussure’s term, a signifier that is linked to a signified (aka referent) in this case a specific kind of fruit. UBP appears to want to say that this linkage between signifier (word) and signified (fruit) is equivalent to the link between the sequence on a music box cylinder and the melody that emerges, and that therefore the music box cylinder (and therefore a base sequence in a polynucleotide too) is a symbol for the sound pattern that emerges from the music box in the same way as the word “apple” is a symbol for an actual apple.

    But it clearly is not. When I say the word apple, and you hear it, no apple is created, though you may reproduce my image of an apple in your own inner eye. But the referent for the signifier “apple” is not “the mental image of an apple” but an actual apple. So the linkage between signifier and signified in language (the relationship of a “sign”) is qualitatively different from the relationship between a recorded physical object or pattern and its reproduction.

    Well, it was obvious from the start this was where you were heading, and you’ve done me the favor of encapsulating your error in a single sentence: “But the referent for the signifier “apple” is not “the mental image of an apple” but an actual apple.” So my question to you is simple:

    Do you have an apple in your head -or- Do you have a “mental image of an apple”?

    Really, Dr Liddle. Have you been taught that when an animal communicates it doesn’t know it’s communicating, so it expects apples to appear as it gestures? And will you please take special note; none of this anthropomorphism has anything to do with the observed dynamics of information transfer, instead it revolves around a certain (repeating) disciplinary issue.

    I say again, you are a natural symbol-maker. You transfer information. This is what you do. Accept that, then to the best of your ability, remove yourself from the sample. Recorded information goes in a lot of different directions. It’s an anthropocentric error to continually describe a particular aspect of being human as if that aspect alters the observed dynamics. It doesn’t. I suspect that you probably know this, but are left to ponder the sudden appearance of apples. This is what the evidence of your rebuttal would indicate.

    It seems to me that UBP is defining recorded information as something that requires a discrete protocol, and then regarding it as noteworthy that all instances of recorded information require a discrete protocol.

    This is a question of the structure of the system. In order to make your case, you need to deal with what you’ve ignored in your objection. Recorded information is an abstraction (within a system) which is represented in an arrangement of matter/energy. For one thing to represent another thing within a system, it must be separate from it. If it is a separate thing, then there must be something that physically establishes the relationship between the two. That is what the protocol does. The dynamic involved is that all three of these physical things remain discrete, and this has been validated by observation.

    And finally, describing the parts of a system does not result in a circular argument.
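    To make the representation/protocol/effect triad just described concrete, here is a minimal sketch in Python. It is purely illustrative: the toy music-box names (PIN_PATTERN, COMB_PROTOCOL, play) and the note-to-frequency values are assumptions of the sketch, not anything drawn from the exchange above. The pin arrangement plays the role of the representation, a separate lookup table plays the role of the protocol, and the resulting pitches are the effect.

        # Toy music box: representation, protocol, and effect kept as three discrete things.
        # All names and values here are illustrative assumptions, not part of the argument.

        # Representation: an arrangement of "pins" (note names at successive time steps).
        PIN_PATTERN = ["C", "E", "G", "E", "C"]

        # Protocol: a separate object that establishes what each pin stands for
        # (here, a mapping from note names to approximate frequencies in Hz).
        COMB_PROTOCOL = {"C": 261.63, "E": 329.63, "G": 392.00}

        def play(pins, protocol):
            """Effect: the melody produced when the protocol is applied to the pins."""
            return [protocol[pin] for pin in pins]

        melody = play(PIN_PATTERN, COMB_PROTOCOL)
        print(melody)  # [261.63, 329.63, 392.0, 329.63, 261.63]

    Removing the protocol, or swapping in a different comb, leaves the pin pattern physically unchanged but changes or abolishes the effect; that is the sense in which the three remain discrete.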

    In other words, the table – an object with pattern – was being replicated with each layer of snow, with sufficient fidelity that an observer could extract from the layer of snow the information that the table had an umbrella hole. By evening there was about 4 feet of snow on the table, but there was still a dimple in the middle, indicating that the information that beneath the snow was a table with an umbrella hole had been faithfully recorded and transferred from snow-layer to snow-layer all afternoon. Yet in this case, the “representation” was also the “effect”.

    Your table wasn’t being replicated (or represented); that was just snow. What you say was a representation wasn’t a representation. A representation is an arrangement of matter in order to cause an effect within a system. It wasn’t a representation you saw outside, it was a table covered in snow. It had a hole in the center of it, which left a dimple in the snow. That dimple made you think of the hole. You then end this anthropic adventure by concluding the “representation was also the effect”. It wasn’t. The representation was a neural pattern going to your visual cortex and beyond. The end effect was “There’s a hole in the table”. Those are not the same thing – and – you’ve put yourself right back into the sample, making observations that only matter to a human.

    However, that seems to me to be the least of the problems with UBP’s case. The far bigger problem is that there is a qualitative difference between, on the one hand, a sign (in the Saussurian sense), namely a linked signifier-and-signified pair, where the signified can be a physical object, where the signifier is a symbol potentially renderable in a number of media, and where the transfer of information using the signifier does not result in the physical creation of the referent; and, on the other hand, the information transfer in a musical box or in a reproducing organism, whereby a physical pattern is recorded in such a way that it can be reproduced – which, at its simplest, can be layers of snow on a table.

    Here you say there is a difference between:

    a) A Saussurian sign [signifier+signified] where the signified can be an object and the signifier can be a symbol.

    …and

    b) where a “signifier does not result in the physical creation of the referent”

    …and

    c) a music box or an organism where something can be reproduced, like layers of snow on a table.

    I respond:

    a) Firstly, a Saussurian “sign” [signifier+signified] is a linguistics concept that does not invalidate biosemiotics or information theory. In any case, a signifier cannot result in a signified without a protocol. That protocol may exist in a living interpreter (such as a human, or a bee), or it can be instantiated in a machine (such as a music box or a fabric loom). In each of these cases, the protocol will be separate from the signifier and the signified, and it will establish the relationship between the two.

    b) There is no principle involved which would require a representation to result in the production of a physical object; only a physical effect. This is the central false premise of your objection. When a bee dances in flight in order to direct the other bees to the feeding grounds, it is not nectar that results from the dance, just a change in flight plan (which is an effect, not an object). And once again, you’ve injected yourself right back into the observation.

    c) A representation leads to an effect within a system, and those systems vary, as do their effects. And the thought of layers of snow becoming a “representation” is simply anthropocentric.

    BIPED: In this instance, the configuration of holes served as the representation, and the configuration of sensors served as the protocol, leading to the specified effects. Each of these is physically discrete, while sharing the immaterial relationship established by the protocol.

    Well, yes, but the discreteness is, as I’ve said, only arguably intrinsic to the concept of “recorded information”, and in any case it does not render it semiotic.

    Here you say that discreteness is not intrinsic to recorded information, but is only arguably so. You also used the word “concept” which is a cognitive term, one which we generally use in order to know anything at all, so I will leave it aside. (If the existence of recorded information is in doubt, then that can be addressed separately).

    Now to your objection: Over the course of this conversation I have given many examples of the discreteness observed. These observations have been given in coherent terms. In all of those instances you have never shown that the observation is incorrect. This suggests that the ‘discreteness’ is inherent based upon logical observations, and is only arguably non-inherent (and is therefore falsifiable by any contrary evidence available). I have told you of the physical entailments which are evident in the transfer of recorded information. One of those qualities is a discreteness among the physical objects involved. You then return to me to say “that doesn’t make it semiotic”. But I have already challenged that objection, and am awaiting a reply. You may remember the question:

    If in one instance we have a thing that actually is a symbolic representation, and in another we have something that just acts like a symbolic representation – then someone can surely look at the physical evidence and point to the distinction between the two.

    BIPED: a) the existence of an arrangement of matter acting as a physical representation

    Well, maybe, though it’s a bit imprecise. But sure, information transfer is going to entail physical arrangements of matter. And let’s allow “representation” to be the thing-that-is-read, like DNA, or the cylinder of the musical box, or even the pattern of sounds making the word “apple” and let that representation be of something (a whole organism; a melody; an apple).

    …or a neural pattern related to an apple, resulting in a pattern of impulses being sent to the chest and larynx.

    BIPED: b) the existence of an arrangement of matter to establish the relationship between a representation and the effect it represents within a system (the protocol)

    Well, no. In the case of the linkage between the signifier “apple” and its referent, the piece of fruit, there is no “arrangement of matter”. There is some kind of “arrangement of matter” that links the signifier “apple” to the evocation of the idea of an apple in a hearer, but the “idea of an apple” is not the referent of the signifier “apple”.

    Translating first sentence: ‘In the linkage between the word apple and the fruit apple, there is no arrangement of matter.’

    If that is true, then each time you say the word “apple” you have the uncanny good fortune of inventing it from scratch. Otherwise, there is a pattern (or patterns) in your brain that maps your knowledge of the fruit to the word and to potential downstream effects on your vocal cords. And once again, you’ve plopped yourself right down in the middle of the observations. And still, none of this changes the dynamics of the transfer in any way. The apple is not the word, and neither of them is the pattern in your brain. Again, get out of the study.

    Translating second sentence: ‘There is an arrangement of matter that links the word apple to the thought of an apple in the hearer, but the thought of an apple is not what led the speaker to use the word.’

    Again, do you have an apple in your head? You are going in circles, Dr Liddle, and I am feeling rather done with this.

    What links the word “apple” to apples is shared agreement among a community of speakers that “apple” means apple … And even if we allowed this as the “protocol” UBP refers to, no amount of cultural agreement that “apple” means apple will make an apple assemble itself when someone says the word “apple”.

    Speechless.

    The problem seems to be entailment b, as it always has been. A semiotic system relates a signifier to a signified so that two members of a shared linguistic community can communicate ideas – i.e. one member of the community can evoke in the mind of another member the idea s/he is currently entertaining.

    When a “semiotic system relates a signifier to a signified so that two members of a shared linguistic community can communicate ideas” they exchange arrangements of matter (voice patterns) that represent effects within a system (an evocation: apple) and those arrangements of matter will achieve that effect by a second arrangement of matter – a neural pattern – which is the physical instantiation of an agreement among the participants that the sound of the word “apple” represents the red fruit with the white center and the little black seeds.

    So, the voice is not the thought, and the agreement is neither of those. Either that, or there is zero physical distinction between knowing what an apple is, and not knowing what an apple is.

    The thing you need to acknowledge, Dr Liddle, is that this same dynamic happens in any transfer of recorded information, not just among members of a “shared linguistic community”. Again, remove yourself from the observation.

    The referents of my signifiers are not my thoughts, but real-world objects, and abstract concepts. Those real world objects and abstract concepts are not brought into actuality when I utter a word. Unfortunately.

    This is becoming silly. You apparently think that when you speak the word “apple” there is an apple in your head prompting you to say the word. This ridiculous deduction comes directly from someone who specifically disavows that neural patterns prompt her words – only, she says, real-world objects can accomplish that task. Well, I am a different person. I only have my sensory/cognitive systems prompting my words.

    Moreover, this is simply wallowing in an anthropocentric malaise. My background in research is certainly different from yours (we are humans measuring humans, so we tend to get out of the way). Consequently, this is not something I will continue to do. I now need to find a stopping point.

    In its entirety, your argument is based on a false premise.

    You believe that you have identified a distinction in the effects of information transfer, and that somehow, by virtue of this distinction, the semiotic argument (based on observed physical dynamics) fails. So let us put your distinction in play and follow it to its logical end. Let us say that only information transfer that produces objects is semiotic. That would mean that the exchange of words is not semiotic. Obviously that is incorrect. So let us say that only information transfer that does not produce objects is semiotic. In that case, there is no such thing as machine code (as machine codes are specifically representations and protocols which produce things). This second view suggests that machine code cannot have anything to do with representations, protocols, and effects. In other words, 01100001 is not a representation of the letter “A” and will not result in the letter “A” in a system whose protocol maps 01100001 to the letter “A”. Obviously, this is incorrect as well. So your distinction first fails at the observed real-world level, but the question remains: does it change the dynamics of the transfer? It completely fails here as well. So as I said earlier, there is no principle that information transfer must or must not result in the production of an object in order to be considered semiotic. It is only required to have a physical effect within a system, following the dynamics as set out by the observations themselves. Therefore the underlying premise in your objection has been entirely refuted.
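    The binary example can be made concrete with a short, purely illustrative Python sketch (not part of the argument above). One detail worth noting: under standard ASCII the pattern 01100001 (decimal 97) actually decodes to lowercase “a”, while uppercase “A” is 01000001 (decimal 65); the sketch below uses the latter. The bit pattern is the representation, the ASCII table applied by chr() is the protocol, and the rendered character is the effect.

        # Illustrative only: a bit pattern "means" a letter only by way of a protocol.
        # Under standard ASCII, 0b01000001 (65) decodes to "A"; 0b01100001 (97) decodes to "a".

        bit_pattern = 0b01000001       # representation: an arrangement of bits

        effect = chr(bit_pattern)      # protocol: the ASCII/Unicode code table applied by chr()
        print(effect)                  # "A" -- the effect within this system

        # The same bits read under a different protocol give a different effect,
        # e.g. treating them as an unsigned integer rather than a character code:
        print(bit_pattern)             # 65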

    If I should choose not to continue engaging you in this dialogue, I would like you to know one of the reasons why. In my comments I said …

    Demonstrating a system that satisfies the entailments (physical consequences) of recorded information also confirms the existence of a semiotic state. It does so observationally. Yet the descriptions of these entailments make no reference to a mind. Certainly a living being with a mind can be tied to the observations of information transfer, but so can other living things and non-living machinery.

    … and I substantiated each of these statements by the observation of evidence. At no time have you been able to show an error in these observations. I then went on to say:

    But Dr Liddle, you are deliberately confusing what is at issue. The output of a fabric loom being driven by holes punched into paper cards is “a physical object” as well – an object created by representations operating in a system capable of creating fabric. The nucleotides in DNA don’t know what leucine is, any more than the hole-punched cards of a fabric loom know what “blue thread” is. Or, any more than a music box cylinder knows what the key of “C” is. Observing the critical dynamics does not require any reference to a mind in any way whatsoever, yet you are repeatedly trying (as hard as possible) to inject a mind into the observations so that you can then turn around and claim that it’s all about a mind. In case you have not yet noticed, you have failed at this position every time you’ve tried it, and you will continue to do so. The reason for this is simple; the observations are correct and you are wrong.
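    The loom analogy also lends itself to a small illustrative sketch (again in Python, with an invented hole-pattern-to-thread mapping; none of the names below come from the discussion). A punched-card row is the representation, a sensor/lookup table is the protocol, and the thread actually laid down is the effect; nothing in the card “knows” what blue thread is.

        # Illustrative toy "loom": card rows (representation) are read through a sensor
        # table (protocol) to select threads, yielding a woven sequence (effect).
        # The hole patterns and colour assignments are invented for this sketch.

        CARD = [(1, 0), (0, 1), (1, 1)]       # representation: hole pattern per row

        SENSOR_PROTOCOL = {                   # protocol: which holes select which thread
            (1, 0): "blue thread",
            (0, 1): "white thread",
            (1, 1): "gold thread",
        }

        def weave(card, protocol):
            """Effect: the sequence of threads the machine actually lays down."""
            return [protocol[row] for row in card]

        print(weave(CARD, SENSOR_PROTOCOL))
        # ['blue thread', 'white thread', 'gold thread']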

    So instead of successfully attacking the correctness of the observations, you introduced Saussure’s (specifically anthropic) concept of a “sign” and have used it as a definition that somehow isn’t required to address the observations. This has the net effect of allowing you to introduce a mind without regard to the observations being made. This is, of course, pure obfuscation of the evidence. Yet, having done so, you then go on to misrepresent the argument as if none of the preceding ever occurred. You say:

    [BiPed] seems to be saying: cell-reproduction is information transfer, and information transfer is semiotic, and semiotics require minds, therefore cell-reproduction requires minds.

    You say this even though you know it is an absolute misrepresentation of the argument I’ve made. The semiotic argument is simply that the information transfer in protein synthesis is not only physical (as in all other forms of recorded information transfer) but is also semiotic (as in all other forms of recorded information transfer). I do not say that semiosis requires a mind in those physical observations, nor do I have to in order to make those physical observations. I do not say so for a specific reason. That reason is that the source of the information is in question, so to make that assumption in the observations would be a logical fallacy. In other words, I do not make that assumption as a matter of evidentiary discipline, and you have used it to smuggle in a mind without addressing that same evidence.

    Now certainly I have thick enough skin to be misrepresented, and each time I am I will endeavor to straighten it out. But you represent a special case for two reasons. Firstly, we have been talking rather consistently in and around these observations since May of this year. For you to start blatantly misrepresenting me at this late date is, well, uninteresting. And secondly, you present as someone who simply cannot, or will not, remove themselves from the observations. And that is an argument that I must concede; I cannot argue against it.

  24. I have to admit that I didn’t follow the conversation that closely, but did anyone actually explain why the onion has more DNA than the human?

    Was the answer to that question ever really relevant?

    (Sorry. I often feel, in these discussions, that I have to apologize for being stupid or naive.)

  25. The “onion test”, as it’s called, is a challenge to people who assert that there ain’t no such thing as ‘junk DNA’. Because if all DNA is functional, what the heck does an onion do with 5× as much DNA as Homo sapiens? In most/all other contexts, the onion test isn’t particularly useful or relevant.

  26. Cubist: Because if all DNA is functional, what the heck does an onion do with 5× as much DNA as Homo sapiens?

    Or even as other closely related and largely indistinguishable onions, apparently.

  27. I will try to get to commenting on UBP’s response, but it’s a tough week this week!

    (And it’s a long post…..)

  28. “In genetic terms, “functional” is “makes a difference to the phenotype”.”

    I prefer to think that it takes more than making a difference to the phenotype for something to be considered functional. Something functional should also have a significant positive effect on fitness.

  29. Nothing of substance to add, but I am struck by how purposefully people like UBP avoid simply discussing their “information” issues in terms of genetics/biology, and instead rely almost entirely on non-biological analogies and ‘logical’ arguments. It is almost as if… they can’t discuss it in a relevant way. UBP is a big fan of David Abel. You know David Abel – David L. Abel, Director
    The Gene Emergence Project
    Department of ProtoBioCybernetics/
    ProtoBioSemiotics
    The Origin of Life Science Foundation, Inc.

    Odd that there is even a department at this “foundation” which resides in Abel’s house. But what is REALLY impressive is Abel’s publication list, which can be found here:
    http://davidlabel.blogspot.com/
    Looks like he has a WHOLE bunch of high-falutin high tech science papers, don’t it? Please, look carefully at the list. Maybe pick the title of one and do a page search for it….

    Who do these people think they are fooling? I mean besides UBP?

  30. I’ve heard IDers (and others) state that information is not physical.
    If they could prove something like that…

    Is information physical?

    Maybe we should ask that before we proceed to “Is it meaningful?”

  31. Rock: Is information physical?

    No, it isn’t. It is abstract, so not physical.

    Honestly, we have several very different notions of information. One of those is physical information, and that’s roughly what physicists discuss. But for ordinary use, as in speech, information is abstract but represented physically.

    The problem with the ID use of information is that they mix various meanings of “information” together, and somehow manage to make incoherent nonsense out of it.

    They want to say that DNA is information. When they say that, they are dealing with physical information. But then they want to say that information results from the intentions of an intelligence, yet that applies to abstract (non-physical) information, not to physical information.

  32. Information is physical. If it’s measurable, it’s physical. If it’s meaningful, it’s physical. If it’s “abstract,” “intentional,” “symbolical,” etc. – if it’s anything at all – it’s physical.

    I thought the website linked to your name, Neil Rickert, was interesting.

  33. Rock: I thought the website linked to your name, Neil Rickert, was interesting.

    Thanks. Maybe I should post something there about why information is abstract.

    I’m a bit puzzled what you mean by “physical” – particularly since you say that abstract implies physical. To me, abstract implies fictional.

  34. By “physical” I mean I understand it, whatever “it” is, at least in part, in terms of the physical theories I was taught in school. By “abstract” I mean a representation (e.g., a description or explanation) that elides information. I suppose one could call that “fiction.” In which case all scientific theories are fictions.

  35. Rock,

    Strange. As I use the terms, “The cat is on the mat” is a representation but is not abstract, though it might be false. It is not abstract because cats and mats are real things that people talk about. However “2+2=4” is abstract, because neither 2 nor 4 is a real thing – both are mental constructs.

    If you say “The cat is on the mat” then, assuming it to be true, you have informed me of the state of the cat relative to the mat. There’s nothing abstract about that. But when you say that the sentence contains information, then you have used the term “information” as the name for a useful fiction, and that is abstract. Coining such a term and treating what is thereby named as real is generally known as reification.
