Design as the Inverse of Cognition

     Several regulars have requested that I put together a short OP and I’ve agreed to do so out of deference to them. Let me be clear from the outset that this is not my preferred course of action. I would rather discuss in a more interactive way so that I can learn from criticism and modify my thoughts as I go along. OPs are a little too final for my tastes.
      I want to emphasize that everything I say here is tentative and is subject to modification or withdrawal as feedback is received.
      It’s important to understand that I speak for no one but myself; it is likely that my understanding of particular terms and concepts will differ from that of others with an interest in ID. I also want to apologize for the generally poor quality of this piece: I am terrible at detail, and I did not put in the effort I should have, due mainly to laziness and lack of desire.
  With that out of the way:
Background
     For the purpose of this discussion I would like to expand upon the work of Phil Maguire, found here, and stipulate that cognition can be seen as lossless data compression in which information is integrated in a non-algorithmic process. The output of this process is a unified, coherent whole, an abstract concept that from here forward I will refer to as a specification/target. Maguire’s work thus far deals with unified consciousness as a whole, but I believe his insights are equally valid when dealing with the integrated information associated with individual concepts.
     I am sure that there are those who will object to the understanding of cognition that I’m using, for various reasons, but in the interest of brevity I’m treating it as an axiomatic starting point here. If you are unwilling to accept this proviso for the sake of argument, perhaps we can discuss it later in another place instead of bogging down this particular discussion.
     From a practical perspective cognition works something like this: in my mind I losslessly integrate information that comprises the defining boundary attributes of a particular target; for instance, “house” has such information as “has four walls”, “waterproof roof”, “home for family”, “warm place to sleep”, as well as various other data, integrated into the simple unified “target” of a house that exists in my mind. The process by which I do this cannot be described algorithmically. From the outside it is a black box, but it yields a specified target output: the concept of “house”.
     Once I have internalized what a house is, I can proceed to categorize objects I come across into two groups: those that are houses and those that are not. You might notice the similarity of this notion to the Platonic forms, in that the target House is not a physical structure existing somewhere but an abstraction.
Argument
     With that in mind, it seems reasonable to me to posit that the process of design would simply be the inverse of cognition.
    When we design something we begin with a pre-existing specific target in mind, and through various means we attempt to decompress its information into an approximation of that target. For instance, I might start with the target of house and through various means proceed to approximate the specification I have in my mind into a physical object. I might hire a contractor, nail and cut boards, etc. The fruit of my labor is not a completed house until it matches the original target sufficiently to satisfy me. However, no matter how much effort I put into the approximation, it will never completely match the picture of an ideal house that I see in my mind. This is, I believe, because of the non-algorithmic nature of the process by which targets originate. Models can never match their specifications exactly.
   Another good example of the designing process would be the act of composing a message.
    When I began to write this OP I had an idea of the target concept I wanted to share with the reader, and I have proceeded to decompress that information in a way that I hoped could be understood. If I am successful, then after some contemplation a target will be present in your mind that is similar to the one that exists in mine. If the communication were perfect, the two targets would be identical.
    The bottom line is that each designed object is the result of a process that has at its heart an input that is the result of the non-algorithmic process of cognition (the target). The tee-shirt equation would look like this:
CSI = NCF
    Complex Specified Information is the result of a noncomputable function. If the core of the design process (CSI) is non-computable, then the process in its entirety cannot be completely described algorithmically.
    This insight immediately suggests a way to objectively determine whether an object is the result of design. Simply put, if an algorithmic process can fully explain an object then it is not designed. I think this is a very intuitive conclusion; I would argue that humans are hardwired to tentatively infer design for processes that we can’t fully explain in a step-by-step manner. The better we can explain an object algorithmically, the weaker our design inference becomes. If we can completely explain it in this way then design is ruled out.
     At some point I hope to describe some ways that we can be more objective in our determinations of whether an object/event can be fully explained algorithmically, but as a lot of ground has been covered here already, I will put that off for a bit. There are also several questions that will need to be addressed before this approach can be justifiably adopted generally, such as how comprehensive an explanation must be to rule out design or, conversely, when we can be confident that no algorithmic explanation is forthcoming.
    If possible I would like to explore these in the future, perhaps in the comments section. It will depend on the tenor of feedback I receive.
peace

923 thoughts on “Design as the Inverse of Cognition”

  1. Erik says,

    Please give some strings a go and report the results, fifthmonarchyman.

    I say,

    I already have, and I’m happy to report that in preliminary tests the method works just fine.

    I haven’t done anything as rigorous as double-blind studies. I just plug in a string along with a model that is close to it and see if I can tell the difference. So far so good.

    Several people have said that they will contribute strings for me to try. I assume they will arrive shortly.

    But there is no need to wait on me. My method is not difficult to duplicate. I would love it if others would give it a go and see what they discover.

    I will be happy to share my spreadsheet to make it easy if anyone is interested.

    peace

  2. Petrushka,

    2. is demonstrably false. You can cajole your brain to recall old phone numbers and names. Everyone has had that experience of trying to recall a name, and hours or days later it pops up suddenly.

    So your brain does what you ask it to do. It is a damn good secretary that never throws stuff out, but categorizes information according to a priority list, which YOU set.

    To further drive the point home, there is evidence that hypnotists are able to bring to the conscious mind complete memories with audio/visual/sensory aspects intact. So your brain records EVERYTHING and destroys NOTHING.

    So it seems it is the conscious mind’s ability to RETRIEVE memories that may degrade over time, but the brain’s redundancies ensure NO memory is ever lost.

    Therefore, the brain functions losslessly.

    petrushka:
    1. Your concept of lossy storage is hopelessly muddled.
    2. Brains are not in any sense of the word lossless.
    3. Starting with premises that are demonstrably wrong is not a good start.
    4. You cannot distinguish pi from an algorithmically derived approximation by comparing finite strings.
    5. None of this has any bearing on how brains work or how evolution works.

  3. Patrick says,

    keiths, if I remember correctly, already pointed out that any sonnet can be produced by an algorithm. This is based on the observation that all sonnets are of finite length and any finite string can be produced algorithmically.

    I say,

    Just to reiterate what I explained to keiths. I am not saying that a particular string can not be produced by an algorithm. I’m saying that strings representing designed objects can not be “explained” algorithmically.

    What this means is that in the case of designed objects I can distinguish the original string from an algorithmically produced string that is close to it.

    peace

  4. fifthmonarchyman,

    Just to reiterate what I explained to keiths. I am not saying that a particular string can not be produced by an algorithm. I’m saying that strings representing designed objects can not be “explained” algorithmically.

    Please provide an operational definition of “explain” as you are using the word. How could we reach agreement that a particular algorithm objectively “explains” a particular result?

  5. Patrick says,

    Please provide an operational definition of “explain” as you are using the word.

    I say,

    Great question! It’s the sort of question I would have liked to see about 100 comments ago, because it helps me to think deeply about this stuff. Please ask follow-up questions so we can come to an understanding. This is important.

    Informally by “explain” I mean to describe what it is that makes a particular string different from other strings.

    The algorithm {1.14159265359 + 2} does not explain the string 3.14159265359 because it has nothing whatsoever to do with the target that the string is modeling (Pi).

    What I mean to say is that the reason {1.14159265359 + 2} is not explanatory is that there is more information in the string 3.14159265359 than there is in the algorithm. The extra information is in fact the target itself (Pi).

    In other words there is more separating 3.14159265359 from 1.14159265359 than just the number 2.

    You might want to take a look at my first thread here to get a feel for my ideas on the relationship between algorithms and targets.

    you say

    How could we reach agreement that a particular algorithm objectively “explains” a particular result?

    I say

    We would compare the particular string with a close one that is produced algorithmically and see if we can easily distinguish the two strings. If we can quickly tell them apart, then the algorithm does not explain the string.

    for example

    In order to test whether the string 8675309 could be explained by the algorithm {8675308+1}, we might run the algorithm {8675308+.99}, which yields 8675308.99.

    This result is very close to the original string but it does not in any way help to describe that string.

    Just to verify this, we could enter the second string into a phone keypad and see if Jenny (or anyone else) answers.

    Again, the reason that the algorithm does not explain the string is that it has nothing to do with the target the original string is modeling.
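
    To make that concrete, here is a tiny sketch in Python. It is only my own illustration (the variable names and the checks are mine, not a formal test): numeric closeness is not the same thing as capturing the target.

    original = "8675309"            # the target string (Jenny's number)
    close = str(8675308 + 0.99)     # the "close" algorithmic output: '8675308.99'

    # Numerically the two are separated by only about 0.01 ...
    print(abs(float(original) - float(close)))

    # ... but the close string has lost the target entirely: it is not even dialable.
    print(original.isdigit())       # True  -- a dialable 7-digit string
    print(close.isdigit())          # False -- the decimal point breaks it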

    anxiously awaiting your response

    peace

  6. fifthmonarchyman,

    Now I’m really confused. I wouldn’t consider 2 + 1.14159 to be an algorithm (“a process or set of rules to be followed in calculations or other problem-solving operations”); it’s just an equation.

    How could we reach agreement that a particular algorithm objectively “explains” a particular result?

    We would compare the particular string with a close one that is produced algorithmically and see if we can easily distinguish the two strings.

    The point that keiths originally made is that any finite string can be generated algorithmically. There is no difference between the algorithmically generated sonnet string, for example, and the one produced originally by a poet.

    Again, the reason that the algorithm does not explain the string is that it has nothing to do with the target the original string is modeling.

    This seems to be a separate, but equally incorrect claim. What target is a sonnet modeling? What does it even mean to model a target?

    Going up one level, what does any of this have to do with ID?

  7. Hey Patrick

    You say,

    it’s just an equation.

    I say,

    Think of it as a one-step process: start with 1.14159, then add 2.

    you say,

    There is no difference between the algorithmically generated sonnet string, for example, and the one produced originally by a poet.

    I say,

    Except in their origin, and their origin is what we are interested in here.

    you say,

    What target is a sonnet modeling?

    I say,

    The sonnet is not modeling anything.

    The sonnet is the target, and the designer is using the algorithm to model it.

    you say

    What does it even mean to model a target?

    I say,

    from the OP quoting myself

    quote:

    When we design something we begin with a pre-existing specific target in mind, and through various means we attempt to decompress its information into an approximation of that target. For instance, I might start with the target of house and through various means proceed to approximate the specification I have in my mind into a physical object.

    end quote:

    you say,

    what does any of this have to do with ID?

    I say,

    It’s pretty simple really

    ID is all about detecting design. Design is all about modeling targets/specifications.

    Peace

    PS

    Again, thanks for the interaction; please don’t give up on the discussion yet. I apologize if I confuse you. I confuse me sometimes as well when I’m trying to organize and tidy up my ideas.

    I promise there is a method to the madness. Keep asking questions

  8. fifthmonarchyman,

    I’d like to see your response to Steve. (Not that it should matter, but petrushka just had cataract surgery, and will be out for a while.)

  9. OK Tom,

    I’ll pick up on this claim:

    Steve: there is evidence that hypnotists are able to bring to the conscious mind complete memories with audio/visual/sensory aspects intact. So your brain records EVERYTHING and destroys NOTHING.

    I wonder if Steve has heard of false memory syndrome, a problem that is significant enough to have a foundation dedicated to the issue.

  10. fifthmonarchyman says,

    It’s pretty simple really / ID is all about detecting design. Design is all about modeling targets/specifications.

    I say,
    IDT is about detecting/discovering/acknowledging/etc. ‘Design’ not ‘design.’ The capitalisation makes a big difference to the ideology.

    If IDT were really about lowercase ‘designs’ then it would be able to study lowercase ‘designers’ and not insist that the uppercase (singular) ‘Designer’ is not researchable. But that kind of ‘design’ is not what IDT is about.

    The ‘method’ to IDism’s madness is equivocation between uppercase Intelligent Design and lowercase intelligent design. Blatant, obvious & even reckless abuse of communicative clarity. That is why religious theists, as William Lane Craig recently said, after other theists, particularly Catholics like Feser and Barr, but also protestants like Owen Gingerich, reject uppercase ‘Intelligent Design’ while still accepting lowercase ‘intelligent design.’

    The entire discourse with IDists would be simplified if this point could be conceded. Will fifthmonarchyman concede it?

  11. Gregory says

    The entire discourse with IDists would be simplified if this point could be conceded. Will fifthmonarchyman concede it?

    I say,

    I’m not sure what point it is you want me to concede. Can you phrase it in one sentence?

    I will say that as far as capitalization goes I am terrible at it. I never know when it is appropriate.

    Often I will capitalize a word then later use the same word in the same context and not capitalize it. Therefore I have no opinion one way or another if you want to capitalize Designer or any other word for that matter.

    It makes no difference whatsoever with my method as far as I can tell.

    I will also say I would like to be able to discuss this stuff without it constantly being about culture warfare.

    Tom English says,

    But fifthmonarchyman might clear up a lot for all of us if he took time out to explain himself to a would-be ally.

    I say,

    I have no problem with that. If a “would-be ally” has questions I’d be happy to answer them.

    peace

  12. Alan Fox says,

    I wonder if Steve has heard of false memory syndrome

    I say,

    Not sure if Steve has heard about that, but I have. It’s important to keep in mind that the truthfulness of a particular memory is irrelevant to the question of whether cognition is a non-lossy process.

    For all I know I might be a brain in a vat and all my memories are false but they are my memories none the less.

    peace

  13. Tom English says.

    I’d like to see your response to Steve.

    I say,

    not sure what kind of response you would like.

    How about “GO Team”?

    peace

    fifthmonarchyman: It’s important to keep in mind that the truthfulness of a particular memory is irrelevant to the question of whether cognition is a non-lossy process.

    Sorry, I was just pointing out that Steve’s claim was contrary to fact. I doubt human memory is reliable either on detail or after time. I think we reinforce and embroider memories by re-remembering them.

    For all I know I might be a brain in a vat and all my memories are false but they are my memories none the less.

    I don’t think philosophical brain-in-a-vat thought experiments are useful.

  15. Hey Alan,

    you say,

    I doubt human memory is reliable either on detail or after time

    I say,

    Here is something timely you might find interesting.

    http://www.sciencedaily.com/releases/2015/05/150528142815.htm

    Perhaps he is more correct than you realize. I would also argue that episodic memory is different from memory of concepts or perceptions. There is a lot we don’t know.

    I’m agnostic about the whole thing.

    Cognition is different than memory anyway, so the reliability of memory does not make any difference to what we are discussing here. We have memories that never enter our conscious awareness; that should settle the matter.

    The claims of the article are about how we integrate the information from our memories and other sources into unified whole lossless data compressions.

    you say,

    I don’t think philosophical brain-in-a-vat thought experiments are useful.

    These speculations are not just philosophical thought experiments but cutting-edge science, especially when we think about the implications of the multiverse.

    again check it out.

    http://www.newscientist.com/article/mg22229692.600-quantum-twist-could-kill-off-the-multiverse.html#.VWeNyEarHeY

    We may not like the idea but we need to at least be ready to entertain the possibility. Not to do so is to hide our head in the sand.

    peace

  16. fifthmonarchyman: Here is something timely you might find interesting.

    [link] Perhaps he is more correct than you realize.

    I’m not clear who the “he” is, unless you mean Tonegawa.

    I would also argue that episodic memory is different than memory of concepts or perceptions. There is a lot we don’t know.

    Absolutely agree on not having a complete picture of how neurons firing in the brain explain human awareness, reasoning and memory.

    I’m agnostic about the whole thing.

    Cognition is different than memory anyway so the reliability of memory does not make any difference to what we are discussing here. We have memories that never enter our conscious awareness that should settle the matter.

    The claims of the article are about how we integrate the information from our memories and other sources into unified whole lossless data compressions.

    Unfortunately, I couldn’t access the article (abstract here) so I can’t comment.

  17. Alan Fox says,

    Unfortunately, I couldn’t access the article (abstract here) so I can’t comment.

    I say.

    I meant the article I linked in the OP

    Sorry about the confusion

    peace

  18. fifthmonarchyman: I meant the article I linked in the OP

    I see. This one. Skimming, I see several references to Giulio Tononi and this paper, which covers a lot of ground. Did you read it? My math is lacking, so the “computability of consciousness” is an exercise I’d have to take on trust. Tononi seems to ask the prerequisite question of what consciousness is in neurological terms, as I would have thought you needed a clear idea of what, physically, consciousness is before you can quantify it.

    ETA Regarding the nature of consciousness, there was a thread a few months back, discussing Michael Graziano’s argument that “consciousness” is poorly defined. See here for instance.

  19. On glancing at that thread, I was also reminded of Michael Tomasello, who, if Wikipedia can be trusted, “argues that children grow up and learn in a very interactive environment that is facilitated by their caregivers. Tomasello gives an example of the impact of environment when he cites how a child being raised on a desert island, isolated from social interaction, would have cognition similar to that of apes.”

    Maybe drifting off topic, but Frans de Waal has a TED talk on primate cognition which might interest you. To see an indignant capuchin monkey, start around 14 minutes in. Here

  21. Hey Alan,

    you said,

    Tomasello gives an example of the impact of environment when he cites how a child being raised on a desert island, isolated from social interaction, would have cognition similar to that of apes.”

    I say,

    It is off topic, but I would say that social interaction plays a big part in what makes us human, especially in regard to things like language. Language in turn is integral to the process of cognition.

    A weird quirk with the understanding of design I’m proposing is that the interaction between the designer and the observer can be seen as a sort of communication with the designed object as the signal and the target as the message that is being conveyed.

    Anyway, that is a ramble for later; first I need to see if my general approach is sound.

    peace

  22. All,

    Does anyone have any technical objections to my method?
    Do you understand it?
    Are you comfortable with an R-squared of 80% being called “close”?
    Should there be a limit on the number of guesses an observer is given before you call off the trial and fail to reject the null?

    thanks in advance

    peace

    fifthmonarchyman: Does anyone have any technical objections to my method?

    Technical objections — no.

    However, I am deeply skeptical as to whether it even makes sense. But I am attempting to follow along, in the hope that you will eventually tell us enough about what you are doing, that I can begin to make sense of it.

  24. Neil Rickert says,

    However, I am deeply skeptical as to whether it even makes sense

    I say,

    Could you give a short summary of what you understand my method to be and why I employ it, so that I can get a feel for what I have not communicated properly?

    Thanks
    peace

  25. fifthmonarchyman:
    All,

    Does anyone have any technical objections to my method?
    Do you understand it?
    Are you comfortable with an R-squared of 80% being called “close”?

    Yes, I have technical objections, but to be able to voice them, I’d need to understand where you are heading. So, the main objection is that you have not told us anywhere near enough about your “method” for me to determine whether it’s even a method.

    However, I have technical praise too. You have apparently learned to blockquote. This is most laudable. Blockquote goes a long way.

    And I have much to say about lonely children on desert islands. There are some who think that a lonely child would become an above-average human, not less. Given certain preconditions, of course, which don’t always apply. This is much more interesting. Let’s discuss!

  26. Maguire et al.: “Since lossy integration would necessitate continuous damage to existing memories … ”

    They are clearly discussing information integration in terms of memories. It seems their entire thrust is that a limited amount of memory space can’t store unlimited information, therefore, if consciousness is non-lossy, then consciousness must be non-computable or some such. Of course, integration of information is lossy in humans, so it’s a futile proof.

    More particularly, they define integrating function such that “the knowledge of m(z) does not help to describe m(z’), when z and z’ are close”, which is exactly contrary to how people learn and develop understanding.

    They also state, “An integrating function’s output is such that the information of its two (or more) inputs is completely integrated.” But we know from simple observation that people integrate information incompletely. In other words, information is almost always lost during the process of learning. People integrate new knowledge within the parameters of what they already know.

    fifthmonarchyman: Could you give a short summary of what you understand my method to be and why I employ it, so that I can get a feel for what I have not communicated properly?

    I have not found a method yet.

    You are talking about computability. Normally, one asks computability questions about numbers or about functions. But you have not yet narrowed it down enough so that I can tell which of those.

  28. Well, Zac is here.

    There goes the neighborhood.

    Zac, since we have repeatedly demonstrated an inability to communicate with each other, is there any way I might persuade you to sit this one out for a while? I promise I’ll get back to you once everyone else has had a crack at me.

    peace

  29. Hey Erik,

    you say

    So, the main objection is that you have not told us anywhere near enough about your “method” for me to determine whether it’s even a method.

    Here is a short step-by-step summary from a hundred comments or so ago.

    It’s all based on the idea that an algorithmic process can’t explain designed objects.

    1. Represent an object to be evaluated as a numeric string.

    2. Create a randomized copy of the string.

    3. Run it through the “line graph game” to see if the observer can consistently identify the original string with feedback.

    a. If no, we cannot reject the null hypothesis that the object is effectively random at this measurement resolution.

    b. If yes, continue to step four.

    4. Run the random string through a simple EA until the R-squared is above a predetermined threshold (usually 80%).

    5. Repeat step 3 and see if the observer can consistently identify the original string with feedback.

    a. If no, we cannot reject the null hypothesis that the object is not designed at this measurement resolution.

    b. If yes, we can reject the null hypothesis.

    Pretty simple, right?

    I can elaborate on any step in the process if needed; just ask. In the meantime, here is a rough sketch of the mechanical steps in code.
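
    This is only my own illustration of steps 1, 2 and 4 in Python (the toy EA, the function names, and the example numbers are mine, not the actual spreadsheet), and the line graph game itself in steps 3 and 5 still needs a human observer.

    import random

    def r_squared(original, candidate):
        # Coefficient of determination of the candidate series against the original.
        mean = sum(original) / len(original)
        ss_tot = sum((y - mean) ** 2 for y in original)
        ss_res = sum((y - c) ** 2 for y, c in zip(original, candidate))
        return 1 - ss_res / ss_tot

    def evolve_model(original, threshold=0.80, seed=0):
        # Step 2: start from a randomized copy of the string.
        # Step 4: nudge it with a toy EA until R-squared passes the threshold (80% by default).
        rng = random.Random(seed)
        candidate = list(original)
        rng.shuffle(candidate)
        while r_squared(original, candidate) < threshold:
            i = rng.randrange(len(candidate))
            trial = list(candidate)
            trial[i] += 0.1 * (original[i] - trial[i])  # move one point slightly toward the original
            if r_squared(original, trial) > r_squared(original, candidate):
                candidate = trial                       # keep only improvements
        return candidate

    # Step 1 stand-in: an "object" already represented as a numeric string.
    original = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
    model = evolve_model(original)
    print(round(r_squared(original, model), 3))  # at least 0.8

    Steps 3 and 5 would then be to graph the original and the evolved model and see whether an observer can consistently pick out the original, with feedback.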

    peace

  30. Neil Rickert says,

    You are talking about computability. Normally, one asks computability questions about numbers or about functions. But you have not yet narrowed it down enough so that I can tell which of those.

    I say,

    I’m taking it as axiomatic that cognition is not computable. I haven’t dwelt here on why I believe this is the case. Zac’s comment demonstrates that it will be quite a task to convince you all of that.

    Since the non-computability of consciousness is not an idea that originated with me, I’d really like to avoid that discussion at this point, so that I can focus on my own ideas.

    Keep in mind I’m not interested in convincing you of anything but in organizing and tidying up what I think.

    For those purposes, what is important is not whether a string is computable (every finite string is computable) but whether it can be “explained” by an algorithm. That means we need to look at functions and not numbers specifically.

    peace

  31. fifthmonarchyman: I’m taking it as axiomatic that cognition is not computable.

    I take it as obvious that water is not computable.

    In fact I take it as obvious that water is not the kind of thing about which it makes sense to ask “is it computable.”

    Likewise, I think that cognition is not the kind of thing about which it makes sense to ask if it is computable. Presumably you disagree with that. It would help if you would say enough about what you mean by “cognition”, so that I can at least see why you think the question of its computability even arises.

    For those purposes, what is important is not whether a string is computable (every finite string is computable) but whether it can be “explained” by an algorithm.

    It would also be helpful to have some clarification of what “explained by an algorithm” is supposed to mean.

  32. Hey Neil,

    I know you said you did not finish reading the paper. Don’t you think if you want to talk about this that badly you should at least make that much of an effort?

    Perhaps if you would tell me what part you find to be nonsensical, I could get a handle on what you are asking. To me cognition and computation are at least semantically related concepts. I have no idea why you believe this not to be the case.

    Others here seem to think that the paper is not incoherent but simply wrong. It should be clear that an idea can’t be complete nonsense and at the same time demonstrably false.

    Zac and others seem pretty convinced that consciousness is indeed computable; perhaps you should explain to them why the entire conversation is meaningless.

    It might be the case that our differing worldviews simply make that sort of conversation impossible.

    I sure hope not

    peace

  33. fifthmonarchyman: I know you said you did not finish reading the paper. Don’t you think if you want to talk about this that badly you should at least make that much of an effort?

    They are at least clear enough that they believe that cognition is computation. I had not previously heard that from you.

    I’ll finish reading it if you post something that is compelling enough to warrant reading it in full.

    Personally, I don’t believe that cognition is computation. Two thousand years of philosophy of mind and 60 years of AI can be wrong. I’ve spent some effort looking at the problems that an organism needs to solve, and at ways of solving those problems. And computation does not appear to be at all useful to an organism for solving its basic problems.

  34. Neil Rickert says.

    Personally, I don’t believe that cognition is computation.

    I say,

    That is my opinion as well. What are we arguing about again?

    you say,

    And computation does not appear to be at all useful to an organism for solving its basic problems.

    I say,

    Again I agree; I think we might be in the minority here.

    Would you also agree that designed objects are not likely the result of computation?

    from here

    https://www.google.com/search?q=computation+&ie=utf-8&oe=utf-8

    quote:
    Computation is a process following a well-defined model understood and expressed as, for example, an algorithm, or a protocol.
    end quote:

    peace

  35. fifthmonarchyman: That is my opinion as well. What are we arguing about again?

    I’m not sure that we have been arguing all that much. I’m still not sure what this thread is attempting to establish.

    Would you also agree that designed objects are not likely the result of computation?

    It is hard to know what that is asking. Designed things are not likely the result of computation alone. However, the processor in your computer was largely (but not entirely) designed by computation.

    “Computation is a process following a well-defined model understood and expressed as, for example, an algorithm, or a protocol.” (wikipedia)
    [I added the quotes and the “(wikipedia)” there, as implied by the context]

    People who say that cognition is computation are not claiming that there is a well-defined computational model. Rather, they are saying that when we do eventually understand what the brain is doing, it will turn out to be computation. And my disagreement is with that, rather than with whether there is a known model.

    So maybe you are trying to argue: “if it is computation, then it must be a non-computational computation” as a way of showing that it is not computation. But that’s where I’m not clear on what you are trying to show.

  36. Neil Rickert says,

    they are saying that when we do eventually understand what the brain is doing, it will turn out to be computation. And my disagreement is with that, rather than with whether there is a known model.

    I say,

    That is my problem as well. I think we can be pretty sure that computation is not all that is going on here.

    You say,

    Designed things are not likely the result of computation alone.

    I say,

    I agree and would argue that this insight suggests a way to differentiate between things that are designed and things that are not.

    That is what I’m trying to get at with my method.

    maybe you are trying to argue: “if it is computation, then it must be a non-computational computation” as a way of showing that it is not computation.

    No, I’m trying to say that it can’t be “explained” by computation alone.

    What I need to do is come up with a good definition for what I mean by that. I know what I want to say and I know what it looks like.

    I’m just not sure of the correct verbiage as of yet.

    peace

    fifthmonarchyman: Zac and others seem pretty convinced that consciousness is indeed computable; perhaps you should explain to them why the entire conversation is meaningless.

    Didn’t say that, simply that the presumptions in the paper don’t reflect what we know of human memory and cognition.

    Maguire et al: According to the integrated information theory, when we think of another person as conscious we are viewing them as a completely integrated and unified information processing system, with no feasible means of disintegrating their conscious cognition into disjoint components. We assume that their behaviour calls into play all of their memories and reflects full coordination of their sensory input. We now prove that this form of complete integration cannot be modelled computationally.

    Scientists have shown that the human mind is made up of a network of components and doesn’t have “full coordination of its sensory input”.

    fifthmonarchyman: No, I’m trying to say that it can’t be “explained” by computation alone.

    Perhaps, but you’re not likely to prove that mathematically based on compressibility of memory or with axioms that don’t reflect what we know about human cognition.

  38. Neil Rickert said,

    It would also be helpful to have some clarification of what “explained by an algorithm” is supposed to mean.

    I say,

    Let me give that a go. Slightly formally, what I mean by “explain” is:

    “to describe an object so that an observer can ascertain the target it models”

    check this out for an example of what I’m talking about

    http://www.evolvingai.org/fooling

    The algorithms that created these images do not in any way help to describe the objects that they are supposed to approximate, despite the fact that a computer cannot distinguish the algorithmically produced image from the original one.

    This objective but not quantifiable difference between the algorithmically produced images in the study and the original images of real things is what my method is attempting to get at.

    Does that make sense?

    peace

  39. Zac says,

    Scientists have shown that the human mind is made up of a network of components and doesn’t have “full coordination of its sensory input”.

    I say,

    ZAC has shown REPEATEDLY an inability to even attempt to understand what the other side is trying to say.

    I’m sorry that this is the case; perhaps it will change in the future, but I’ve seen no evidence of that in his latest spiel.

    Until I do, I will assume communication with him is impossible.

    To attempt once again to go down that dead-end road would not be fair to everyone else and would be counterproductive to what I want here, which is to clarify and tighten up my own ideas.

    I hope everyone understands

    If anyone thinks his comments have merit, I would ask that they please rephrase them in a way that is germane to this discussion, and I will be happy to discuss.

    Thanks in advance

    peace

  40. fifthmonarchyman:
    It’s all based on the idea that an algorithmic process can’t explain designed objects.

    1. Represent an object to be evaluated as a numeric string.

    2. Create a randomized copy of the string.

    3. Run it through the “line graph game” to see if the observer can consistently identify the original string with feedback.

    a. If no, we cannot reject the null hypothesis that the object is effectively random at this measurement resolution.

    b. If yes, continue to step four.

    4. Run the random string through a simple EA until the R-squared is above a predetermined threshold (usually 80%).

    5. Repeat step 3 and see if the observer can consistently identify the original string with feedback.

    a. If no, we cannot reject the null hypothesis that the object is not designed at this measurement resolution.

    b. If yes, we can reject the null hypothesis.

    Two questions:

    What exactly does this show?

    How is this in any way objective? Different observers are going to give different results.

    Bonus question: If I give you six strings of over 1000 bits each, how are you going to visually compare them with any other bit string of similar length to “consistently identify” similarities?

  41. Zachriel: “Humans are susceptible to optical illusions. Such illusions are designed to hack the way our brains see the world. Similarly, these images hack the way neural networks see the world.”

    Indeed. I read the paper and my first thought was that it would be straightforward to train a neural network to categorize the curves just as the human contestants were trained.

    Fifthmonarchyman — What does this paper have to do with your argument? If, as I contend, a software system could learn the same task, would that impact your argument in any way?

  42. fifthmonarchyman:
    Let me give that a go. Slightly formally, what I mean by “explain” is:

    “to describe an object so that an observer can ascertain the target it models”

    As has been repeatedly pointed out in this thread, any finite string can be generated algorithmically. Since a sonnet, for example, is a finite string, it can be generated algorithmically. The output of the algorithm is indistinguishable from the “target” of the original sonnet. An observer can easily ascertain that target.

    So, why do you not consider the algorithm to have “explained” the output?

    Not sure if Steve has heard about that, but I have. It’s important to keep in mind that the truthfulness of a particular memory is irrelevant to the question of whether cognition is a non-lossy process.

    For all I know I might be a brain in a vat and all my memories are false but they are my memories none the less.

    Again, you can’t just assume that the brain stores memories in a non-lossy fashion. That is completely at odds with any number of empirical observations, as noted throughout this thread. If that is essential to your argument, whatever that may turn out to be, you need to support it and address the conflicting evidence.

  44. Patrick says,

    The output of the algorithm is indistinguishable from the “target” of the original sonnet

    I say,

    This is incorrect. The output of the algorithm is simply pixels on a screen.
    It’s not even close to the target.

    peace

  45. fifthmonarchyman,

    for more of what I’m talking about check it out.

    http://theshrug.net/everyone-failed-to-ride-this-bicycle-the-reason-why-is-mind-boggling/

    This is a pretty good demonstration of how human cognition is nonlossy.

    You’re going to have to show your work here. To me it argues for the opposite — if our brains were non-lossy he could have learned the new riding style almost instantly as his brain integrated the new feedback. Instead it took him eight months.

  46. Patrick says,

    What exactly does this show?

    I say,

    It shows that “the knowledge of m(z) does not help to describe m(z’), when z and z’ are close”.

    How is this in any way objective? Different observers are going to give different results.

    I think this is a great question; again, it shows that you understand me.

    I don’t think it is the case that observers will give different results, but we need to test to be sure.

    you say,

    If I give you six strings of over 1000 bits each, how are you going to visually compare them with any other bit string of similar length to “consistently identify” similarities?

    I say,

    check out the paper that inspired my method

    http://arxiv.org/pdf/1002.4592.pdf

    If you’d like, I can send you the spreadsheet I use and you can try it yourself. Just let me know how to contact you.

    peace
