Design as the Inverse of Cognition

     Several regulars have requested that I put together a short OP and I’ve agreed to do so out of deference to them. Let me be clear from the outset that this is not my preferred course of action. I would rather discuss in a more interactive way so that I can learn from criticism and modify my thoughts as I go along. OPs are a little too final for my tastes.
      I want to emphasize that everything I say here is tentative and is subject to modification or withdrawal as feedback is received.
      It’s important to understand that I speak for no one but myself; it is likely that my understanding of particular terms and concepts will differ from that of others with an interest in ID. I also want to apologize for the generally poor quality of this piece: I am terrible at detail, and I did not put in the effort I should have, due mainly to laziness and lack of desire.
  With that out of the way:
Background
     For the purpose of this discussion I would like to expand upon the work of Phill Mcguire found here and stipulate that cognition can be seen as lossless data compression, in which information is integrated in a non-algorithmic process. The output of this process is a unified, coherent abstract concept that from here forward I will refer to as a specification/target. Mcguire’s work thus far deals with unified consciousness as a whole, but I believe his insights are equally valid when dealing with integrated information as associated with individual concepts.
     I am sure that there are those who will object to the understanding of cognition that I’m using for various reasons, but in the interest of brevity I’m treating it as an axiomatic starting point here. If you are unwilling to accept this proviso for the sake of argument, perhaps we can discuss it later in another place instead of bogging down this particular discussion.
     From a practical perspective, cognition works something like this: in my mind I losslessly integrate the information that comprises the defining boundary attributes of a particular target. For instance, “house” has such information as “has four walls”, “waterproof roof”, “home for family”, and “warm place to sleep”, as well as various other data, integrated into the simple unified “target” of a house that exists in my mind. The process by which I do this cannot be described algorithmically; from the outside it is a black box, but it yields a specified target output: the concept of “house”.
     Once I have internalized what a house is, I can proceed to categorize objects I come across into two groups: those that are houses and those that are not. You might notice the similarity of this notion to the Platonic forms, in that the target House is not a physical structure existing somewhere but an abstraction.
Argument
     With that in mind, it seems reasonable to me to posit that the process of design would simply be the inverse of cognition.
    When we design something we begin with a pre-existing specific target in mind, and through various means we attempt to decompress its information into an approximation of that target. For instance, I might start with the target of house and through various means proceed to approximate the specification I have in my mind into a physical object. I might hire a contractor, nail and cut boards, etc. The fruit of my labor is not a completed house until it matches the original target sufficiently to satisfy me. However, no matter how much effort I put into the approximation, it will never completely match the picture of an ideal house that I see in my mind. This is, I believe, because of the non-algorithmic nature of the process by which targets originate. Models can never match their specifications exactly.
   Another good example of the designing process would be the act of composing a message.
    When I began to write this OP I had an idea of the target concept I wanted to share with the reader, and I have proceeded to go about decompressing that information in a way that I hoped could be understood. If I am successful, after some contemplation a target will be present in your mind that is similar to the one that exists in mine. If the communication were perfect, the two targets would be identical.
    The bottom line is that each designed object is the result of a process that has at its heart an input that is itself the result of the non-algorithmic process of cognition (the target). The T-shirt equation would look like this:
CSI=NCF
    Complex Specified Information is the result of a noncomputable function. If the core of the design process (CSI) is non-computable, then the process in its entirety cannot be completely described algorithmically.
    This insight immediately suggests a way to objectively determine whether an object is the result of design. Simply put, if an algorithmic process can fully explain an object, then it is not designed. I think this is a very intuitive conclusion; I would argue that humans are hardwired to tentatively infer design for processes that we can’t fully explain in a step-by-step manner. The better we can explain an object algorithmically, the weaker our design inference becomes. If we can completely explain it in this way, then design is ruled out.
     At some point I hope to describe some ways that we can be more objective in our determinations of whether an object/event can be fully explained algorithmically, but as there is a lot of ground covered here I will put that off for a bit. There are also several questions that will need to be addressed before this approach can be justifiably adopted generally, such as how comprehensive an explanation must be to rule out design, or conversely, when we can be confident that no algorithmic explanation is forthcoming.
    If possible I would like to explore these in the future, perhaps in the comments section. It will depend on the tenor of the feedback I receive.
peace

923 thoughts on “Design as the Inverse of Cognition”

  1. fifthmonarchyman,

    It points to a way that we can distinguish between strings that is inaccessible to computers.

    It does no such thing.

    From the actual paper

    quote:
    We suggest that such novel interfaces can harness human capabilities to process
    and extract information from financial data in ways that computers cannot.
    end quote:

    Nothing in the paper or in your arguments supports the claim that distinguishing those strings is not possible with software. If you disagree, please present the actual empirical evidence and arguments, not just bald assertions.

  2. fifthmonarchyman,

    I am using the standard definition. The authors of the paper also use the standard definition and go into detail to explain exactly what they mean. That you insist on reading some other definition when you see the term is your problem, not mine.

    I suggest you use the definition that the authors use when discussing the topic of the paper.

    The way it is used in the paper is wrong, as already discussed. Human memory is not non-lossy. If you want to continue to claim that it is, you need evidence to support that claim.

  3. fifthmonarchyman,

    If you are claiming that an artifact under discussion is “explained” only when we take the causal chain back to a human mind, then your whole process is unnecessary.

    That is not my claim at all.

    My claim is that an artifact is “explained” when it points you to the target. My original definition works just fine to convey this understanding. I make no mention of a mind, human or otherwise.

    The target is nothing but a nonlossy data compression. The information in the target is decompressed in the artifact.

    Because algorithms compress data lossily, they are inadequate to explain artifacts.

    Any finite string can be produced algorithmically, without any loss in fidelity. Therefore, by your own definition, they suffice to “explain” those strings.
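    In code, this point is almost trivially demonstrated. The sketch below (illustrative only; `make_producer` is a hypothetical name, not anything from the papers under discussion) builds a degenerate algorithm that emits any given finite string verbatim, with zero loss:

```python
def make_producer(s):
    """Return a trivial algorithm whose entire job is to emit s verbatim."""
    return lambda: s

original = "any finite string at all"
producer = make_producer(original)
assert producer() == original  # reproduction is lossless by construction
```

    A lookup table or a bare print statement would serve equally well; the point is only that exact algorithmic reproduction of a finite string is always possible.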

  4. fifthmonarchyman,

    If my prediction in step 5 turns out to be true, would it result in you “abandoning any hope [of] design detection by this method”?

    Once again, if an algorithmically based process can nonlossily integrate all the information in a string, in direct contradiction of the mathematical proof offered in the original paper I linked in the OP, then yes, I would abandon any hope of design detection by this method.

    Please answer my question as posed without insinuating your own assumptions into your rephrasing of my words.

  5. fifthmonarchyman,
    One more note on your comment:

    long story short if your software can distinguish between strings and at the same time not be subject to being easily fooled by EAs I would abandon the whole enterprise.

    EAs have nothing to do with the paper you posted. I’m not even sure how you would apply them in that domain.

    The fact is that the accuracy of humans on the problem is between 70-80%. That means they are “fooled” 20-30% of the time. If software can equal or better that performance, why would it not falsify your claims about the paper?

    By the way, the financial game paper doesn’t provide any mathematical proof about integrated information.

  6. fifthmonarchyman,

    “If it exists it will turn out be the result of evolution. Because we know that everything that exists in biology is demonstrably the result of evolution”

    Quite a tight circle of logic you’ve got going there.

    No, it’s a vast collection of consilient empirical evidence. Intelligent design creationism, on the other hand, has none.

    You seem to be trying to define evolution as insufficient to account for . . . something. You need evidence, not word games.

  7. Patrick says,

    The way it is used in the paper is wrong, as already discussed. Human memory is not non-lossy.

    I say,

    Can you point to one time either I or the paper said that human memory was non-lossy?

    Of course you can’t, because that is not the claim. It never was.

    The claim is not about memory. I’m not sure how many more ways I can explain it to you.

    peace

  8. fifthmonarchyman,

    Can you point to one time either I or the paper said that human memory was non-lossy?

    From the paper:

    In particular, memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay.

    In fact, we observe that human memories do change as they are accessed.

  9. Patrick says,

    By the way, the financial game paper doesn’t provide any mathematical proof about integrated information.

    geez

    before I said:

    quote:

    if an algorithmically based process can nonlossily integrate all the information in a string in direct contradiction of the mathematical proof offered in the original paper I linked in the OP then yes I would abandon any hope of design detection by this method.

    end quote:

    Perhaps this obvious reading comprehension problem is one of the reasons you are having such difficulty with the concept of lossless data compression as expressed in the paper linked in the OP.

    peace

  10. Patrick quotes the paper

    In particular, memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay.

    I say,

    How could you possibly miss this?

    “Memory functions” is not remotely the same thing as memory.

    I have a memory of what I did last week. My “memory function” is the process I use to bring that memory into conscious awareness. My memories can decay with time, but my retrieving them into conscious awareness does not cause them to decay; instead, it prevents them from decaying.

    It’s my retrieving them (my memory function) that the paper is talking about.

    peace

  11. Patrick says,

    EAs have nothing to do with the paper you posted. I’m not even sure how you would apply them in that domain.

    There are three papers that I have linked. Each is integral to my method:

    1) talks about cognition as a non-algorithmic lossless process
    2) shows this process in action in relation to distinguishing patterns in numeric strings
    3) shows conclusively that deep learning software does not work in this way by using EAs to easily fool state of the art systems

    If we are going to discuss this, you need to look at the big picture and get a handle on what is being said.

    peace

  12. fifthmonarchyman,

    if an algorithmically based process can nonlossily integrate all the information in a string in direct contradiction of the mathematical proof offered in the original paper I linked in the OP then yes I would abandon any hope of design detection by this method.

    end quote:

    Perhaps this obvious reading comprehension problem is one of the reasons you are having such difficulty with the concept of lossless data compression as expressed in the paper linked in the OP.

    I appreciate that it’s hard presenting and defending a minority viewpoint in an online forum. You have earned a lot of respect for being willing to interact here rather than staying in the UD echo chamber.

    With that due respect in mind, I suggest that the problem is not my reading comprehension but your failure to express your argument clearly. This is why operational definitions are essential. You are using “non-lossy” in a very non-standard way. If you can’t drop the term for some reason, despite the confusion it causes and its apparent lack of value, please provide an unambiguous definition for how you are using it. If you believe it’s already defined elsewhere, copy and paste the definition that you agree with.

    The same for “algorithmic”, “integrated information”, “cognition”, and “explain” would be really helpful. The reason we’re going in circles is because I feel like I’m trying to nail jello to a wall. When I use your stated definitions, you constantly tell me that I’m not.

    To get back to the financial game paper, please give me a yes or no answer to my question. If I follow the procedure I described and get the results I predict, would that refute your approach to design detection?

    If the answer is “No”, that’s fine. We can then identify how the procedure would need to be changed. Repeatedly replying by modifying my question is getting us nowhere.

  13. fifthmonarchyman,

    In particular, memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay.

    How could you possibly miss this?

    “Memory functions” is not remotely the same thing as memory.

    I have a memory of what I did last week. My “memory function” is the process I use to bring that memory into conscious awareness. My memories can decay with time, but my retrieving them into conscious awareness does not cause them to decay; instead, it prevents them from decaying.

    It’s my retrieving them (my memory function) that the paper is talking about.

    I do not see this distinction in the paper and see no reason to read it this way. The “them” in the quoted excerpt is clearly discussing memories. See the preceding sentences:

    If the brain integrated information in this manner, the inevitable cost would be the destruction of existing information. While it seems intuitive for the brain to discard irrelevant details from sensory input, it seems undesirable for it to also hemorrhage meaningful content.

    It makes no sense to talk about retrieving “memory functions”, whatever those may be. We only retrieve memories, and the act of doing so does in fact change them.

  14. fifthmonarchyman,

    There are three papers that I have linked. Each is integral to my method

    1) talks about cognition as a non-algorithmic lossless process
    2) shows this process in action in relation to distinguishing patterns in numeric strings
    3) shows conclusively that deep learning software does not work in this way by using EAs to easily fool state of the art systems

    If we are going to discuss this, you need to look at the big picture and get a handle on what is being said.

    I suggest that you need to get a handle on your own argument.

    Your second paper says nothing about integrated information, lossy or otherwise. It merely demonstrates that humans are good at pattern recognition. That’s the only conclusion about humans that the data support.

    The third paper shows that neural networks have trouble with some patterns. Humans do too. One set of these we call “optical illusions.”

    You’ve got a lot of work to do if you want to tie these three papers together. Simply asserting that they’re all talking about the same thing is . . . unconvincing.

  15. Patrick said

    If I follow the procedure I described and get the results I predict, would that refute your approach to design detection?

    I say,

    I will be as brief and concise as I can. One sentence:

    If algorithmic software can do better than humans without being susceptible to being fooled by an evolutionary algorithm, I would abandon my enterprise.

    peace

  16. fifthmonarchyman,

    If I follow the procedure I described and get the results I predict, would that refute your approach to design detection?

    I will be as brief and concise as I can. One sentence:

    If algorithmic software can do better than humans without being susceptible to being fooled by an evolutionary algorithm, I would abandon my enterprise.

    You can be briefer and more concise. One word. Yes or no.

    If it’s “No” then we can discuss how to change the procedure. I’m not playing word games with you any more. Let’s see what your argument really means.

  17. Patrick says,

    I suggest that you need to get a handle on your own argument

    I say,

    That is why I’m here. I was hoping for some assistance in this regard by a little back and forth with ID critics.

    Maybe that was too much to ask

    Peace

  18. fifthmonarchyman,

    That is why I’m here. I was hoping for some assistance in this regard by a little back and forth with ID critics.

    Maybe that was too much to ask

    You have a number of people willing to engage with you in this thread, including myself. Present your operational definitions and we’ll be able to have an actual discussion.

  19. Patrick says,

    Yes or no. If it’s “No” then we can discuss how to change the procedure.

    I say,

    No.

    You will need to verify that your software is indeed losslessly integrating the information in the string. You can do this by running the random string through an evolutionary algorithm until the R squared reaches a high level (probably 80%). If at that point your software is still able to consistently distinguish the original from the copy better than humans, I would abandon the enterprise.

    Peace

  20. Patrick says,

    Present your operational definitions and we’ll be able to have an actual discussion.

    I say,

    First I was told I needed to write an OP, then you would have a discussion.

    Then I was told I needed to detail my method, then we would have a discussion.

    Now I’m told I need to provide an exhaustive list of operational definitions, then we will have an actual discussion.

    I sense a pattern.

    How about we try this: if you are confused about how I’m using a term, ask me. Once I tell you, move on unless you have clarifying questions.

    That sounds like a more natural, realistic, and fair way to have a discussion.

    Peace

  21. fifthmonarchyman,

    Yes or no. If it’s “No” then we can discuss how to change the procedure.

    No

    Progress! Okay, let’s go through the procedure and see how to adjust it to make it address your argument.

    (For the moment I am deliberately ignoring the portion of your response that includes terms that have yet to be operationally defined. My goal in asking these questions is to arrive at an understanding of the real world referents of your terms without continuing to go in definitional circles.)

    Here’s my proposed procedure again:

    1. Create up to 1000 pairs of time-series data as described in the paper.
    2. Create either a recurrent neural network that accepts the time series data directly or a support vector machine that takes in statistics directly generated from the data.
    3. Train the network with 800 or so pairs of time-series data and validate it with the remainder.
    4. Create a few hundred new pairs of time-series data for testing.
    5. Demonstrate that performance on the test set meets or exceeds the human performance described in the paper.
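    As a sketch of how steps 1 through 5 might look in practice (a toy stand-in only: synthetic series with momentum replace the paper's financial data, and a nearest-class-mean rule on a single statistic stands in for the proposed RNN or SVM; every name below is illustrative):

```python
import random

random.seed(1)

def make_returns(n=200, momentum=0.6):
    # "Real" series: returns with momentum, standing in for the structured
    # financial data used in the paper's game (an assumption; the paper's
    # actual generator differs).
    out, prev = [], 0.0
    for _ in range(n):
        prev = momentum * prev + random.gauss(0, 1)
        out.append(prev)
    return out

def lag1_autocorr(xs):
    # The single summary statistic fed to the classifier (step 2's "statistics").
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den if den else 0.0

def make_pair():
    # Step 1: a "real" series and its shuffled ("randomized") twin.
    real = make_returns()
    fake = real[:]
    random.shuffle(fake)  # same values, temporal structure destroyed
    return real, fake

# Steps 2-3: "train" a minimal classifier by learning the mean statistic
# of each class from 800 labelled pairs.
train_pairs = [make_pair() for _ in range(800)]
mu_real = sum(lag1_autocorr(r) for r, _ in train_pairs) / len(train_pairs)
mu_fake = sum(lag1_autocorr(f) for _, f in train_pairs) / len(train_pairs)

# Steps 4-5: on fresh pairs, pick the member whose statistic sits nearer
# the learned "real" class mean, and score the accuracy.
test_pairs = [make_pair() for _ in range(200)]
correct = 0
for real, fake in test_pairs:
    a, b = lag1_autocorr(real), lag1_autocorr(fake)
    if abs(a - mu_real) + abs(b - mu_fake) < abs(b - mu_real) + abs(a - mu_fake):
        correct += 1
accuracy = correct / len(test_pairs)
print(f"test accuracy: {accuracy:.2f}")
```

    On this toy data the simple rule easily exceeds the 70-80% human range reported in the paper, which is the kind of result step 5 asks for; whether anything comparable holds on the paper's actual data is the empirical question.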

    Since this is insufficient to address your argument, I have a few more simple questions:

    a) Are any of the steps unnecessary?
    b) Are any of the steps incorrect or misleading?
    c) Are additional steps required?

    Again, please answer yes or no so that we don’t get hung up on terminology. Let’s make this specific and measurable.

    If the answer to c is “Yes”, what specific step(s) must be added? Again, please answer in terms of observable, measurable actions that clearly and unambiguously describe the experiment.

  22. fifthmonarchyman,

    How about we try this. If you are confused about how I’m using a term ask me. Once I tell you move on unless you have clarifying questions

    Resistance to providing clear definitions is common, in my experience, among intelligent design creationists. I don’t know why this is; without clear definitions there is no way to communicate.

    Nonetheless, I have some hope for the approach of specifying an experimental procedure to test your claim. That has a chance of breaking us out of the word games.

  23. Patrick says

    Are additional steps required?

    I say

    Use your head, man.

    see here,

    Design as the Inverse of Cognition

    quote:

    You will need to verify that your software is indeed losslessly integrating the information in the string. You can do this by running the random string through an evolutionary algorithm until the R squared reaches a high level (probably 80%). If at that point your software is still able to consistently distinguish the original from the copy better than humans I would abandon the enterprise.

    End quote:

    peace

  24. Patrick says

    I don’t know why this is

    I say,

    Perhaps it’s because as soon as a tentative definition is offered you proceed to bring out the “By your own definition” tripe.

    instead of simply asking for clarification

    peace

  25. fifthmonarchyman: Perhaps it’s because as soon as a tentative definition is offered you proceed to bring out the “By your own definition” tripe.

    instead of simply asking for clarification

    I’m a native English speaker living permanently in France. Sometimes, it is important that I understand what someone is telling me. I can say I didn’t understand and ask that they explain again, but now I say, “let me see if I understand. You are telling me…” which sometimes elicits a simple yes or no. I think Patrick may be trying a similar approach.

  26. Alan Fox says,

    I think Patrick may be trying a similar approach.

    I say,

    If that is the case, I think he has a very funny way of going about it.

    I don’t think you should lead with “by your own definition you are incorrect.”

    Especially when I made it clear that it was my first attempt at even defining the term and have repeatedly said I’m only trying to tighten up my ideas here.

    Add that behavior to the claim that “The paper is obviously wrong” when he has not shown any evidence that he even understands what it is talking about.

    Call me skeptical but I sense a lack of intention to actually discuss and instead a desire to debate with the fundi.

    I usually have no problem with debate, but I’d like to have the semblance of a completed argument before I’m asked to vigorously defend it from every conceivable angle.

    Give me some time to think this all through and I’ll be ready for that.

    peace

  27. fifthmonarchyman,

    Are additional steps required?

    You will need to verify that your software is indeed losslessly integrating the information in the string. You can do this by running the random string through an evolutionary algorithm until the R squared reaches a high level (probably 80%). If at that point your software is still able to consistently distinguish the original from the copy better than humans I would abandon the enterprise.

    Without operational definitions, many of your terms are literally nonsensical. Further, you are talking about distinguishing strings when the procedure I propose deals with pairs of time-series data.

    Given that, I am asking you to specify exactly what additional steps you think are required, without using your undefined terms. These steps should be unambiguous and enable anyone who wishes to follow the process.

    So, what _exactly_ should step 6 be and why?

  28. fifthmonarchyman,

    Perhaps it’s because as soon as a tentative definition is offered you proceed to bring out the “By your own definition” tripe, instead of simply asking for clarification.

    That’s not tripe, that’s demonstrating enough interest in your ideas to make a sincere attempt to understand what you’re saying. At no point in this conversation have I insisted on holding you to any of your definitions — you are free to clarify them at any time.

    If you don’t like having your ideas challenged, though, this might not be the optimal venue for you. (Although I do hope you choose to stay.)

  29. Alan Fox,

    I’m a native English speaker living permanently in France. Sometimes, it is important that I understand what someone is telling me. I can say I didn’t understand and ask that they explain again, but now I say, “let me see if I understand. You are telling me…” which sometimes elicits a simple yes or no. I think Patrick may be trying a similar approach.

    Exactly.

    I was a native English speaker living in Luxembourg for a number of years. When I didn’t make the kind of effort you’re talking about, I ended up eating rognon. *shudder*

  30. fifthmonarchyman,

    Call me skeptical but I sense a lack of intention to actually discuss and instead a desire to debate with the fundi.

    I’ll call you skeptical when you’ve earned that honor. 😉

    I am genuinely interested in your method. Personally, I would find a working method of design detection fascinating. Intelligent design creationists keep promising such a thing, but they have yet to produce one that works. Perhaps you’ll be the first.

    That being said, I’m not going to cut you any slack. You’re making some strong claims that I don’t see as supported by any evidence. I’m happy to work with you to either support or refute what you’re saying, but I’m not going to treat your ideas with kid gloves. If you aren’t willing to work with me on those terms, let me know.

  31. Patrick said.

    I am asking you to specify exactly what additional steps you think are required, without using your undefined terms.

    more than happy to oblige

    6. Run the “randomized” part of the time-series pairs through an evolutionary algorithm with the “real” part as its target.

    7. Halt the program when R squared is at 80%.

    8. Repeat steps 2 through 5 with the same recurrent neural network used the first time.

    ———–

    If the recurrent neural network is shown to outperform humans with both sets of pairs, I will abandon the enterprise.
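    A minimal sketch of what steps 6 and 7 could look like, under assumptions the comment leaves open: “R squared” is read as the coefficient of determination of the evolved copy against the original sequence, and the EA is a bare (1+1) hill climber over numeric values (any names below are illustrative, not part of the proposed method):

```python
import random

random.seed(2)

def r_squared(candidate, target):
    # Coefficient of determination of the candidate against the target
    # (one plausible reading of "R squared" here).
    m = sum(target) / len(target)
    ss_tot = sum((t - m) ** 2 for t in target)
    ss_res = sum((t - c) ** 2 for t, c in zip(target, candidate))
    return 1.0 - ss_res / ss_tot

# The "real" half of a pair (here just a short random walk for illustration).
target, x = [], 0.0
for _ in range(50):
    x += random.gauss(0, 1)
    target.append(x)

# Step 6: start from the "randomized" half and evolve it toward the target.
candidate = target[:]
random.shuffle(candidate)

# A minimal (1+1) evolutionary algorithm: mutate one point, keep improvements.
fit = r_squared(candidate, target)
while fit < 0.80:  # Step 7: halt when R squared reaches 80%
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 1.0)
    child_fit = r_squared(child, target)
    if child_fit > fit:
        candidate, fit = child, child_fit

print(f"final R^2: {fit:.2f}")
```

    Step 8 would then feed both the original and this 80%-similar evolved copy back through the trained classifier.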

    peace

    PS

    At that point I will become very interested in investing in your software company.

  32. Patrick says,

    Without operational definitions, many of your terms are literally nonsensical.

    I say,

    If you are confused about a term, ask. You might say something like this:

    “What do you mean by term X?”.

    If after I’ve answered you are still confused ask me to clarify.

    That is sort of how conversations work.

    peace

  33. Asking you for operational definitions is asking what you mean by all your non-standard terms. Much more efficient than going one by one.

  34. Patrick: I ended up eating rognon. *shudder*

    An ex-pat acquaintance went to a doctor here with lower back pain. The doctor asked where it hurt. “Dans mes rognons” he replied, and wondered why the doctor fell about laughing.

  35. JonF says

    Asking you for operational definitions is asking what you mean by all your non-standard terms. Much more efficient than going one by one.

    I’m not using any non-standard terms as far as I can tell.

    If you think I’m using a term in a nonstandard way, please provide the standard definition and where you think I deviate from it. That is how conversations work.

    peace

  36. fifthmonarchyman,

    6. Run the “randomized” part of the time-series pairs through an evolutionary algorithm with the “real” part as its target.

    7. Halt the program when R squared is at 80%.

    8. Repeat steps 2 through 5 with the same recurrent neural network used the first time.

    First question: Why? The point you have emphasized from the paper is this:

    We suggest that such novel interfaces can harness human capabilities to process and extract information from financial data in ways that computers cannot.

    which you paraphrase as:

    It points to a way that we can distinguish between strings that is inaccessible to computers.

    The five step procedure I propose directly addresses this claim. If a software system can perform equal to or better than human players, this claim is refuted. That in turn means that you cannot use this paper in support of whatever argument it is you are making. Why, specifically, do you think anything more is needed?

    Second question: What do the three steps you’ve suggested add to the experiment? Mathematically and logically they seem to have nothing to do with the financial game paper. The human players weren’t asked to do anything comparable. Why these steps?

    Third question: Assuming you have answers for the first two, what exactly do you mean by “evolutionary algorithm” in your proposed step 6? There are a lot of different EAs available. Why the particular one you suggest?

  37. Patrick says,

    First question: Why?

    I say,

    I have total confidence that your software will not be able to distinguish between the strings as well as humans do, and it would be interesting to see you try.

    However my real interest is in the ideas I’m presenting here and whether or not software can distinguish between real and randomized data does not have a lot of relevance to those ideas.

    If you want to construct an experiment that is relevant to my ideas you need to address it to them…

    My working hypothesis is that cognition is a process of nonlossy data compression that is non-computable and that design is the inverse of that process.

    I believe that when we integrate the information in the original string into a unified singular pattern, what we are doing is what is described in the paper in the OP.

    It’s possible that software might learn to distinguish between two strings but I don’t think it would do it like we do.

    I added the extra steps because they would help to demonstrate that the software is doing the same thing we are when it “learns” patterns.

    If you need clarification just ask.

    peace

  38. Patrick says,

    Mathematically and logically they seem to have nothing to do with the financial game paper.

    I say,

    The game in the financial paper was just the inspiration for the comparison environment in my method.

    I’m really not interested in finance. I’m interested in comparing “real” strings with strings that are close to them but produced by an algorithm.

    peace

  39. Patrick says,

    what exactly do you mean by “evolutionary algorithm” in your proposed step 6? There are a lot of different EAs available. Why the particular one you suggest?

    I have no preference as to the algorithm you use. As long as the comparison string is produced by an algorithm and the output is not identical to the original string (R squared of 80%), I’m fine with it.

    peace

  40. fifthmonarchyman,

    I have total confidence that your software will not be able to distinguish between the strings as well as humans do, and it would be interesting to see you try.

    Do you mean via the five step process I described?

    However, my real interest is in the ideas I’m presenting here, and whether or not software can distinguish between real and randomized data does not have much relevance to those ideas.

    Why bring up that second paper, then? You claimed “It points to a way that we can distinguish between strings that is inaccessible to computers.” If that ability to distinguish is not inaccessible to computers, does that not refute a key component of your argument?

    I added the extra steps because they would help to demonstrate that the software is doing the same thing we are when it “learns” patterns.

    You are going to need to provide significant empirical evidence to support the idea that your proposed additions reflect how humans learn patterns. Simply asserting it isn’t enough.

    Before that, though, please address the first question I posed in more detail. It seems to me that refuting the claim you made based on the financial game paper eliminates one of the significant pillars on which you are trying to build your argument. If that does not refute the argument as a whole, why do you use it for support at all?

  41. fifthmonarchyman,

    I’m really not interested in finance. I’m interested in comparing “real” strings with strings that are close to them but produced by an algorithm.

    You have yet to address the point that any finite string can be exactly produced algorithmically.
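    Patrick’s point here is easy to make concrete. A minimal sketch (hypothetical, not from the thread): for any finite string, “output this literal” is itself an algorithm that produces the string exactly.

    ```python
    def make_producer(s):
        """Return an algorithm (here, a zero-argument function) whose output
        is exactly s. This is the trivial point: for any finite string,
        "output this literal" is an algorithm that reproduces it exactly."""
        return lambda: s

    original = "0110100110010110"  # any finite string, "designed" or not
    producer = make_producer(original)
    print(producer() == original)  # True
    ```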

  42. fifthmonarchyman,

    what exactly do you mean by “evolutionary algorithm” in your proposed step 6? There are a lot of different EAs available. Why the particular one you suggest?

    I have no preference as to the algorithm you use. As long as the comparison string is produced by an algorithm and the output is not identical to the original string (R² ≈ 80%), I’m fine with it.

    This is your argument, not mine. If you haven’t specified it in sufficient detail to know exactly how to compute your step 6, I suggest that you haven’t given it sufficient thought. Waving some math over it does not magically make it rigorous.

  43. fifthmonarchyman:
    My working hypothesis is that cognition is a process of lossless data compression that is non-computable, and that design is the inverse of that process.

    This cannot be your working hypothesis, because you are not testing cognition. You are running some data through some model, i.e. you are modelling something. Whether the modelling is relevant to cognition has to be established with reference to the relevant research.

    Therefore, look up and provide the relevant references from pedagogy or whatever field is relevant that say something like “cognition and the learning process go like this,” and then you will see whether your model can be said to describe the learning process. Until then you are just running some random data that may be telling us something, but nobody can be sure what it’s telling us.

  44. Patrick says,

    If that ability to distinguish is not inaccessible to computers, does that not refute a key component of your argument?

    I say,

    No.

    My method is not an argument for dualism or against AI. It’s about the nature of cognition and the limits of algorithms.

    You say,

    You have yet to address the point that any finite string can be exactly produced algorithmically.

    I say,

    I think I have, multiple times.

    Once again, any finite string can be exactly produced algorithmically. This is trivial.

    Algorithms can not “explain” designed objects.

    I am sure we went over this in great detail; I remember you repeating “by your own definition” several times before I finally assumed you got what I was talking about and you moved on.

    you say

    If you haven’t specified it in sufficient detail to know exactly how to compute your step 6, I suggest that you haven’t given it sufficient thought.

    I say,

    This is not rocket science.

    I’ve used several different simple EAs. They all have the same basic structure, and I see no practical difference.

    1) create a copy of the randomized string with a small number of random mutations
    2) compare both strings with the original
    3) discard the string with the lower R²
    4) repeat until a string reaches 80%
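    The four steps above can be sketched as a simple hill climber. This is a minimal illustration, not the author’s actual code: it assumes the strings are numeric sequences, uses squared Pearson correlation for “R squared,” and the mutation rate, scale, and function names are invented for the example.

    ```python
    import random

    def r_squared(a, b):
        """Squared Pearson correlation between two equal-length numeric sequences."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        if va == 0 or vb == 0:
            return 0.0  # no variance: correlation undefined, treat as no fit
        return cov * cov / (va * vb)

    def mutate(s, rate=0.02, scale=1.0):
        """Step 1: copy the string, perturbing a small fraction of positions."""
        return [x + random.gauss(0, scale) if random.random() < rate else x
                for x in s]

    def evolve(original, start, target=0.80, max_steps=50_000):
        """Steps 2-4: keep whichever of parent/child has the higher R^2
        with the original; stop once the target R^2 is reached."""
        current = list(start)
        best = r_squared(current, original)
        for _ in range(max_steps):
            if best >= target:
                break
            child = mutate(current)
            score = r_squared(child, original)
            if score > best:  # step 3: discard the lower-R^2 string
                current, best = child, score
        return current
    ```

    Note that `evolve` returns the best string found within `max_steps`, which may fall short of the 80% target on a hard instance.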

    peace

  45. fifthmonarchyman,

    Once again, any finite string can be exactly produced algorithmically. This is trivial.

    Algorithms can not “explain” designed objects.

    You have not yet provided an operational definition for “explained” that both excludes algorithms and does not recurse to “a human thought of it.”

  46. Eric says,

    Whether the modelling is relevant to cognition has to be established with reference to the relevant research.

    I say,

    I somewhat agree. That is why I linked the paper on the non-computability of consciousness in the OP.

    The question is whether what is happening when I “learn” the specification/target of a numeric string is what the paper is talking about. I think it is and I think I could make a good argument to that effect.

    In fact I think Patrick’s experiment with my proposed additions would serve as a pretty good test.

    you say,

    Therefore, look up and provide the relevant references from pedagogy or whatever field is relevant that say something like “cognition and the learning process go like this,”

    I say,

    Have you read and understood the paper I linked in the OP?

    Peace

  47. fifthmonarchyman,

    If that ability to distinguish is not inaccessible to computers, does that not refute a key component of your argument?

    No.

    My method is not an argument for dualism or against AI. It’s about the nature of cognition and the limits of algorithms.

    The second paper is, according to you, about the limits of algorithms. If it can be shown that a software system can meet or exceed human performance on that problem, that demonstrates that your claim about the paper is wrong.

    If no refutation of that claim has any impact on your argument then the paper is immaterial to your argument and we can simply ignore it.

    Which is it?

  48. fifthmonarchyman,

    This is not rocket science.

    So far this isn’t science at all. (Hint: Science uses rigorous operational definitions.)

    I’ve used several different simple EAs. They all have the same basic structure, and I see no practical difference.

    Perhaps you need to get out more. 😉

    1) create a copy of the randomized string with a small number of random mutations
    2) compare both strings with the original
    3) discard the string with the lower R²
    4) repeat until a string reaches 80%

    How is this different from just randomizing bits in the string until the Hamming distance is equal to 0.2 * length?

    And again, what exactly does it model?
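    Patrick’s simpler alternative is straightforward to sketch (a hypothetical illustration, assuming a binary string; the function names are invented). Flipping a fixed 20% of distinct positions gives a Hamming distance of exactly 0.2 × length, with no iteration needed.

    ```python
    import random

    def randomize_to_distance(bits, fraction=0.2):
        """Flip a fixed fraction of distinct positions, so the result's
        Hamming distance from `bits` is exactly int(fraction * len(bits))."""
        out = list(bits)
        k = int(fraction * len(out))
        for i in random.sample(range(len(out)), k):
            out[i] ^= 1  # flip this bit
        return out

    def hamming(a, b):
        """Number of positions where the two strings differ."""
        return sum(x != y for x, y in zip(a, b))

    s = [random.randint(0, 1) for _ in range(100)]
    t = randomize_to_distance(s)
    print(hamming(s, t))  # 20
    ```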

  49. Patrick says,

    You have not yet provided an operational definition for “explained” that both excludes algorithms and does not recurse to “a human thought of it.”

    I say,

    I think I have,

    but let me give it some thought today and see if I can do a better job of showing you what I mean than I already have.

    Peace

  50. Patrick says,

    If no refutation of that claim has any impact on your argument then the paper is immaterial to your argument and we can simply ignore it.

    I say,

    The paper shows how I’m comparing strings, so it is important to my method. If you had my spreadsheet, you would not need the financial paper to see this. You could see what I’m talking about for yourself.

    If it weren’t for the paper, I would have had to spend hundreds of comments explaining how humans could reliably distinguish between long numeric strings and approximations of those strings.

    I’m quite sure this conversation would not be as far along as it is if it weren’t for that paper.

    With that said

    Discovering a way that computers could automatically and reliably distinguish between actual financial data and randomized copies of that same data would allow us to make a fortune in the stock market. So by all means give it a go.

    peace
