Design as the Inverse of Cognition

     Several regulars have requested that I put together a short OP and I’ve agreed to do so out of deference to them. Let me be clear from the outset that this is not my preferred course of action. I would rather discuss in a more interactive way so that I can learn from criticism and modify my thoughts as I go along. OPs are a little too final for my tastes.
      I want to emphasize that everything I say here is tentative and is subject to modification or withdrawal as feedback is received.
      It’s important to understand that I speak for no one but myself; it is likely that my understanding of particular terms and concepts will differ from that of others with an interest in ID. I also want to apologize for the general poor quality of this piece: I am terrible at detail, and I did not put in the effort I should have, due mainly to laziness and lack of desire.
  With that out of the way:
Background
     For the purpose of this discussion I would like to expand upon the work of Phil Maguire (found here) and stipulate that cognition can be seen as lossless data compression in which information is integrated in a non-algorithmic process. The output of this process is a unified, coherent abstract concept that from here forward I will refer to as a specification/target. Maguire’s work thus far deals with unified consciousness as a whole, but I believe his insights are equally valid when dealing with integrated information associated with individual concepts.
     I am sure that there are those who will object to the understanding of cognition that I’m using, for various reasons, but in the interest of brevity I’m treating it as an axiomatic starting point here. If you are unwilling to accept this proviso for the sake of argument, perhaps we can discuss it later in another place instead of bogging down this particular discussion.
     From a practical perspective cognition works something like this: in my mind I losslessly integrate information that comprises the defining boundary attributes of a particular target; for instance, “house” has such information as “has four walls”, “waterproof roof”, “home for family”, “warm place to sleep”, as well as various other data integrated into the simple unified “target” of a house that exists in my mind. The process by which I do this cannot be described algorithmically. From the outside it is a black box, but it yields a specified target output: the concept of “house”.
     Once I have internalized what a house is, I can proceed to categorize objects I come across into two groups: those that are houses and those that are not. You might notice the similarity of this notion to the Platonic forms, in that the target House is not a physical structure existing somewhere but an abstraction.
Argument
     With that in mind, it seems reasonable to me to posit that the process of design would simply be the inverse of cognition.
    When we design something we begin with a pre-existing specific target in mind, and through various means we attempt to decompress its information into an approximation of that target. For instance, I might start with the target of house and through various means proceed to approximate the specification I have in my mind into a physical object. I might hire a contractor, nail and cut boards, etc. The fruit of my labor is not a completed house until it matches the original target sufficiently to satisfy me. However, no matter how much effort I put into the approximation, it will never completely match the picture of an ideal house that I see in my mind. This is, I believe, because of the non-algorithmic nature of the process by which targets originate. Models can never match their specifications exactly.
   Another good example of the designing process would be the act of composing a message.
    When I began to write this OP I had an idea of the target concept I wanted to share with the reader, and I proceeded to decompress that information in a way that I hoped could be understood. If I am successful, after some contemplation a target will be present in your mind that is similar to the one that exists in mine. If the communication were perfect, the two targets would be identical.
    The bottom line is that each designed object is the result of a process that has at its heart an input that is the result of the non-algorithmic process of cognition (the target). The T-shirt equation would look like this:
CSI=NCF
    Complex Specified Information is the result of a noncomputable function. If the core of the design process (CSI) is non-computable, then the process in its entirety cannot be completely described algorithmically.
    This insight immediately suggests a way to objectively determine whether an object is the result of design. Simply put, if an algorithmic process can fully explain an object, then it is not designed. I think this is a very intuitive conclusion; I would argue that humans are hardwired to tentatively infer design for processes that we can’t fully explain in a step-by-step manner. The better we can explain an object algorithmically, the weaker our design inference becomes. If we can completely explain it in this way, then design is ruled out.
     At some point I hope to describe some ways that we can be more objective in our determinations of whether an object/event can be fully explained algorithmically, but as there is a lot of ground covered here, I will put it off for a bit. There are also several questions that will need to be addressed before this approach can be justifiably adopted generally, such as how comprehensive an explanation must be to rule out design, or conversely when we can be confident that no algorithmic explanation is forthcoming.
    If possible I would like to explore these in the future, perhaps in the comments section. It will depend on the tenor of feedback I receive.
peace

923 thoughts on “Design as the Inverse of Cognition”

  1. fifthmonarchyman,

    Why would two random strings be indistinguishable from each other?

    In the context of the game the way we distinguish strings is by predicting what number comes next in the series.

    The only way we can pick the real string is to know what data point is coming next. The only way we can know this is by knowing the global patterns in the string. We can’t look at individual numbers.

    This is the key to the entire method.

    In order to tell the strings apart we need to nonlossily compress the information in the real string. It’s the only way to win.

    By definition it’s impossible to compress the information in a random string. That is the reason we can’t distinguish between random strings with the “game”.

    I think I see a problem. The bolded part is not accurate. That is not the definition of a random string, although truly random data can be non-compressible. Just because a string is randomly generated, though, does not mean it has no patterns that a compression algorithm might be able to take advantage of. The next string might have no patterns or different patterns, so the compression algorithm will not work equally well on all possible strings.

    I think you’ve answered my question, though. Is it correct to restate what you said as “Two random strings are indistinguishable from each other in the context of the game.”?
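    Patrick’s point here is easy to check empirically. The following is a minimal, hypothetical Python sketch (not from the thread; zlib stands in for a generic lossless compressor, and the seed and lengths are arbitrary):

        import random
        import zlib

        random.seed(42)  # arbitrary seed, for reproducibility

        # Randomly generated digit strings compress by varying amounts:
        # random generation does not guarantee incompressibility.
        for trial in range(3):
            s = "".join(random.choice("0123456789") for _ in range(1000))
            print("random string", trial, "->", len(zlib.compress(s.encode(), 9)), "bytes")

        # A heavily patterned string -- which a random process could also
        # emit, just with low probability -- compresses far better.
        patterned = "0123456789" * 100
        print("patterned ->", len(zlib.compress(patterned.encode(), 9)), "bytes")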

  2. fifth:

    By definition it’s impossible to compress the information in a random string. That is the reason we can’t distinguish between random strings with the “game”.

    Patrick:

    I think I see a problem. The bolded part is not accurate.

    That’s right. Fifth seems to be under the impression that random strings contain no patterns whatsoever and therefore cannot be distinguished from each other. But they do contain patterns, and this would have been obvious if fifth had not brushed off my point that all finite strings are random, in that they all can be produced by random processes. That includes strings that ‘look random’ on a line chart, strings that look less random (or ‘clumpy’, as fifth puts it), strings that look highly patterned but with additive noise, and strings that appear perfectly patterned on a line chart.

    Any two non-identical random strings can be distinguished from each other by zooming the line charts to the appropriate level so that the differences ‘stick out’ to a human observer. And since the charts are produced by software, this is easily done.

  3. fifth’s preceding sentence is also incorrect:

    In order to tell the strings apart we need to nonlossily compress the information in the real string.

    It’s obvious that nonlossy compression is not taking place. If it were, then the observer would always be able to reproduce the “real” string (or the corresponding line chart) exactly from memory.

  4. Patrick says,

    Just because a string is randomly generated, though, does not mean it has no patterns that a compression algorithm might be able to take advantage of.

    I say,

    Are you saying that a random process can output nonrandom data? I would agree that you could summarize randomly produced data by mean and standard deviation for example. But that is not what we are talking about here. We are talking about predicting the next digit in the string from the patterns we see in it.

    from here

    http://dictionary.reference.com/browse/random

    quote:

    Random, adjective
    1. proceeding, made, or occurring without definite aim, reason, or pattern:
    the random selection of numbers.
    2. Statistics. of or characterizing a process of selection in which each item of a set has an equal probability of being chosen.

    end quote:

    or again from the Wikipedia definition of random

    quote:

    Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order and does not follow an intelligible pattern or combination.

    end quote:

    peace

  5. fifth,

    We are talking about predicting the next digit in the string from the patterns we see in it.

    The game doesn’t test your ability to predict the next digit in the numeric string. It tests your ability to compare two line charts and pick the one that represents the original string.
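    For readers joining late, the setup of the game can be reconstructed roughly as follows. This is a hypothetical Python sketch (it assumes the decoy is a digit-shuffled copy of the original, whereas the actual game presented the two strings as line charts):

        import random

        def make_game(original, seed=0):
            """Pair the original sequence with a shuffled decoy, in random order."""
            rng = random.Random(seed)
            decoy = list(original)
            rng.shuffle(decoy)           # destroys any global ordering
            pair = [list(original), decoy]
            rng.shuffle(pair)            # hide which one is the original
            return pair

        real = [1, 2, 4, 5, 7, 8, 11, 12, 15, 17]
        left, right = make_game(real)
        print("left: ", left)
        print("right:", right)
        # The observer's task: say which of the two is the original.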

  6. Keiths says.

    It’s obvious that nonlossy compression is not taking place. If it were, then the observer would always be able to reproduce the “real” string (or the corresponding line chart) exactly from memory.

    I say.

    On the contrary, when we losslessly compress something we don’t compress all possible information in the world, only the information we deem to be important.

    Think about the way cognition works. We don’t learn every single detail about a sonnet; we learn the theme and the rhyme and the style. We learn these things nonlossily.

    When compressing strings the observer does not have to reproduce every single digit; he has to be able to produce the overall pattern and flow of the digits.

    Any deviations from the pattern would simply appear to be random to the observer, because he does not deem them to be important or predictable.

    This is important for step two of the method.

    If the real string can be explained algorithmically then the observer will be unable to distinguish the real one from one that is close to it but produced by an algorithm.

    peace

    PS
    You guys seem to be asking questions that show you are beginning to understand the point. Please keep it up.

  7. fifthmonarchyman,

    Just because a string is randomly generated, though, does not mean it has no patterns that a compression algorithm might be able to take advantage of.

    Are you saying that a random process can output nonrandom data?

    No, I’m making the observation that some random strings can have patterns. Being non-compressible is not part of the definition of a random string.

  8. Keiths says,

    The game doesn’t test your ability to predict the next digit in the numeric string. It tests your ability to compare two line charts and pick the one that represents the original string.

    I say

    When I looked at OMagain’s strings I saw that, often and repeatedly, the numbers would trail up slowly and incrementally for several digits, something like this:

    1,2,4,5,7,8,11,12,15,17

    The random string would never do this. It might trail up for three or four digits at a time, but never 10 in a row. That is the way I could tell them apart.

    That is what I mean by predicting the next digit: if I saw 1,2,4,5,7 in the real string I would never expect to see a 2, but I might see an 8 or 9.

    Every nonrandom string will be like this. The patterns are different with each string but once I know the pattern I can predict the next digit in the string.

    Does that make sense?

    peace
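    The cue described above is easy to state precisely, and notably it is itself computable in a few lines, which bears on the later question of whether software can play the game. A hypothetical sketch:

        def longest_increasing_run(seq):
            """Length of the longest strictly increasing run in seq."""
            if len(seq) < 2:
                return len(seq)
            best = run = 1
            for prev, cur in zip(seq, seq[1:]):
                run = run + 1 if cur > prev else 1
                best = max(best, run)
            return best

        real = [1, 2, 4, 5, 7, 8, 11, 12, 15, 17]   # trails upward throughout
        print(longest_increasing_run(real))          # 10: the whole string rises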

  9. fifth:

    On the contrary, when we losslessly compress something we don’t compress all possible information in the world, only the information we deem to be important.

    It’s exactly the opposite. When we losslessly compress an image, for example, we can reconstruct the exact image, which necessarily includes the information that is unimportant to us. It is lossy compression that throws away information.

    You guys seem to be asking questions that show you are beginning to understand the point. Please keep it up.

    The problem is that you still don’t get it. You’re struggling with the basics, like what non-lossy compression is or why random strings can be compressible. Please try to understand what we’re telling you, or you’ll never comprehend why your method fails.

  10. Keiths says,

    It’s exactly the opposite. When we losslessly compress an image, for example, we can reconstruct the exact image, which necessarily includes the information that is unimportant to us. It is lossy compression that throws away information.

    I say,

    Did you read the paper in the OP? Apparently you are laboring under a profound misunderstanding of what I mean by nonlossy compression. If we compressed a sonnet lossily, for example, we would be unable to distinguish a particular sonnet from one with a similar but different style. The paper goes into some detail about this; please reread it and try to comprehend what is being said therein.

    After that, if you are hung up on the specific phrase “nonlossy compression”, feel free to mentally substitute “integrated information” every time I use the phrase.

    What you choose to call it does not affect what the process is.

    You say,

    Please try to understand what we’re telling you, or you’ll never comprehend why your method fails.

    It’s possible that we will never be able to bridge the communication gap, but I assure you I understand exactly what you are telling me.

    It’s just that your criticisms are way off base, don’t address what I’m talking about, and, in the case of nonlossy compression, show a profound misunderstanding of what is being claimed.

    I’ve seen this exchange repeatedly here…

    Keiths —-Your method fails because of A

    FMM—– I don’t hold to A

    Keiths—–yes you do, your method obviously fails

    FMM——No you don’t understand I don’t hold to A I hold to B

    Keiths—–silent long pause

    Keiths—— Your method fails because of C

    I can definitely see a pattern with this string 😉

    Peace

  11. fifth,

    “Nonlossy” means “without loss” or “lossless”. If you throw information away, you lose it. That isn’t nonlossy compression — it’s lossy, because you lose information. This is not rocket science.

    You’re trying to redefine “nonlossy” to mean “lossy”. You might as well try to redefine “minus” to mean “plus”.

    “Lossy” and “nonlossy/lossless” are standard terms, used all the time by computer scientists and engineers. We know what they mean, but apparently you still don’t.

    From Wikipedia:

    In information technology, “lossy” compression is the class of data encoding methods that uses inexact approximations (or partial data discarding) for representing the content that has been encoded. Such compression techniques are used to reduce the amount of data that would otherwise be needed to store, handle, and/or transmit the represented content.

    And:

    Lossless data compression is a class of data compression algorithms that allows the original data to be perfectly reconstructed from the compressed data.

    And:

    Lossy methods are most often used for compressing sound, images or videos. This is because these types of data are intended for human interpretation where the mind can easily “fill in the blanks” or see past very minor errors or inconsistencies – ideally lossy compression is transparent (imperceptible), which can be verified via an ABX test.

    And lest you’re tempted, it won’t work to say “Well, I choose to define ‘nonlossy/lossless’ as ‘lossy’ for my purposes”. Remember, you’re citing the Maguire paper as support, and the Maguire paper uses “nonlossy/lossless” the way the rest of us do, not the way you do.

    Please slow down and think carefully about this. When you rush through these things you just end up confusing yourself.

  12. fifth,

    Regarding your mistaken belief about patterns in random data, here’s Leonard Mlodinow from chapter 9 of his book The Drunkard’s Walk:

    And so in the late twentieth century a movement sprang up to study how randomness is perceived by the human mind. Researchers concluded that “people have a very poor conception of randomness; they do not recognize it when they see it and they cannot produce it when they try”…

    Imagine a sequence of events. The events might be quarterly earnings or a string of good or bad dates set up through an Internet dating service. In each case the longer the sequence, or the more sequences you look at, the greater the probability that you’ll find every pattern imaginable — purely by chance. As a result, a string of good or bad quarters, or dates, need not have any “cause” at all. The point was rather starkly illustrated by the mathematician George Spencer-Brown, who wrote that in a random series of 10^1,000,007 zeroes and ones, you should expect at least 10 nonoverlapping subsequences of 1 million consecutive zeroes. Imagine the poor fellow who bumps into one of those strings when attempting to use the random numbers for some scientific purpose. His software generates 5 zeros in a row, then 10, then 20, 1,000, 10,000, 100,000, 500,000. Would he be wrong to send back the program and ask for a refund?

    [Aside: I used to joke with my colleagues that I was going to make money selling a random number generator that did nothing but return zero every time you called it. Without analyzing the code, you could never prove that it was actually nonrandom, so customers would have no basis for complaint.]

    Mlodinow again:

    Apple ran into that issue with the random shuffling method it initially employed in its iPod music players: true randomness sometimes produces repetition, but when users heard the same song or songs by the same artist played back-to-back, they believed the shuffling wasn’t random. And so the company made the feature “less random to make it feel more random,” said Apple founder Steve Jobs.

    Randomness is not what you think it is, fifth.
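    Spencer-Brown’s figure follows from a simple expectation: for fair coin flips, any given window of k positions is all zeros with probability 2^-k, so n flips contain about (n - k + 1) * 2^-k all-zero windows on average. A hypothetical sketch checking the intuition on a smaller scale (seed arbitrary):

        import random

        random.seed(1)                       # arbitrary
        n, k = 1_000_000, 20
        bits = [random.randint(0, 1) for _ in range(n)]

        # Longest run of consecutive zeros actually observed.
        run = longest = 0
        for b in bits:
            run = run + 1 if b == 0 else 0
            longest = max(longest, run)

        print("longest zero-run seen:", longest)
        print("expected all-zero windows of length", k, ":",
              round((n - k + 1) * 2**-k, 2))   # about 0.95 here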

  13. Keiths,

    I can’t force you to read and understand the paper that serves as the foundation for my method, but I’d hope you would. It would make the discussion easier.

    I’m good with all the definitions of compression you posted; they express what I’m talking about here nicely.

    The argument is not about how nonlossy compression works but about what information is compressed.

    What you are missing is the part the observer’s choice plays in the process. The observer chooses what information he will compress. He does not compress all the information in the world, but only the information he deems to be important, and he compresses that information nonlossily.

    Computers, on the other hand, have no choice in the matter; they compress all the information that is given to them. In the case of computers the operator does the choosing.

    There is still a choice being made; computers don’t compress all the information in the world either, but only a subset of it.

    peace

  14. Keiths,

    I also have no problem with your quotes on randomness, though they are pretty much irrelevant to what we are discussing.

    I don’t declare a string to be random because it looks random.

    I declare a string to be random when it looks like a string that I know is random. I declare it to be not random when it does not look like a string that I know is random.

    The reason I need long strings is so that the anomalous patterns you will obviously see in any string from time to time get cancelled out.

    In a random string I might get repeating numbers from time to time, but I won’t get the same repeating number happening over and over at the same frequency throughout the entire string.

    In other words, random strings don’t have global patterns. That is, after all, the definition of random.

    Peace

  15. Keiths,

    It might be a good time to remind you that according to my worldview true randomness does not exist; there is only apparent randomness.

    An object or event is apparently random when it is not predictable. If you were omniscient there would be nothing that was not predictable.

    I know that certain understandings of QM hold that true randomness is a feature of our universe but I would hope we could ignore them for the sake of this discussion.

    peace

  16. As long as your sample is finite, a random string can have any possible pattern. It doesn’t matter how large your sample is, as long as it is finite. Your definition of random is simply wrong. As is your definition of lossless.

  17. fifthmonarchyman: It might be a good time to remind you that according to my worldview true randomness does not exist; there is only apparent randomness.

    As I use the term, “random” is a technical term in mathematical probability. When dealing with real world situations, the important question is whether the probability model fits well enough to be usable. Whether “true randomness” actually has a real world meaning does not seem relevant.

  18. fifth,

    What part of this quote is confusing you? Do you think Spencer-Brown is wrong?

    In each case the longer the sequence, or the more sequences you look at, the greater the probability that you’ll find every pattern imaginable — purely by chance. As a result, a string of good or bad quarters, or dates, need not have any “cause” at all. The point was rather starkly illustrated by the mathematician George Spencer-Brown, who wrote that in a random series of 10^1,000,007 zeroes and ones, you should expect at least 10 nonoverlapping subsequences of 1 million consecutive zeroes.

  19. 5th seems to have ignored the discussion of why a sample of a random string may not pass his test of having no structure or of being incompressible.

    He also ignored my question about how he would evaluate a maximally compressed string.

  20. fifth,

    The reason I need long strings is so that the anomalous patterns you will obviously see in any string from time to time get cancelled out.

    The anomalous patterns are part and parcel of what it means for the strings to be random. If you eliminated the anomalous patterns, you would no longer be producing your strings randomly. Did you read the iPod example?

    In a random string I might get repeating numbers from time to time, but I won’t get the same repeating number happening over and over at the same frequency throughout the entire string.

    That’s not right. All you can say is that the probability of that scenario is low for long strings (assuming that successive digits are independent and that their values are equiprobable).

    In other words, random strings don’t have global patterns. That is, after all, the definition of random.

    Again, that’s incorrect. The probability of any specific pattern can be calculated, and it will be nonzero (provided that the strings are long enough to contain the pattern).

  21. fifth,

    I’m good with all the definitions of compression you posted; they express what I’m talking about here nicely.

    No, they clash with what you’re saying.

    Take another look at what you wrote:

    In order to tell the strings apart we need to nonlossily compress the information in the real string.

    If you were nonlossily compressing the information in the real string, then none of it would be lost, and you could reconstruct the string from memory.

    Your method explicitly aims for lossy compression, because you take steps to prevent the observer from memorizing the string:

    My method is conservative in that I have limited time to get it right and there is no going over. I do this to prevent the observer from simply memorizing the digits in the original.

    If the observer can’t memorize the string, then he or she is losing some of the information in it. The process is lossy.

  22. Also, your statement itself is wrong:

    In order to tell the strings apart we need to nonlossily compress the information in the real string.

    The compression doesn’t need to be nonlossy. Lossy compression is fine as long as the detectable differences aren’t compressed out.

  23. fifth:

    In order to tell the strings apart we need to nonlossily compress the information in the real string.

    And how did you do that to my second string, which was already incompressible?

  24. I’m thinking that 5th’s non lossy compression would better be labeled as abstraction. What gets integrated is not the information, but something like a pointer to the information. Like the definition of pi. There are, of course, savants who can memorize long strings, but they are notoriously poor at abstracting and integrating.

  25. Keiths says,

    The compression doesn’t need to be nonlossy. Lossy compression is fine as long as the detectable differences aren’t compressed out.

    I say,

    This is important to think about.

    Recall that I made a testable prediction that Patrick will not be able to design algorithmic software that picks out the real string as well as I can. That was in anticipation of a comment like the one you just made.

    If lossy compression works just as well, then Patrick will be able to put together algorithmic software to accomplish what I’m doing, and my method will be falsified.

    How is that for some science?

    peace

  26. Keiths says,

    What part of this quote is confusing you? Do you think Spencer-Brown is wrong?

    I say,

    Nothing as far as I can tell. I agree with the quote.

    I’m not sure why you think it is relevant, unless you think that looking for patterns in the real string is what I’m doing when I compare strings.

    That is not what I’m doing at all.

    What I’m doing is finding a global pattern that is present in the real string and that goes away when I randomize its digits.

    That is an entirely different kettle of fish.

    peace
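    One concrete reading of “a global pattern that goes away when I randomize its digits” is a whole-string statistic that shuffling destroys, such as lag-1 autocorrelation. A hypothetical sketch, not fifth’s actual procedure:

        import random
        from statistics import mean

        def lag1_autocorr(seq):
            """Correlation between consecutive values, taken over the whole string."""
            m = mean(seq)
            num = sum((a - m) * (b - m) for a, b in zip(seq, seq[1:]))
            den = sum((x - m) ** 2 for x in seq)
            return num / den

        trend = [i % 10 for i in range(1000)]     # strong global structure
        shuffled = list(trend)
        random.Random(0).shuffle(shuffled)

        print(round(lag1_autocorr(trend), 3))     # clearly positive (about 0.45)
        print(round(lag1_autocorr(shuffled), 3))  # near zero: the pattern is gone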

  27. Keiths says,

    If you were nonlossily compressing the information in the real string, then none of it would be lost, and you could reconstruct the string from memory.

    I say,

    I can reconstruct the string from memory well enough that I will not be able to distinguish my reconstructed string from the original.

    To the observer (me) it will appear that my reconstructed string and the original are the same when run through the game.

    That is the point.

    peace

  28. petrushka says,

    5th seems to have ignored the discussion of why a sample of a random string may not pass his test of having no structure or of being incompressible.

    He also ignored my question about how he would evaluate a maximally compressed string.

    I promise I did not ignore anything you have said. It might have escaped my notice. I told you I’m very bad at details.

    As far as a sample of a random string not being random goes, I would agree that sampling is not a random process.

    As far as evaluating a maximally compressed string goes: what do you mean by evaluate?

    peace

  29. fifthmonarchyman: I don’t declare a string to be random because it looks random.

    I declare a string to be random when it looks like a string that I know is random.

    So… looks random is not at all the same thing as looks like what I know is random. Curious; I would have thought that looks random implicitly requires some sort of concept of ‘random’, and comparing what’s-being-looked-at to that concept of ‘random’, and therefore looks random is looks like what I know is random. FMM’s verbiage here seems pretty darned incoherent to me. Perhaps FMM will clarify what they mean; perhaps they won’t. [shrug]

  30. Keiths says.

    The probability of any specific pattern can be calculated, and it will be nonzero (provided that the strings are long enough to contain the pattern).

    Again, we are talking about global, not local, patterns. A random string will not have global patterns, though it most certainly will have local ones.

    This hints at the importance of nonlossy compression. A lossy compression of a string will mistake local patterns for global ones.

    peace

  31. cubist says,

    I would have thought that looks random implicitly requires some sort of concept of ‘random’, and comparing what’s-being-looked-at to that concept of ‘random’,

    I say,

    If you would take a minute to play the game detailed here, I think you would understand what I mean.

    http://arxiv.org/pdf/1002.4592.pdf

    peace

  32. OMagain asks,

    And how did you do that to my second string, which was already incompressible?

    Just because a string is incompressible using one process does not mean it is incompressible using another.

    I used the non-computable process that is detailed here

    http://arxiv.org/pdf/1405.0126v1.pdf

    That this process is beyond the grasp of even state-of-the-art algorithms is demonstrated here

    http://www.evolvingai.org/fooling

    Hope that helps you get a feel for what is being done.

    peace

  33. keiths:

    Also, your statement itself is wrong:

    In order to tell the strings apart we need to nonlossily compress the information in the real string.

    The compression doesn’t need to be nonlossy. Lossy compression is fine as long as the detectable differences aren’t compressed out.

    fifth:

    This is important to think about.

    Recall that I made a testable prediction that Patrick will not be able to design algorithmic software that picks out the real string as well as I can. That was in anticipation of a comment like the one you just made.

    Your prediction is irrelevant. The question is whether nonlossy compression is necessary, as you claim:

    In order to tell the strings apart we need to nonlossily compress the information in the real string.

    We don’t need nonlossy compression, obviously. All that matters is that the compression doesn’t throw away information that is essential to distinguishing the two strings.

    A simple example: JPEG compression is lossy, but we nevertheless retain the ability to distinguish between JPEG images of Hillary Clinton and Carly Fiorina.

    If lossy compression works just as well, then Patrick will be able to put together algorithmic software to accomplish what I’m doing, and my method will be falsified.

    How is that for some science?

    It’s very poor science and a complete non sequitur. Patrick’s ability to emulate a human observer in software is unrelated to whether an observer can tell the difference between two strings after lossy compression and decompression.

  34. All,

    While rescanning the paper in the OP I was reminded of why it will be difficult to make the method work with binary strings

    Quote:

    Let’s imagine that a factory producing scented candles invests in an artificial smell detector. The detector is used for sampling the aroma of the candles passing on the conveyor belt below and directing them to the appropriate boxes. Let’s suppose that the factory is currently producing two flavors of scented candle: chocolate and lavender. In this case the detector only needs to distinguish between two possible smells. A batch of chocolate scented candles is passed underneath and the sensor flashes chocolate. Can we say that the detector has actually experienced the smell of chocolate? Clearly it has managed to distinguish chocolate from lavender, but this does not guarantee that it has experienced the full aroma in the same manner as humans do. For example, it may be the case that the detector is latching onto a single molecule that separates the two scents, ignoring all other aspects. The distinction between chocolate and lavender is a binary one, and can thus be encoded by a single bit. In contrast, humans can distinguish more than 10,000 different smells detected by specialized olfactory receptor neurons lining the nose (Alberts et al., 2008). When a human identifies a smell as chocolate they are generating a response which distinguishes between 10,000 possible states, yielding log₂ 10,000 = 13.3 bits of information.

    The important point that Tononi (2008) raises with his initial thought experiment is that the quality of an experience is necessarily expressed relative to a range of alternative possibilities. For example, if the whole world was coloured the same shade of red, the act of labeling an object as ‘red’ would hold no meaning. The informativeness of ‘red’ depends on its contrast with other colours. Descriptions of experiences must be situated within a context where they discriminate among many alternatives (i.e. they must generate information).

    end quote:

    I can’t believe I forgot that.

    I still think that it’s possible to make binary strings work if I compare with more than one randomized string and increase the length accordingly. But it will take some effort, and I would expect I might be plagued with false negatives at that point.

    I do plan on making the necessary modifications and trying it when I get a minute.

    peace

  35. keiths:

    If you were nonlossily compressing the information in the real string, then none of it would be lost, and you could reconstruct the string from memory.

    fifth:

    I can reconstruct the string from memory well enough that I will not be able to distinguish my reconstructed string from the original.

    If you can’t reconstruct it exactly, your compression is lossy.

    Nonlossy is not the same as lossy. Isn’t this obvious, even to you?

    If “nonlossy” means lossy in fifth-speak, then what does “lossy” mean?

  36. fifth:

    While rescanning the paper in the OP I was reminded of why it will be difficult to make the method work with binary strings

    If you prefer to work with numeric strings, all you have to do is convert the binary strings to your preferred base.

    Do you know how to convert from one base to another? Binary is just base 2, after all.
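    The conversion keiths describes is mechanical, as a minimal sketch shows. One caveat worth noting: leading zeros survive the round trip only if the string’s length is recorded separately.

        bits = "1011001110001111"

        value = int(bits, 2)          # interpret the string as base 2
        print(value)                  # 45967 in decimal
        print(format(value, "x"))     # b38f in hexadecimal
        print(format(value, "b"))     # back to the original bit string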

  37. Keiths says,

    A simple example: JPEG compression is lossy, but we nevertheless retain the ability to distinguish between JPEG images of Hillary Clinton and Carly Fiorina.

    I say,

    OK but that is irrelevant to the discussion.

    Lossy compression is sufficient if the gap between the strings is very wide, but insufficient if it is narrow. The whole point of the method is to look at gaps that are quite narrow.

    If the strings were completely different there would be no need to compare them in the game.

    you say,

    Patrick’s ability to emulate a human observer in software is unrelated to whether an observer can tell the difference between two strings after lossy compression and decompression.

    Actually, it is highly relevant to my method whether algorithmic lossy compression is equivalent to non-algorithmic nonlossy compression. In fact my entire method depends on their not being equivalent.

    peace

  38. Keiths says,

    If you prefer to work with numeric strings, all you have to do is convert the binary strings to your preferred base.

    I say,

    But the conversion always carries the signature of the original string. I do think that we could distinguish converted strings, but it would be difficult and require much longer strings.

    I saw what I believed were patterns in Patrick’s sonnet but I ran out of opportunities before I could verify it.

    The rules of the game are put there for a reason: to keep the observer from simply memorizing the real string. We will need to modify them for strings that are binary at their root.

    peace

  39. keiths:

    The probability of any specific pattern can be calculated, and it will be nonzero (provided that the strings are long enough to contain the pattern).

    [Emphasis added]

    fifth:

    Again, we are talking about global, not local, patterns.

    It makes no difference. Reread my statement above and note the word “any”.

    A random string will not have global patterns, though it most certainly will have local ones.

    Not true. Why do you believe this?

    This hints at the importance of nonlossy compression. A lossy compression of a string will mistake local patterns for global ones.

    Again, why do you believe this?

  40. keiths says

    It makes no difference. Reread my statement above and note the word “any”.

    I say,

    A long enough string will contain any pattern, but a string, if it is random, will not be characterized by any one pattern over its entire length.

    you say,

    Again, why do you believe this?

    I say: because the very definition of random is “without pattern”.

    quote:

    random:

    1. proceeding, made, or occurring without definite aim, reason, or pattern:
    the random selection of numbers.
    2. Statistics. of or characterizing a process of selection in which each item of a set has an equal probability of being chosen.

    and

    Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order and does not follow an intelligible pattern or combination. Individual random events are by definition unpredictable.

    end quote:

    It cannot be more clear, if words have meaning.

    also check this out

    https://en.wikipedia.org/wiki/Schizophrenic_number

    peace

  41. keiths:

    If you prefer to work with numeric strings, all you have to do is convert the binary strings to your preferred base.

    fifth:

    But the conversion always carries the signature of the original string.

    Of course it does. The conversion preserves all the information in the string — it’s lossless — and that’s exactly what you want. What would be the point in mangling the string before you even apply your method?

    I do think that we could distinguish converted strings but it would be difficult and require much longer strings

    Which is one of the flaws in your method. It’s too dependent on the choice of representation.

    The rules of the game are put there for a reason so as to keep the observer from simply memorizing the real string.

    That’s right. You are trying to guarantee that the compression is lossy. How you can then turn around and claim that it is nonlossy is beyond me.

  42. keiths says,

    Which is one of the flaws in your method. It’s too dependent on the choice of representation.

    I say,

    Why is this a flaw? Every physical object or event can be represented by a string that will work in the method without any modification at all. And I believe that any representation will work with some modification.

    Math is all about converting information into forms that can be manipulated. Some conversions are simpler than others.

    The conversion preserves all the information in the string — it’s lossless — and that’s exactly what you want.

    It preserves the information we want as well as the binary structure we do not want. It’s like disguising a signal with noise. The message is still there, but it will take more effort to get at it.

    peace

  43. fifth,

    Has it occurred to you that mathematicians, scientists and engineers might understand randomness and other mathematical concepts a bit better than dictionary editors?

    Here’s an exercise for you: If I randomly generate strings that are five bits long, with each bit having an equal probability of being a zero or one, then what is the probability of getting a string with the global pattern ‘all zeroes’?

    Now take another look at your statement:

    A random string will not have global patterns though it most certainly will have local ones.
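    For the record, the exercise has a short answer: the five bits are independent and each is zero with probability 1/2, so P(all zeroes) = (1/2)^5 = 1/32, about 3.1%. A brute-force check in Python:

        from itertools import product

        strings = list(product("01", repeat=5))      # all 32 five-bit strings
        all_zero = [s for s in strings if set(s) == {"0"}]
        print(len(all_zero), "/", len(strings))      # 1 / 32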

  44. That’s right. You are trying to guarantee that the compression is lossy. How you can then turn around and claim that it is nonlossy is beyond me.

    It’s all about seeing the target behind the string instead of the string itself.

    Maybe it would help you to think about petrushka’s characterization of the nonlossy compression process as abstraction. I see no reason at this point to be a stickler for terminology. If you get the process you can call it whatever you wish. I would prefer we stick with the term Maguire used but to each his own.

    from here

    https://en.wikipedia.org/wiki/Abstraction

    An abstraction can be seen as a compression process, mapping multiple different pieces of constituent data to a single piece of abstract data;

    and

    Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose.

    peace

  45. keiths:

    The conversion preserves all the information in the string — it’s lossless — and that’s exactly what you want.

    fifth:

    It preserves the information we want as well as the binary structure we do not want. It’s like disguising a signal with noise.

    fifth,

    You are really out of your depth here. In school, didn’t you learn to convert numbers from one base to another? There’s no “binary structure” that persists when you convert a binary number to some other base.

    In this thread, you’ve demonstrated that you don’t understand any of the following:

    1) Kolmogorov complexity
    2) computability
    3) randomness
    4) lossy and nonlossy compression
    5) conversions from one base to another

    And it isn’t just the finer points. You don’t understand the basics.

    How do you expect to understand what’s wrong with your method if you don’t even understand the concepts on which it is supposedly based?

    Why not pick up a textbook or take an online course or two? You’re making mistake after mistake after mistake in this thread. People have been patiently explaining your errors to you, but at some point you need to take responsibility for your own education. Make an effort, fifth.

  46. Keiths,

    I’m going to take a break before I say something I’ll regret.

    Perhaps you should take that time to finally read the papers I linked and maybe we can start fresh tomorrow.

    peace

  47. fifth,

    I’ve read the papers.

    You quoted Wikipedia:

    Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose.

    That’s right. The information is filtered, and irrelevant details are thrown away. It’s a lossy process.

    Why do you think it’s nonlossy?

    And again, if you choose to define “nonlossy” as lossy, then how do you define “lossy”?

  48. fifthmonarchyman,

    Patrick’s ability to emulate a human observer in software is unrelated to whether an observer can tell the difference between two strings after lossy compression and decompression.

    Actually, it is highly relevant to my method whether algorithmic lossy compression is equivalent to non-algorithmic nonlossy compression. In fact my entire method depends on their not being equivalent.

    Just to set expectations, I’m hacking on this as time permits but it’s a lunchtime thing. It’ll take a couple of weeks.

  49. fifthmonarchyman,

    Every physical object or event can be represented by a string that will work in the method without any modification at all. And I believe that any representation will work with some modification.

    Have you tried the hexadecimal equivalents I provided? They preserve any patterns in the bit string better than a simple conversion to decimal.
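    Patrick’s point about hexadecimal can be made concrete: each hex digit corresponds to exactly four bits, so a repeating bit pattern stays visibly periodic after conversion, while decimal digits do not fall on bit boundaries at all. A small hypothetical sketch:

        bits = "10110011" * 8              # an obvious repeating bit pattern

        as_hex = format(int(bits, 2), "x")
        as_dec = str(int(bits, 2))
        print(as_hex)   # 'b3' repeated eight times: the period survives
        print(as_dec)   # no visible repetition in base 10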
