Sandbox (4)

Sometimes very active discussions about peripheral issues overwhelm a thread, so this is a permanent home for those conversations.

I’ve opened a new “Sandbox” thread as a post because the new “ignore commenter” plug-in only works on threads started as posts.

5,934 thoughts on “Sandbox (4)”

  1. In case anyone is wondering about this “Karen” person that Jock keeps mentioning, she’s a fictional character in a vignette of his. In the vignette, Karen’s daughter Alice is trying to determine the circumference of the painted circle in the center of a soccer pitch. She and her mother both know that the circumference of a circle is equal to π times the diameter. However, Alice’s calculator doesn’t have a π key, so she decides to use 22/7 as an approximation. She measures the diameter and multiplies it by 22/7, while Karen, using her own calculator, multiplies it by π (technically, she multiplies it by the number produced by the π key, which is also an approximation, though a far better one than 22/7).

    I maintain that Alice has introduced an error (though a small one) by using 22/7 instead of π. Jock disagrees, as I will explain below.

    But first, let me assert something upon which I think Jock and I will agree. Consider an abstract circle, not the physical circle in the middle of the pitch. If the abstract circle has a diameter d, then the circumference is πd. I’m pretty sure Jock will agree. If we use a factor of 22/7 instead of π, we introduce an error in the calculated circumference of this abstract circle. The magnitude of the error is (22/7 – π)d. Again, I think Jock will agree.

    Now consider the actual physical circle in the center of the soccer pitch. Alice measures its diameter in yards, obtaining a number m. Since this is a measurement, m is only an approximation of the true diameter d. There is a measurement error. Let’s say that because of the measurement error, the true diameter d falls somewhere in the range m ± ε yards. Equivalently, we can say that m – ε ≤ d ≤ m + ε.

    Alice calculates the circumference by multiplying m by 22/7. Karen notes that by using that approximation of π, Alice has introduced an unnecessary error on top of the already existing measurement error. Therefore Karen decides to multiply by π instead. Karen calculates the magnitude of Alice’s additional error by multiplying m by (22/7 – π), as will be explained below. I say that Karen is right about Alice’s unnecessary error. Jock thinks she is mistaken.

    As far as I can tell, Jock believes that if the original measurement error (in terms of significant figures) is coarser than the discrepancy between 22/7 and π, the latter does not contribute an additional error. If I’ve misinterpreted him, I’m sure he’ll correct me. Whatever the reasoning, his conclusion is wrong. Using 22/7 instead of π does in fact contribute an additional error, as I will now explain.

    Let’s say that Karen calculates the window within which the true circumference must fall. If the true diameter d falls within the window m ± ε yards, as specified above, then it follows by simple logic that the true circumference c must fall within the window π * m ± π * ε yards. When Alice calculates the window using 22/7 in place of π, she obtains a window of 22m/7 ± 22ε/7 yards — different from Karen’s. Karen’s window calculation is correct, and a deviation from the correct window is an error, so I conclude that Alice has introduced an additional error beyond the original measurement error. (Technically, she has introduced a couple of errors, since she has both shifted the window and altered its width. I’ll concentrate on the shift, but we can discuss the width as well if necessary.)

    How much has the window been shifted? That’s easy — we can just subtract Karen’s circumference value from Alice’s circumference value, yielding 22m/7 – πm or (22/7 – π)m. That makes intuitive sense, and it matches the form of the error in the case of the abstract circle, with the measured diameter m taking the place of d. Note that this expression cannot be equal to zero unless m is zero, and since we know that m is nonzero, it follows that the expression is always nonzero. Therefore Alice has introduced a nonzero error on top of the original measurement error, regardless of what the diameter turns out to be and regardless of the size of the measurement error.

    Now, the difference between 22/7 and π is tiny — only about 0.0013 — so the additional error is correspondingly tiny. That means that the two windows will have a high degree of overlap. However, the problem is a general one that occurs to some extent whenever approximations are used in measurement-related calculations. When the difference between the approximation and the thing being approximated is large enough, the error can be substantial.
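
    To make the size of the shift concrete, here is a minimal Python sketch of the two windows. The measured diameter m = 10.0 yards and the measurement error ε = 0.05 yards are hypothetical values chosen purely for illustration; nothing in the argument depends on them.

    import math

    m, eps = 10.0, 0.05  # hypothetical measured diameter and measurement error, in yards

    # Karen's window for the true circumference, using pi
    karen = (math.pi * (m - eps), math.pi * (m + eps))
    # Alice's window, using 22/7 in place of pi
    alice = (22/7 * (m - eps), 22/7 * (m + eps))

    shift = (22/7 - math.pi) * m  # how far the center of the window has moved

    print(karen)  # approximately (31.259, 31.573)
    print(alice)  # approximately (31.271, 31.586)
    print(shift)  # approximately 0.0126 yards: small, but nonzero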

    One such case is a scenario we discussed in the earlier thread. In that scenario, Jock chose a poor approximation (via overly aggressive rounding) that resulted in a huge additional error, an error so large that the calculated window would not even overlap with the true window, given a realistic value for the measurement error. In other words, the true value would not even fall within the calculated measurement window. A disaster. My analysis can be found here.

    The moral of the story? Be very careful when using approximations. And remember that when you round a number, you are approximating it. Overly aggressive rounding can result in approximations that differ substantially from the values being approximated. Jock did that, and he got burned.

  2. petrushka:

    I tend to take your side on the number argument, but the quantity of verbiage gets tiresome.

    When your interlocutor fails to see or accept your point, there is no discussion.

    Repeating the argument adds nothing.

    Believe me, I’m as tired of repeating my arguments as you are of reading them. Note that I dropped the topic five months ago. I’m only discussing it now because Flint brought it up again.

    I have to repeat my arguments when Flint and Jock refuse to address them. Think about it. If the problem were that they didn’t understand my arguments, they could ask clarifying questions. They don’t. If the problem were that they disputed my reasoning, they could quote the parts they disagreed with and explain why. They don’t. If they were unable to refute my arguments, they could acknowledge that. They don’t. If they were to do any of those things, the discussion would stand a chance of progressing.

    Instead, both of them have ignored arguments of mine that get at the very heart of the issues under dispute, despite being asked repeatedly to address them, literally for weeks. If you don’t want to hear my arguments over and over, then urge Flint and Jock to address them.

    Here’s a good one to start with:

    The key concept to understand is that two exact numbers can be approximately equal to each other. Consider:

    1) π is an exact number with an infinite decimal expansion.

    2) 22 is an exact number with an infinite decimal expansion (and all of the digits to the right of the decimal point are zeros).

    3) 7 is an exact number with an infinite decimal expansion (and it too has all zeros to the right of the decimal point).

    4) The number 22/7 is an exact number with an infinite decimal expansion. No surprise there, since 22 and 7 are also exact numbers.

    5) 22/7 is a well-known approximation of π. In other words, the exact number 22/7 is approximately equal to the exact number π.

    Do you agree with those five statements? And in general, do you agree that it’s possible for one exact number to be approximately equal to another exact number?

  3. petrushka:

    There are discussions about factual issues, and disagreement can proceed as if at trial.

    There are arguments about morals and values.

    But the number argument seems to be about definitions. Tomato tomahto.

    Most of the disagreements are about far more than mere definitions. As just one example, suppose we choose to ignore the definitions of both the real numbers and the “measurement-derived reals”. Even if we do that, it remains true that exact numbers can be used to express approximate measurements, since exactitude is intrinsic while approximation is relational. It therefore remains true that we don’t need a new category of numbers, regardless of what you call them. Flint and Jock disagree with me on that, and it is a substantive disagreement having nothing to do with definitions. It’s about what works and what doesn’t.

    Anyway, it’s ridiculous to redefine terms that have established, time-tested, consensus definitions within the mathematical community. The “measurement-derived reals” do not fit the established definition of the reals, and so the onus is on Flint and Jock to come up with a better name. The onus is not on mathematicians to revise the definition of the reals in order to accommodate some crackpot ideas concerning measurements.

    As an aside for readers who haven’t seen the earlier thread, the “measurement-derived reals” used to be called the “flintjock numbers”. Jock rechristened them, presumably because “flintjock number” isn’t a very professional-sounding term. “Measurement-derived reals” sounds better, but it unfortunately is a misnomer.

  4. Jock,

    You oddly claim that I didn’t respond to your “5,93” comment, but I did so here, six months ago:

    Jock:

    The following is an argument that I do NOT think is sound, but I believe keiths does. (If it helps you to imagine periods for the decimals, or you wish to include the rider “all values are in inches”, feel free to so stipulate in your response.)
    A) 5,93 is an exact number
    B) 5,928599201766 is an exact number
    C) 5,93 is a reasonable approximation for 5,928599201766 because the percentage difference between them is sufficiently small (it is less than 0.024%)
    does keiths view this as a sound argument?

    Here’s how I’d put it:

    1) Unbeknownst to us, the actual length of the object in question is 5.928599201766… inches. That’s the actual length, not a measurement. It is exact because the object has one and only one length. We don’t know what that length is, but we do know that it exists, and we do know that it has one and only one value. The object cannot have two or more actual lengths simultaneously.

    2) We measure the object and the result is 5.93 inches. That is approximately the same as the unknown actual length of 5.928599201766… inches. It is approximate, but not exact, because 5.93 inches is not exactly equal to 5.928599201766… inches. It’s an inexact measurement, just as all measurements are inexact.

    3) Note that the only requirement for the measurement to qualify as inexact is for the result of the measurement to differ from the actual length. There is no requirement whatsoever for the number 5.93 to be inexact, nor for the number 5.928599201766… to be inexact. They simply need to differ, and since they do, the measurement is inexact. Exact numbers can be used to express inexact measurements, and everything works out perfectly, with no contradictions. Therefore, your motivation for creating the flintjock numbers is bogus, based on a silly misconception.

    4) The number 5.93 is a real number, and like all real numbers, it is exact, has one and only one value, and possesses one and only one decimal expansion. The exact number 5.93 can be used to express the measurement “5.93 inches”, which like all measurements is inexact.

    if it helps encourage participation, I will note that I think the following is sound:
    D) 5.93 is a flintjock number

    No, 5.93 is a real number. Like all real numbers, it is exact. It has one and only one value, and there is no error term. Measurements have errors, but real numbers do not. Ask any competent mathematician.

    E) 5.928599201766… is a flintjock number

    You forgot the ellipsis, which I have restored. And no, that is not a flintjock number. The object has one and only one length. We don’t know what that length is, though for the purposes of this discussion we have stipulated that it is 5.928599201766… inches. That is an exact length — the actual length — of the object. It isn’t a measurement, and there is no error term. It’s an exact length, expressed using an exact real number.

    It boils down to this:
    Since it is possible for two exact numbers to differ (which I pray is obvious to you), it is possible for an exact number to represent an inexact measurement. The only requirement is for the exact number representing the measurement to differ from the exact number representing the actual value of the quantity in question.
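
    A minimal Python sketch of that point, with the stipulated actual length truncated in the code purely so it can be typed (nothing hinges on the omitted digits):

    actual_length = 5.928599201766  # stipulated actual length in inches (truncated here only for the demo)
    measured = 5.93                 # the measurement result, in inches

    # Both numbers are exact, yet the measurement is inexact because the two numbers differ.
    print(measured - actual_length)  # approximately 0.0014 inches of measurement error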

    That is why people don’t hesitate to use real numbers to express measurements, and it’s why mathematicians haven’t bothered to create a new number system for that purpose. Like all mathematically savvy people, they can see that a new number system is totally unnecessary. The flintjock numbers, besides being badly broken, are completely unnecessary. You thought they were needed, but they aren’t, and that’s why no one uses them.

    You went on to complain that I hadn’t told you whether I thought the argument, as you phrased it, was sound. I responded:

    I rejected your poor and incomplete paraphrase of my argument and instead expressed it using my own words, in a more complete and better way. It’s my argument, and I’m in a far better position than you to express it, especially considering how confused you have been on this topic.

    I am allowing you to express your arguments in your own words. Please respond to my argument as I have expressed it in my own words.

  5. As to whether I regard 5.93 (which you refer to as “5,93”) as being approximately equal to 5.928599201766… (which you refer to as “5,928599201766…”), I affirmed that here:

    I have repeatedly asked you, when you disagree with something I’ve written, to quote it and explain why you disagree. I invite you to do so again.

    If you seriously want to claim that two objects, one 5.93 inches long and the other 5.928599201766… inches long, are not approximately the same length, then go for it. I will laugh, and I suspect most of the readers will also. All you have to do is imagine being presented those two objects, one lined up next to the other, and being asked whether they are approximately the same length. I wouldn’t hesitate to answer yes. If you would truly answer ‘no’, then you are a strange and confused bird indeed.

    If you want to make that argument, go for it. But don’t overlook the fact that my argument here does not depend on the fact that “5.93 inches” and “5.928599201766… inches” are approximately equal lengths. So don’t use that as an excuse for avoiding the questions.

  6. keiths,
    Yossarian-like, you continue to treat the wrong wound. Alice’s use of 22/7 is just fine: she’s in the story to illustrate how competent people behave.

    keiths: As far as I can tell, Jock believes that if the original measurement error (in terms of significant figures) is coarser than the discrepancy between 22/7 and π, the latter does not contribute an additional error. If I’ve misinterpreted him, I’m sure he’ll correct me. Whatever the reasoning, his conclusion is wrong. Using 22/7 instead of π does in fact contribute an additional error, as I will now explain.

    Close, but still wrong.
    You cannot state the magnitude of the additional error with a precision that exceeds the precision of the original measurement. I walked you through this.
    You CAN say “you have introduced an additional error of up to 0.0119… smoots”
    You CANNOT say “you have introduced an additional error of 0.0119… smoots”
    THIS is Karen’s error. And yours…

  7. keiths: Jock,

    You oddly claim that I didn’t respond to your “5,93” comment, but I did so here, six months ago:

    “oddly” ROFL
    you might want to check the next comment.

    You wrote a lot, but did not answer the question. It’s a very simple question; I bolded it for you.

    the original question, still unanswered:

    A) 5,93 is an exact number
    B) 5,928599201766 is an exact number
    C) 5,93 is a reasonable approximation for 5,928599201766 because the percentage difference between them is sufficiently small (it is less than 0.024%)
    does keiths view this as a sound argument?

  8. Jock,

    What’s amusing about this is that according to you, I have already made that argument. In a comment to aleta, you state that “keiths spent three whole pages arguing this very argument”.

    If I’ve already made the argument, why haven’t you refuted it? You have everything you need. Quote the statements of mine that you think are in error, and explain why you think so.

  9. Jock:

    You cannot state the magnitude of the additional error with a precision that exceeds the precision of the original measurement. I walked you through this.
    You CAN say “you have introduced an additional error of up to 0.0119… smoots”

    You just contradicted yourself in the space of three sentences. The “0.0119…” is infinitely precise — note the ellipsis. So you’re saying that I’m limited in how precisely I can state the magnitude of the additional error, and then you’re saying that there’s no limit to how precisely I can state it.

    Next, we’ve been over this already, more than once. The measurement window is shifted by 0.0119… smoots, which means that every point in the measurement window is shifted by 0.0119… smoots. Since every point is shifted, every point is in error. The additional error isn’t merely “up to 0.0119… smoots”, it is 0.0119… smoots, across the board.

    I understand the source of your confusion, and I explained it in the earlier thread:

    Jock gets excited because his measurement error becomes smaller (up to a point) when the actual pole length is less than 108 inches, and he takes this to mean that his rounding error isn’t as bad as I’ve made it out to be. Not true. The overall error gets smaller (up to a point), but his rounding error does not. It remains at a full 0.8 inches.

    How can his overall error get smaller if his rounding error remains constant? It’s simple. As I explained in statement #5, systematic and random errors can partially or wholly cancel each other if they are of opposite signs. The rounding error can be partially cancelled by a random error of the opposite sign (up to a point), and it is wholly cancelled if the random error is opposite in sign but equal in magnitude to the rounding error.
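
    Here is a minimal Python sketch of that cancellation. The 0.8-inch rounding error is the figure from the thread; the random-error values are hypothetical, chosen only to illustrate the signs.

    rounding_error = 0.8  # inches; systematic, present in every case

    for random_error in (0.3, -0.3, -0.8, -1.0):  # hypothetical random errors, in inches
        overall = rounding_error + random_error
        print(f"random {random_error:+.1f} in -> overall {overall:+.1f} in (rounding error still 0.8 in)")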

  10. Jock:

    Alice’s use of 22/7 is just fine: she’s in the story to illustrate how competent people behave.

    Alice is making the same mistake as you, though in her case the consequences are far less severe. You and she both introduce additional error by your use of approximations, but her approximation is only off by +0.04% while yours is off by over 20 times as much. In your case, the error is so large that the calculated measurement window might not even overlap the true window, given realistic assumptions about the size of the measurement error!

    She used 22/7 as an approximation when she could have borrowed her mother’s calculator and used the π key. You rounded to 1.6 when you could have retained many more significant digits. You thought that you needed to round aggressively because you believed that the precision of the unit conversion should be limited by the precision of the measurement itself, but that’s false. 1 smoot is equal to 67.0000… inches exactly, by definition. Unlike in a measurement, there is no error term in a unit conversion.
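
    For readers who didn’t follow the earlier thread, here is a minimal Python sketch of how those numbers fit together, reconstructed from the figures quoted in these comments (the 108-inch length, the rounding to 1.6 smoots, and the exact 67-inch smoot). Treat it as an illustration, not a transcript of Jock’s actual calculation.

    length_in_inches = 108
    exact_value = length_in_inches / 67  # 1.61194... smoots; the conversion factor is exact
    rounded_value = 1.6                  # the aggressively rounded figure

    print(exact_value - rounded_value)         # approximately 0.0119 smoots of rounding error
    print((exact_value - rounded_value) * 67)  # approximately 0.8 inches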

  11. keiths,

    Of course you have made that (A, B, C) argument. I just want you to say whether you think it is a sound argument or not. Own it, as it were.

    keiths,

    To people who are not keiths, the phrase “up to 0.0119… smoots” covers values that are less than 0.0119… smoots, whether “0.0119…” is meant to denote
    0.0[119402985074626865671641791044776] recurring (!) or not.
    There’s no contradiction.

  12. I will point out that there is a difference between asking if “5,93 is a reasonable approximation for 5,928599201766” and if “5,93 is an approximation for 5,928599201766”.

    My comments in the other thread were about how, in general, what counts as “reasonable” depends on the real-world context. I think it’s pretty clear that 5.93 is an approximation of the second number, as it appears to be the second number rounded to hundredths, and rounding to a certain decimal place is a standard way of approximating a number.
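
    A one-line check of that, for anyone who wants it:

    print(round(5.928599201766, 2))  # 5.93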

    Carry on with your other arguments, which I’m not attempting to follow.

  13. Jock:

    Of course you have made that (A, B, C) argument. I just want you to say whether you think it is a sound argument or not. Own it, as it were.

    Of course I believe that the arguments I make here are sound. If I didn’t, I wouldn’t post them.

    If you think my argument is mistaken, then refute it. Quote the statements of mine that you think are in error, and explain why you think so.

    If you can’t refute it, that’s fine too. Just acknowledge that you can’t. That would be progress.

  14. aleta:

    I will point out that there is a difference between asking if “5,93 is a reasonable approximation for 5,928599201766” and if “5,93 is an approximation for 5,928599201766”.

    That’s right, and I’m pretty sure it’s why Jock is so frustrated that I won’t follow the script he’s written for me. He can’t refute the argument I’ve actually made, so he wants to put words in my mouth.

    He cracks me up.

    I stand behind my own words, Jock, but I don’t stand behind yours.

  15. Jock:

    To people who are not keiths, the phrase “up to 0.0119… smoots” covers values that are less than 0.0119… smoots, whether “0.0119…” is meant to denote
    0.0[119402985074626865671641791044776] recurring (!) or not.
    There’s no contradiction.

    It’s clearly a contradiction, because 0.0119… is the infinite decimal expansion of the error magnitude. You didn’t limit the precision there, in that sentence, but two sentences earlier you did limit it, thus contradicting yourself.

    Don’t stress too much over it. It’s not a big deal. I just thought it was funny to see you make a statement and then contradict it two sentences later.

    The mistake you should be focusing on is your belief that 0.0119… is merely the greatest possible value of the additional error, when in fact the additional error always has that value.

  16. Jock, to aleta:

    By way of example, 2,7 – 2,7 = 0,0 always, but 2.7 – 2.7 is going to have an error distribution.

    That quote highlights one of the many flaws of your “MDRs”, aka the “measurement-derived reals”, which is that you guys aren’t even consistent about what the MDRs actually are. In the quote, you treat them as numbers sampled from a distribution. Elsewhere you say they are ranges. In still other places you say they are distributions.

    They can’t be all three of those things simultaneously. Numbers aren’t ranges, ranges aren’t distributions, and distributions aren’t numbers. What a mess.

  17. While thinking about Jock’s ideas regarding approximation, I remembered a bizarre claim he made in the old thread — namely, that no two real numbers are ever approximately equal to each other. Why does he think this? Because there are always infinitely many numbers between them. I kid you not.

    He says:

    Once you move to the world of Infinite-precision reals (IPRs) nothing is approximately anything — they are either the same or different.

    (Note that what he’s calling the “infinite-precision reals” are what mathematicians refer to simply as the “reals”. Jock needed to rechristen them because otherwise he would be admitting that the “measurement-derived reals”, which have finite precision, aren’t actually real numbers at all.

    All real numbers are infinitely precise. Jock’s term “infinite-precision reals” comes straight from the Department of Redundancy Department.)

    Then there’s this exchange:

    keiths:

    Second, the difference between 8.29 and 8.2916882 is a whopping 0.02%, so if you try to argue that they are not approximately equal, I will laugh.

    Jock:

    As discussed previously, any two of your “abstract” numbers have an infinite number of intervening values between them, so calling them “approximately equal” seems like an invitation to error, so laugh away.

    Let that sink in. He is seriously arguing that no two real numbers are ever approximately equal, no matter how close they are in value.

    I am indeed laughing.

  18. This seems like an unreasonable definition/meaning for the word approximate.

    The square root of two is an infinite decimal that starts 1.414213562373095…

    It doesn’t make sense to not be able to say that 1.414213562373095 is approximately equal to the square root of 2. What would one say to describe the relationship of 1.414213562373095 to the square root of 2?

    Note: we are just talking about pure math here, not measurement.

  19. aleta:

    It doesn’t make sense to not be able to say that 1.414213562373095 is approximately equal to the square root of 2. What would one say to describe the relationship of 1.414213562373095 to the square root of 2?

    Jock says there are only two options — ‘same’ and ‘different’:

    Once you move to the world of Infinite-precision reals (IPRs) nothing is approximately anything — they are either the same or different.

    There you have it. According to Jock, we cannot say that 1.414213562373095 and the square root of 2 are approximately equal. We can only say that they’re different.

    I have no idea what he’s smoking, but it’s evidently neurotoxic.

  20. I understand what he is saying. Just doesn’t seem like a reasonable or useful position.

    I wonder what jockdna would say to my question: “What would one say to describe the relationship of 1.414213562373095 to the square root of 2?”

  21. aleta:

    I understand what he is saying. Just doesn’t seem like a reasonable or useful position.

    It’s absurd, isn’t it? If he were right, then 22/7 would not be an approximation of π. Which is especially funny given that he used 22/7 as an approximation of π in his story about Alice and Karen.

    Once again, it’s Jock against the mathematicians, and Jock loses.

  22. I’m not interested in giving jockdna a hard time or digging into the past.

    I’m just wondering what he would say to the question “How would you describe the relationship of 1.414213562373095 to the square root of 2?”

  23. aleta:

    I’m not interested in giving jockdna a hard time or digging into the past.

    Well, Jock hasn’t said anything in this thread about whether he believes that two numbers can be approximately equal, so I had to dig into the old thread to confirm my recollection. If he’s changed his mind since then — which would be a good thing — I’m hoping he’ll tell us.

    I’m just wondering what he would say to the question “How would you describe the relationship of 1.414213562373095 to the square root of 2?”

    He can speak for himself, but I’ll note that if he’s ruled out “approximately equal”, he can instead use something like “less than .000000000000005% apart”. The two numbers relate to each other in infinitely many ways, but the magnitude of the difference is the one that seems most germane to this discussion.
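
    For anyone who wants to check that figure, here is a quick Decimal computation (the precision setting of 40 digits is an arbitrary choice, comfortably more than needed):

    from decimal import Decimal, getcontext

    getcontext().prec = 40
    sqrt2 = Decimal(2).sqrt()
    approx = Decimal("1.414213562373095")

    print((sqrt2 - approx) / sqrt2 * 100)  # about 3.5E-15 percent, i.e. well under .000000000000005%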

  24. aleta: This seems like an unreasonable definition/meaning for the word approximate.

    The square root of two is an infinite decimal that starts 1.414213562373095…

    It doesn’t make sense to not be able to say that 1.414213562373095 is approximately equal to the square root of 2. What would one say to describe the relationship of 1.414213562373095 to the square root of 2?

    Note: we are just talking about pure math here, not measurement.

    Careful. The moment you truncated that infinite decimal expansion you moved out of the world of pure math and into the world of applied math. Welcome to flintjock world! This is where all the action is. 😉

    aleta: I wonder what jockdna would say to my question: “What would one say to describe the relationship of 1.414213562373095 to the square root of 2?”

    Well, I would say that 1.414213562373095 is a reasonable approximation for the square root of two, since the implied precision of 1.414213562373095 encompasses the value of the square root of two. Poor keiths is stuck in a world of infinite precision, where 1,414213562373095 is identical to 1,41421356237309500000000000000…
    and approximations don’t exist. He still refuses to answer the “Is A,B,C a sound argument?” question.
    I summed up my view here.

  25. I really wish I could have a conversation about this without being caught in the middle of the conflict between the two of you.

    But thanks for replying, jockdna.

    You say, “Careful. The moment you truncated that infinite decimal expansion you moved out of the world of pure math and into the world of applied math.”

    Why? 1.414213562373095, which technically is 1.4142135623730950000000000000…, is a real number in the world of pure mathematics. It’s not being applied to anything in the real world, or the measurement of anything.

    Can you explain why 1.414213562373095 “moved out of the world of pure math and into the world of applied math”?

    You go on to say “Well, I would say that 1.414213562373095 is a reasonable approximation for the square root of two,” which is exactly what I would say. But I would/could say that as a matter of pure mathematics. I don’t understand why measurement and measurement errors in the real world have to be part of the discussion.

  26. Jock:

    He still refuses to answer the “Is A,B,C a sound argument?” question.

    In effect, you’ve given me a script and are demanding that I recite the lines you’ve written for me. But I’ve already made my argument in my own words, as you yourself have confirmed. If it’s unsound, you should be able to refute it as I have written it. My words, not yours. Have at it.

    If you can’t make your case without my assistance, then you don’t have a case.

  27. Jock, to aleta:

    Careful. The moment you truncated that infinite decimal expansion you moved out of the world of pure math and into the world of applied math.

    Come on, Jock. If that were true, no one doing pure math could write things like “3.2 + 0.1 = 3.3”, since those expansions have all been truncated.

    Poor keiths is stuck in a world of infinite precision, where 1,414213562373095 is identical to 1,41421356237309500000000000000…
    and approximations don’t exist.

    Of course they exist, and I’ve explained many times how it’s possible for two infinitely precise numbers to be approximately equal. It’s trivial. Can you identify a flaw in my logic?

    Here’s one of my explanations, for your convenience:

    The key concept to understand is that two exact numbers can be approximately equal to each other. Consider:

    1) π is an exact number with an infinite decimal expansion.

    2) 22 is an exact number with an infinite decimal expansion (and all of the digits to the right of the decimal point are zeros).

    3) 7 is an exact number with an infinite decimal expansion (and it too has all zeros to the right of the decimal point).

    4) The number 22/7 is an exact number with an infinite decimal expansion. No surprise there, since 22 and 7 are also exact numbers.

    5) 22/7 is a well-known approximation of π. In other words, the exact number 22/7 is approximately equal to the exact number π.

    I’ve also shown you that appending zeroes to the fractional part of a decimal representation does not change the underlying number. 3.0 really is equal to 3.00, as most grade-schoolers could tell you:

    3.0 =
    3 x 10^0 +
    0 x 10^-1

    3.00 =
    3 x 10^0 +
    0 x 10^-1 +
    0 x 10^-2

    The only difference is the “0 x 10^-2” term. That term is equal to zero. Adding zero to a number leaves the number unchanged. Therefore 3.0 is equal to 3.00. It’s obvious.

    I made the same argument in the old thread, and you dodged it for weeks. It isn’t hard to see why.

  28. aleta: Can you explain why 1.414213562373095 “moved out of the world of pure math and into the world of applied math”?

    It is your decision to truncate it that matters. You chose to truncate it in a way that (hopefully) still retains sufficient precision for the purpose you intend to put it to. I have often used 1.414 because I can remember it, but other times I will go to 15 or 16SF, if I need better precision. Suddenly, context and provenance matter.

    You go on to say “Well, I would say that 1.414213562373095 is a reasonable approximation for the square root of two,” which is exactly what I would say. But I would/could say that as a matter of pure mathematics. I don’t understand why measurement and measurement errors in the real world have to be part of the discussion.

    I guess I have a more restrictive definition of “pure math” than you do — it is a subject of debate — so I do not see the concept of approximation as being coherent in (my) pure math. By way of illustration, we can model the motion of a pendulum as sinusoidal, but to me that’s applied math. At large angles of deflection, it is wrong.
    My favorite example is the Apollo moonshot. They ignored relativistic effects, because they had figured out they could.

  29. Jock:

    Well, I would say that 1.414213562373095 is a reasonable approximation for the square root of two, since the implied precision of 1.414213562373095 encompasses the value of the square root of two.

    The number 1.414213562373095 is infinitely precise, as aleta and I keep reminding you. The notion that you’d have to append “000…” to it in order to exactify it is ridiculous. Also, it isn’t the result of a measurement, nor is it derived from one, so even by your own standards, there’s no reason to treat it as anything but exact.

  30. Jock, to aleta:

    It is your decision to truncate it that matters. You chose to truncate it in a way that (hopefully) still retains sufficient precision for the purpose you intend to put it to.

    It isn’t intended for any purpose. As aleta specifically told you, it’s pure math, not applied math. So even by your own lights, the number is infinitely precise.

    That puts you back in the ridiculous position of claiming that 1.414213562373095 and 1.414213562373095… — the square root of 2 — are not approximately equal, because there are infinitely many numbers between them.

  31. You write, jockdna, “It is your decision to truncate it that matters. You chose to truncate it in a way that (hopefully) still retains sufficient precision for the purpose you intend to put it to. … I guess I have a more restrictive definition of “pure math” than you do — it is a subject of debate — so I do not see the concept of approximation as being coherent in (my) pure math.”

    I agree, you seem to have a more restrictive definition of pure math than I do. It seems that, to you (I’m trying to describe what I think you think), the mere act of comparing two numbers displays an intention that moves the act from pure math to applied math, irrespective of whether there is any real-world purpose involved.

    So here’s another example. Let’s say I just randomly typed a bunch of numbers, like 1.732051, and then my friend came along and said, “Whoa, that’s approximately the square root of 3.” There was no intention in my random typing. There was no purpose in stopping at six decimal places.

    What would you say to my friend? Is he wrong to make his observation using the word “approximately”, or does the mere act of comparing the two numbers mean he is using applied math, not pure math?

    In fact, for you, can we do anything at all with pure math without moving into applied math? Is pure math what exists apart from any human interaction with it?

    Can you describe what you would say to my friend, and would you explain what you think pure math is, and/or give an example of someone doing pure math?

  32. aleta: What would you say to my friend? Is he wrong to make his observation using the word “approximately”,

    I would say “Wow! That’s weird – that is the square root of 3 to 7SF! What are the chances!”
    I would not criticize his colloquial use of the word “approximately”, because we all have expectations that are based on the numbers that we come across in everyday life. But, absent any context whatsoever, I don’t see how “approximately” makes any sense.

    …or does the mere act of comparing the two numbers mean he is using applied math, not pure math?

    No. Within pure math, I can compare A and B, with three possible outcomes: A > B, A = B, A < B.
    A ~= B, not so much.

    aleta: In fact, for you, can we do anything at all with pure math without moving into applied math? Is pure math what exists apart from any human interaction with it?

    Well, there are a lot of mathematicians who define it that way. G.H. Hardy comes to mind. I’m not quite that strict: for me, pure math is exact math.

    Can you describe what you would say to my friend, and would you explain what you think pure math is, and/or give an example of someone doing pure math?

    Pure math: integrate y = x^2 over the interval 1 to 10
    Also pure math: find, and report, the roots of the equation
    x^5 – 4.x^3 – x^2 – 4 = 0
    Applied math: find, and report, the roots of the equation
    x^5 – 4.x^3 – x^2 – 3 = 0
    there’s going to be some truncation involved…

  33. Jock,

    Even if you were right that any number lacking an ellipsis was inexact, which is clearly false, your criterion for “approximately equal” would still fail.

    Consider the following two numbers. Let’s stipulate that they are inexact “measurement-derived reals”:

    x = 5.7935548
    y = 5.7935549

    Each MDR has 7 digits to the right of the decimal point, so the implied precision is the same, meaning that the distributions are the same size. But those distributions are centered at different values. Neither one encompasses the other, so according to your criterion the MDRs are not approximately equal.

    In fact, no two MDRs that have the same number of digits to the right of the decimal point can ever be approximately equal, according to your criterion, unless all of the digits match. They can fall within smaller and smaller windows, far surpassing any “close enough” requirement, yet never be approximately equal according to your standard.

    You can keep adding digits and bringing the numbers closer together, but no matter how close they get, they will never be approximately equal unless all of the digits are identical.

    Therefore 3.00 and 3.01 aren’t approximately equal, and neither are 3.00000000 and 3.00000001 despite being a million times closer together.

    Your criterion is broken.
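
    Here is a minimal Python sketch of that failure, under the assumption that “implied precision” means plus or minus half a unit in the last reported digit (an assumption on my part, since the criterion has never been pinned down more precisely than that).

    x, y = 5.7935548, 5.7935549
    half_ulp = 0.5e-7  # half a unit in the seventh decimal place

    x_window = (x - half_ulp, x + half_ulp)
    y_window = (y - half_ulp, y + half_ulp)

    print(x_window[0] <= y <= x_window[1])  # False: x's window does not encompass y
    print(y_window[0] <= x <= y_window[1])  # False: y's window does not encompass x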

  34. jockdna, “I would not criticize his colloquial use of the word “approximately”, because we all have expectations that are based on the numbers that we come across in everyday life. But, absent any context whatsoever, I don’t see how “approximately” makes any sense.”

    Well, that’s a clear statement, but I think pure mathematics has ways of dealing with the subject of approximation that are more than just colloquial. For instance, Newton’s method for finding the roots of a polynomial homes in on a value until a certain interval of precision is reached, at which point we have an approximation of the actual root. This is pure math.

    But in the second polynomial in your second example (find, and report, the roots of the equation x^5 – 4.x^3 – x^2 – 3 = 0) you would call that applied math. This is quite clarifying. For you, any time we report a number that can’t be written out because it is irrational we are doing applied math even though this has nothing to do with measurement or the real world.
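
    To make that concrete, here is a minimal Newton’s-method sketch for that second equation. The starting guess and the stopping tolerance are arbitrary choices of mine; the point is that the process stops at a chosen precision and reports an approximation of the root.

    def f(x):
        return x**5 - 4*x**3 - x**2 - 3

    def f_prime(x):
        return 5*x**4 - 12*x**2 - 2*x

    x = 2.5  # arbitrary starting guess
    for _ in range(50):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < 1e-10:  # "good enough, I'll stop here"
            break

    print(x)     # an approximation of the real root near 2.18
    print(f(x))  # tiny residual, but the reported x is still an approximation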

    That certainly is not what I think applied math is, and I don’t think the distinction you want to make is useful, but I think I’ve heard enough to understand a bit of the position you are trying to defend. I’m too much of a practical pure mathematician to be as pure as you want to be on this subject.

    I think I probably don’t have more to say (but that often proves wrong.) 🙂

  35. P.S., jockdna: Assuming the little dot between 4 and x^3 is a typo, I don’t believe x^5 – 4.x^3 – x^2 – 4 = 0 has any rational roots or irrational roots that can be found exactly, and thus reported, so it isn’t an example of pure math according to your criteria. x^5 – 4.x^3 – x^2 – 3 = 0 is no different.

    Am I correct? Or did you intend x^5 – 4.x^3 – x^2 – 4 = 0 to have rational roots?

  36. aleta, to Jock:

    That certainly is not what I think applied math is, and I don’t think the distinction you want to make is useful, but I think I’ve heard enough to understand a bit of the position you are trying to defend. I’m too much of a practical pure mathematician to be as pure as you want to be on this subject.

    I don’t think this is about purity for Jock, so don’t put too much effort into ferreting out his philosophy of pure math. I suspect that his odd position on the inexactness of 1.414213562373095, a number that you and I both know to be exact, stems not from philosophy but rather from a desire to cover up some awkwardness in his position.

    Two odd things happened today:

    1. As of just a couple days ago, Jock was perfectly fine with the idea that exact numbers retain their exactness when their decimal expansions are truncated. He wrote:

    It [the so-called “Germanic notation”] turned out to be quite helpful when the topic of conversation is whether the infinitely-precise-real 2,7 (of pure mathematics) behaves in the same manner as the measurement-derived-real 2.7 (of applied math). By way of example, 2,7 – 2,7 = 0,0 always, but 2.7 – 2.7 is going to have an error distribution.

    Recall that for him, “2,7” is the number the rest of us refer to as “2.7”. In that quote, he acknowledges that 2,7 is both infinitely precise (and therefore exact) and part of pure math. He also subtracts it from itself, yielding 0, which is further evidence that he regards it as single-valued and therefore exact.

    But “2,7” is a truncated decimal expansion. So in that quote, he is confirming that “2,7”, despite being truncated, remains both exact and part of pure math.

    Today, out of nowhere, that got flipped on its head. Now truncated decimal expansions are to be expelled from pure math and into applied math, where they can be inexact:

    The moment you truncated that infinite decimal expansion you moved out of the world of pure math and into the world of applied math.

    And then the precision of the no-longer-exact number is determined by the point at which the decimal expansion is truncated:

    It is your decision to truncate it that matters. You chose to truncate it in a way that (hopefully) still retains sufficient precision for the purpose you intend to put it to.

    2. There are only two categories of real numbers in the world of JockMath — “infinite-precision reals”, which are exact, and “measurement-derived reals”, which are inexact. That means that these new truncated numbers, being inexact, must go into the “measurement-derived real” category. But they aren’t measurement-derived at all. They’re simply infinite-precision reals whose expansions have been truncated.

    Amusingly, that means that we now have a subset of “measurement-derived real numbers” that aren’t measurement-derived, aren’t real, and aren’t numbers. The name is 100% wrong for them.

    But as the name suggests, all of the measurement-derived reals are supposed to be measurement-derived, and Jock has said as much in earlier comments. So besides the weird new truncation rule, we also have this weird new oxymoronic phenomenon of non-measurement-derived measurement-derived reals.

    In my next comment, I’ll explain what I think is behind all of this sudden weirdness.

  37. Why all of this weirdness all of a sudden? I think it’s because Jock was in an uncomfortable position. You were pointing out to him that it makes no sense to deny that 1.414213562373095 is approximately equal to √2. It is silly, but Jock had already committed himself to that position by denying that any two exact numbers could ever be approximately equal.

    The way out of this awkward situation was for Jock to find an excuse for saying that 1.414213562373095 actually isn’t exact. Then he could apply his “encompassing” criterion (which turns out to be fatally flawed), wherein the distribution associated with the newly inexact number would encompass √2, thus allowing him to declare that the two numbers were approximately equal, after all. Which he did.

    He needed the weird new rules in order to get 1.414213562373095 out of the exact “pure math” category and into the “applied math” category, within which it would become inexact with a sufficiently wide distribution.

    He still hadn’t explained why truncation should push a number out of the “pure math” category and into the “applied math” category, so you asked him about that. He is now struggling to come up with a philosophy of pure math that will plausibly allow him to justify the expulsion of the truncated numbers.

    He will probably want to dispute what I have written here, which is fine. My challenge to him is for him to offer a plausible alternate explanation of why he suddenly did a 180 on the issue of truncated numbers remaining in the realm of pure math while retaining their exactness, and why there are now “measurement-derived reals” that aren’t actually measurement-derived at all.

  38. I should add that there are many scenarios showing that it’s silly to argue that truncated numbers must become inexact within the domain of applied math. Here’s one:

    You’re doing some carpentry, and you need to convert a measurement from feet to inches. How do you do it? You multiply the measurement by the truncated number 12. Is it exact? Yes, of course. There are exactly 12 inches in a foot. It would make no sense to use an inexact number. The truncated 12 stays exact, even though it is in the domain of applied math.

    But if the truncated number 12 remains exact while in the domain of applied math, then so can the truncated number 1.414213562373095. Which puts Jock back in the ridiculous position of saying that 1.414213562373095 is not approximately equal to √2, since for him, no two exact numbers can ever be approximately equal.

  39. keiths: Your criterion is broken.

    Naah

    keiths: But “2,7” is a truncated decimal expansion.

    What a silly thing to write. A “truncated decimal” is a decimal that has been truncated. It refers to the provenance of the number, not its less than infinite length. We could equally well have been talking about “rounded” numbers, but sometimes people round down when they should not. Here’s the full paragraph that you excerpted:

    It is your decision to truncate it that matters. You chose to truncate it in a way that (hopefully) still retains sufficient precision for the purpose you intend to put it to. I have often used 1.414 because I can remember it, but other times I will go to 15 or 16SF, if I need better precision. Suddenly, context and provenance matter.

    that you can read that paragraph and then maintain that, in “one foot = 12 inches”, the 12 is a truncated number tells me all I need to know.

  40. In my first post about the square root of 2 I did not truncate anything. I wrote down another exact real number whose first 15 digits or so were the same as the square root of 2, and omitted all the trailing zeros.

    It is you who said I had truncated.

  41. aleta:

    I did not truncate anything.

    In the weird, weird world of JockMath, omitting the trailing zeroes actually qualifies as truncation, and it moves you out of the land of pure math and exact values and into the land of applied math and finite precision:

    Poor keiths is stuck in a world of infinite precision, where 1,414213562373095 is identical to 1,41421356237309500000000000000…
    and approximations don’t exist.

    Jock tends to make things up as he goes in order to get himself out of whatever current bind he’s stumbled himself into. The problem with that, besides the fact that he doesn’t announce or acknowledge that he’s making stuff up, is that he doesn’t think things through. His spur-of-the-moment changes backfire on him down the road.

    To any mathematically literate person, dropping the trailing zeros doesn’t change the number. (Neither does dropping the leading zeros.) They weren’t contributing anything to the value anyway. 34.000000… = 34.000 = 34.0 = 34. To Jock, that equation is false, and changing the number of trailing zeros actually creates a different number.

    What he can’t grasp, even after months, is that changing the number of trailing (or leading) zeros is merely a change in the representation, not a change in the underlying number itself.

    He is confused by the fact that the number of trailing zeros can give a hint about the precision of a measurement such as “5.00 inches”, but of course the number itself — the 5.00 — is no different from 5.000… or 5.000 or 5.0 or 5. They’re all the same number, and it’s infinitely precise.

  42. Jock:

    …that you can read that paragraph and then maintain that, in “one foot = 12 inches”, the 12 is a truncated number tells me all I need to know.

    By your weird rules, it is a truncated number, since the decimal point and trailing zeros are absent. In JockMath, you can multiply feet by 12 to get inches, but you can’t multiply by 12.0, since the latter is a different number.

    It’s inane.

    Math-savvy folks know that 12 and 12.0 are the same number.

    The more you make stuff up on the fly, the more blocks you create for yourself to stumble over.

  43. aleta: The square root of two is an infinite decimal that starts 1.414213562373095…

    It doesn’t make sense to not be able to say that 1.414213562373095 is approximately equal to the square root of 2.

    That looks like truncation to me…
    It does not matter whether you know the next decimal and decided not to write it down, or you do not know the next decimal, in either case you decided to stop there, knowing full well that the number you recorded was not equal to the square root of two. That’s truncation; you decided that 16SF was fit-for-purpose. Similarly, when doing a Newton-Raphson, you get to some level of precision and decide “That’s good enough, I’ll stop here”.
    Normal people occupy the world of sufficient precision.

  44. keiths:
    Math-savvy folks know that 12 and 12.0 are the same number.

    Actually, math-savvy people know that these two representations are useful for two different categories of purposes. One is a count, one is a truncation. Counts are not approximate, and it’s necessary that they not be approximate. Truncation implies that a precision suitable for the purpose has been obtained. So 12.0 won’t be suitable if precision higher than the nearest tenth is required. Using the integer 12 to imply precision to the nearest tenth is a category error.

  45. Yesterday, I wrote:

    I’ve also shown you that appending zeroes to the fractional part of a decimal representation does not change the underlying number. 3.0 really is equal to 3.00, as most grade-schoolers could tell you:

    3.0 =
    3 x 10^0 +
    0 x 10^-1

    3.00 =
    3 x 10^0 +
    0 x 10^-1 +
    0 x 10^-2

    The only difference is the “0 x 10^-2” term. That term is equal to zero. Adding zero to a number leaves the number unchanged. Therefore 3.0 is equal to 3.00. It’s obvious.

    I first made that argument in January. I have made it a bunch of times since then. I have repeatedly asked Flint and Jock to address it. Instead, they have just quietly ignored it.

    If either of them had seen a flaw in it, they would have pounced immediately. Instead, it’s been weeks and months of crickets. That tells you all you need to know.

    (petrushka, this is what I was talking about. The reason you’ll hear me repeating my arguments is because I can’t get these guys to address them, even when they get at the very core of what is being disputed. If you’re tired of hearing the same arguments again and again, urge Flint and Jock to address them rather than running away.)

  46. Flint,

    For the umpteenth time, numbers are distinct from their representations.

    “12” and “12.0” are just two ways of representing the same underlying number, which has the value

    1 x 10^1 +
    2 x 10^0

    See the argument just above, which you’ve been avoiding since January.

    To convert feet to inches, you can multiply by 12, 12.0, 12.0000, 12.0000…, 012, 00012.00, 1100 (in binary), and so on. Those are all just different representations of the same number.
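
    A quick check that those are all the same number (the ellipsis form is omitted only because a parser won’t accept it; mathematically it denotes the same value):

    representations = ["12", "12.0", "12.0000", "012", "00012.00"]
    print(all(float(s) == 12 for s in representations))  # True
    print(int("1100", 2) == 12)                          # True: binary 1100 is twelve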

  47. Jock, to aleta:

    That looks like truncation to me…
    It does not matter whether you know the next decimal and decided not to write it down, or you do not know the next decimal, in either case you decided to stop there, knowing full well that the number you recorded was not equal to the square root of two. That’s truncation; you decided that 16SF was fit-for-purpose.

    You just created another stumbling block for yourself.

    Suppose that aleta had gotten the number 1.414213562373095 from a random number generator rather than by copying the leading digits of √2. By your criteria, it would not be a truncated number, it would therefore remain exact, and it would not be approximately equal to √2.

    That means that the following nonsensical statement is true in JockWorld:

    1.414213562373095 is approximately equal to √2, and 1.414213562373095 is not approximately equal to √2.

    Keep going, Jock. You’re doing great.

  48. Yes, it appears that intention, a subjective judgment, is the distinction between 1.414 being a truncation and just another exact real number.

    Suppose I buy a cup of coffee for $3.14, with tax. Is that a truncation because it’s the first three digits of pi? I wouldn’t think so.

    So if I just see a number sitting there to some number of decimal places, such as 4.762352375, how do I know whether it is a truncation or just exactly what it is?

  49. aleta:

    So if I just see a number sitting there to some number of decimal places, such as 4.762352375, how do I know whether it is a truncation or just exactly what it is?

    You don’t, of course. So if you ask Jock…

    Is 4.762352375 approximately equal to 4.7623?

    …the only answer he can honestly give, assuming he actually believes the weird rules he keeps inventing, is “I don’t know.”

    If both numbers are truncations, they are inexact, and the distribution associated with the second number encompasses the distribution associated with the first. That makes them approximately equal according to Jock’s “encompassing” rule (which is broken anyway, for other reasons).

    If neither number is a truncation, they both remain exact, and Jock’s rules require him to say that they are not approximately equal.

    If the first number is a truncation, but the second one isn’t, then the two numbers aren’t approximately equal. That’s because the distribution associated with the first does not encompass the value of the second.

    If the first number isn’t a truncation, but the second one is, then the two numbers are approximately equal, since the distribution associated with the second number encompasses the value of the first.

    All four of those are possible, because Jock doesn’t know the truncation status of either number. He therefore has no idea whether the numbers are approximately equal. The only answer he can give is “I don’t know.”

    It’s a trainwreck.
