Sometimes very active discussions about peripheral issues overwhelm a thread, so this is a permanent home for those conversations.
I’ve opened a new “Sandbox” thread as a post, since the new “ignore commenter” plug-in only works on threads started as posts.
And I’ll make my usual point that “two and a half men” was intended to be funny. Apparently you don’t get the joke.
Flint:
I get the joke, but the fact that even a small kid counts as one person doesn’t make your case for you.
To argue, as I do, that 1 and 1.0 are the same number does not imply that there are fractional people.
As I put it in an earlier comment, if there are 2 people in a room and another person enters, there are now 3 people in the room. If there are 2.0 people in a room and another person enters, there are now 3.0 people in the room.
The second statement is weird and redundant, because we all know that there aren’t fractional people, but the statement is nevertheless true. There is nothing wrong with the math or the logic.
2 really is the same number as 2.0, 0035 really is the same number as 35, 69.7 is the same number as …00069.7000…, and so on. My argument demonstrates it, and you haven’t refuted it.
Flint,
I’ve presented an argument showing that 12 and 12.0 are the same number. If I am wrong, and you are right, there must be an error in my argument. There’s no other way. There has to be an error (or errors). If there isn’t an error, then I am right about 12 and 12.0.
What is the error? Quote the part of my argument that you think is flawed, and explain why.
You can talk all you want about all these other things, but the bottom line is this: If I’m wrong, there has to be an error in my argument. If there isn’t an error in my argument, then I’m right.
You claim I’m wrong. Where, specifically, does my argument go off the rails?
Flint:
I can help you untangle this if you’ll actually listen to me.
Let’s say we have a metal rod that, unbeknownst to us, is exactly 4.95823401… inches long. That’s its true length, but we don’t know that. Let’s say that nobody knows the true length except for an omniscient God.
We stick the rod into our Meas-o-matic and the display reads “4.958 in”. It’s a measurement, so we know that it isn’t perfectly accurate. It’s inexact. We know that the true length isn’t 4.958 inches. There’s an error, so the true length is something other than 4.958 inches. God happens to know that the true length is 4.95823401… inches, but we don’t. All we know is that the true length is not 4.958 inches.
Ask yourself, in this example, what it is about the measurement “4.958 inches” that makes it an inexact measurement. The answer is that it’s inexact because our measurement, 4.958 inches, differs from the true length, 4.95823401… inches. That’s all that’s required. If the measured length differs from the true length, then the measurement is inexact. And since the measured length will always be different from the true length, the measurement has to be inexact. It can never be exact.
For the measurement to qualify as inexact, the only condition is that the measured length has to differ from the true length.
Note that nowhere in that statement does it require that the number 4.958 itself be inexact, nor that the number 4.95823401… be inexact. As long as 4.958 inches is different from 4.95823401… inches, the measurement is inexact. They just have to differ.
So far we’ve been talking about a measurement. Now let’s consider what the conditions are for a number to be exact or inexact. This is a different ballgame. For a number to be exact, it has to have one and only one value. It has to be equal to itself and to no other number. It has to be infinitely precise. For a number not to be exact, it has to have more than one value, which is what you and Jock claim for the MDRs.
Note that the conditions for a number to be exact or inexact are completely different from the conditions for a measurement to be exact or inexact. For a measurement to be inexact, the measured value has to differ from the true value. For a number to be inexact, it has to have more than one value. It can’t be infinitely precise.
Now ask yourself, is it possible for a measurement to be inexact while at the same time the numbers themselves are both exact? The answer is yes. As long as 4.958 inches is different from 4.95823401… inches, the measurement is inexact, and that remains true even if the numbers 4.958 and 4.95823401… are both exact.
I’ll pause here to see if you’re with me so far.
keiths:
Flint:
Don’t forget that for me (and aleta, and mathematicians), 0.1 and 0.10000… are the same number. Each time you tack a zero on the end, you’re adding zero to the number, and adding zero to a number leaves it unchanged.
0.1 = 0.10 = 0.1000000 = 000.1 = 0000.1000…
That’s why 12 and 12.0 are the same number. Tacking a decimal point and a zero onto the end of 12 is the same thing as adding zero to it. The number is still 12, and that’s what my argument demonstrates.
Every real number, without exception, has at least one infinite decimal expansion. It’s just that we can often replace the infinite expansion with a finite expansion by taking advantage of the fact that all of the digits beyond a certain point are zero.
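Here’s a minimal check of the trailing-zeros point in Python (the decimal module is used just so binary floating-point quirks don’t distract from the arithmetic):

```python
from decimal import Decimal

# Appending trailing zeros (or prepending leading zeros) adds zero to the number,
# so the value is unchanged.
assert Decimal("0.1") == Decimal("0.10") == Decimal("0.1000000")
assert Decimal("000.1") == Decimal("0.1")
assert Decimal("12") == Decimal("12.0")

# The written representations differ (different exponents), but the number is the same.
print(Decimal("12").as_tuple())    # DecimalTuple(sign=0, digits=(1, 2), exponent=0)
print(Decimal("12.0").as_tuple())  # DecimalTuple(sign=0, digits=(1, 2, 0), exponent=-1)
```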
I showed above that a measurement can be inexact even if the number used to express it is exact. In terms of the specific example I used:
1. The true length is 4.95823401… inches.
2. The number 4.95823401… is exact.
3. The measured length is 4.958 inches.
4. The number 4.958 is exact.
5. The measurement is inexact because the measured length (4.958 inches) differs from the true length (4.95823401… inches).
6. Thus the measurement “4.958 inches” is inexact despite the fact that the number 4.958 is exact.
Once you see it, it’s pretty obvious, but F&J’s intuition tells them that this can’t be right. In their view, the 4.958 cannot be exact, because that would amount to claiming that the measurement is exact. I hope they will study this argument, because it’s quite straightforward and it shows that their intuition is wrong. The inexactness of the measurement does not require that the number be inexact.
The argument is straightforward and sound, and hopefully F&J will be able to see that, but there is another objection that they might still wish to raise. “Your argument correctly shows that the number 4.958 isn’t required to be inexact,” they might say, “but it still is inexact, because it is the result of a measurement.”
To see why that’s wrong, consider the Meas-o-matic. In our scenario, the Meas-o-matic is accurate to the nearest thousandth of an inch, meaning that it always displays three digits to the right of the decimal point. Let’s stipulate that the maximum reading is 9.999 inches. That means that every reading will be of the form “d.ddd inches”, where each d represents a digit.
Note that the Meas-o-matic can display certain numbers but not others. The readout can be “7.220”, but it can never be “7.22037”. Why? Because there physically aren’t enough digits to display the second number.
So we stick our rod into the Meas-o-matic and get a reading of “4.958 inches”, and we write that down. Do we write “4.959”? No. Do we write “4.957”? No. Do we write “4.95823”? No. The number we write down is exactly 4.958. Does this mean that the measurement is exact? No, of course not. The measurement we write down is not equal to the true length.
We write down the exact number “4.958”, yet the measurement “4.958 inches” is inexact. No contradiction, no dishonesty, nothing problematic. It all fits together perfectly and consistently.
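If it helps, here’s a minimal sketch of the scenario in Python. The “true length” below is just a truncated stand-in; the actual true length in the example (4.95823401…) continues indefinitely and is known only to the omniscient observer.

```python
from decimal import Decimal

# Truncated stand-in for the true length (the example's 4.95823401... continues forever).
true_length = Decimal("4.95823401")

# The Meas-o-matic reports readings in the form d.ddd, i.e. rounded to the nearest thousandth.
reading = true_length.quantize(Decimal("0.001"))

# The number we write down is exactly 4.958: it equals itself and no other number.
assert reading == Decimal("4.958")

# Yet the measurement is inexact, because the measured length differs from the true length.
assert reading != true_length
```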
F&J’s intuition is wrong. We don’t need inexact numbers — the MDRs — in order to express inexact measurements. The MDRs are a solution in search of a problem.
There is no problem, so the MDRs aren’t needed. They’re useless.
Keith, I hope you’ll forgive my presumption, but I think you are being needlessly repetitive. You’ve made your point quite clearly multiple times, I think, and you aren’t going to have any impact saying the same things again.
I think the meta-issue behind this impasse is a difference in metaphysical visions about the relationship between theoretical and applied math, and about the nature of theoretical math as it is turned into models for real-world application. There are probably also issues about infinity, as I gather from reading about Wildberger.
As you know, I agree with you. When we write down a measurement, we write down a number whose theoretical referent is an exact real number, although all knowledgeable people know the measurement is not exact even though the number we are using to describe it is. We also know there is some error interval around the measurement in which the true value most likely lies. And we have a good practical understanding of that error interval and of the practical consequences of accepting our measured value.
As a bystander who knew nothing about your previous arguments, it’s obvious to me that some pretty big defensiveness and personal antagonisms have gotten established. My totally unasked-for advice, and I again apologize for butting in, is for you to consider your points well established and well stated, at their core, and to let the past go.
Sincerely,
Aleta
aleta,
There’s no question that I’ve been repetitive, and I agree with you that I’ve been quite clear, but this isn’t something that I’m doing merely for kicks. I suspect that there are a lot of competent readers who either came into this discussion already knowing what you and I know, or who got the point right away, and the repetition won’t be edifying for them (though they might find the discussion interesting or entertaining for other reasons).
However, Flint and Jock at least claim not to have gotten the point, and while I’m not certain about Jock, Flint strikes me as sincere, as when he wrote this just yesterday:
That indicates genuine confusion, and I believe Flint has the cognitive chops to overcome that confusion if he is willing to ponder my arguments and walk through them step-by-step. (Ditto for Jock, assuming that his disagreement is genuine.) That may never happen, but so far Flint has at least been willing to continue the discussion, and I haven’t yet concluded that the situation is hopeless. At some point I will give up — and in fact I had given up months ago when I let the subject drop, before Flint resurrected it recently.
The fact that we disagree seems to bother Flint, as he periodically takes little digs at what he perceives as my intransigence, even in totally unrelated threads. The very reason we are having this discussion today is that Flint resurrected the topic by posting this in walto’s unrelated thread “A Natural After-Life”:
He is bothered that I continue holding a belief (albeit one he has misconstrued) that he considers patently false, enough so that he brings the topic up repeatedly. (I ran across another example just today when I dipped into the Rationalizing Hell thread for other reasons.) It clearly bothers him.
I’ve been willing to explain to him why my position is correct, and I continue to be willing (up to a point), and that’s why I have been participating in this discussion.
Please ignore the discussion if you don’t find it edifying, interesting, or entertaining. Life is too short to spend time reading blog discussions that are of no value to you, especially when there’s so much else you could be doing instead. The power (and the scroll wheel) is in your hands.
Speaking strictly for myself, I find the discussion worthwhile for a number of reasons. I subscribe to the adage “If you really want to understand something, teach it”. The discipline of arguing for my position has made me aware of subtleties that I might otherwise have missed. It’s also been good practice at expressing myself clearly, and I’ve found that blog discussions in general have noticeably improved my writing over the years. It’s also recreation, in the sense that debating is a sport. It’s fun to spar with these guys. Lastly, it’s entertainment. I’ve gotten some good laughs during the discussion.
So again, please don’t waste any time following the discussion if you don’t find it worthwhile. Life is just too short. For my part, I am willing to continue, at least for now.
aleta:
There are disagreements in both areas, but they aren’t merely a matter of starting from different philosophical presuppositions. For example, Jock’s belief that exact numbers cannot work in applied math (a belief that Flint shares, as far as I can tell) is provably false. He can always stipulate that “I forbid the use of exact numbers in applied JockMath”, but he claims more than that. He believes that a system of applied math that allows exact numbers is actually broken, which is incorrect, as both you and I have argued.
It’s similar for questions about infinity. The use of infinitely precise numbers in applied math doesn’t create problems, contra F&J. What would create problems is the assumption that measurements are exact, which fortunately seems to be a position that is rarely, if ever, taken.
Mathematics is all about axioms, rules, and definitions, and anyone is free to create a system based on their own personal choices for these things, and to tease out the implications of those choices. Non-Euclidean geometry is the classic example of a productive discipline within mathematics that was created by denying an axiom that everyone had taken for granted for centuries — namely, the parallel postulate.
F&J are free to adopt axioms, rules, and definitions that forbid the use of exact numbers to express measurements in applied math, but they cannot truthfully say that the prohibition is necessary in order to create a workable system. As you and I both understand.
I have no problem with this as it applies to measurements. I have been trying to say that by convention, an integer is a different data type from a measurement. In my philosophy, an integer simply is not a truncated (unexpanded) infinitely exact value. When I work within the set of all integers, keiths-numbers simply are not involved or even implied. Sheesh, even the normal consequences of inexact measurements are different from the normal consequences of wrong counts.
I do not read aleta as addressing this. She says
And I think this is a good insight. Different metaphysical visions don’t mean that one vision or the other is broken. I find it perfectly valid to view a number known to be inexact as having a theoretical referent which is exact, while still realizing that the number in question is not exact.
I notice that aleta, and keiths, carefully refrain from any mention of the entire field of discrete math, both in her explanations and in her classroom lessons.
Except for this little concept called “precision”. In decimal, each time you add another zero, you are saying the value is 10 times more precise. Even in grade school, I learned about significant figures.
I’ll make a deal with you. I will give you ten dollars in exchange for ten thousand dollars. After all, I only added zeroes which “leaves it unchanged”, right?
Beautifully put. Thank you.
The issue is when people forget about the error interval because they “know” that the number is exact.
Now, you might believe that nobody would be that stupid, but that would be an error on your part: meet keiths, who made that very error twice on the ChatGPT thread. Better, for their sake, to treat that number as a distribution. This applies to any decimal representation of a surd, the roots of Q(x), all statistical estimators, etc.
Yes, that is exactly what it implies. When you have two people, you have exactly two people. When you have 2.0 people, this implies that the number of people lies somewhere between 1.95 and 2.05. This is the meaning of the decimal point.
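To make that reading of the decimal point concrete, here is a small sketch (the helper name is mine, purely illustrative): the written precision implies an interval of half a unit in the last reported place.

```python
from decimal import Decimal

def implied_interval(s: str) -> tuple:
    """Interval implied by the written precision of s: half a unit in the last place.
    (Illustrative helper, not an official definition.)"""
    x = Decimal(s)
    half_ulp = Decimal(5) * Decimal(10) ** (x.as_tuple().exponent - 1)
    return (x - half_ulp, x + half_ulp)

print(implied_interval("2.0"))   # (Decimal('1.95'), Decimal('2.05'))
print(implied_interval("2.00"))  # (Decimal('1.995'), Decimal('2.005'))
```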
Flint writes, “I notice that aleta, and keiths, carefully refrain from any mention of the entire field of discrete math, both in her explanations and in her classroom lessons.”
I call baloney! 🙂 My example of the ball bearing is all about discrete math, because only units of a certain minimal size are even feasible. I haven’t mentioned the term “discrete” because I didn’t know that the discrete-versus-continuous distinction was one of the issues here. All measurement involves discrete math. The theoretical foundations to which it refers may be continuous, but in practice only discrete sizes are of practical significance.
But I’ve skipped over a lot of the discussion that goes back to a thread I wasn’t part of, so maybe there is an issue here that I don’t know about.
Thank you jock. However, you write, “Better, for their sake, to treat that number as a distribution.” I think keith pointed out that you mean a range, or an interval. A distribution would be something else, I think. x = a ± e describes an interval, not a distribution.
By the way, did you see my quote by Wildberger?
Thank you, keith, for gracefully responding to my criticism. You’re right: I can obviously choose not to read a post, or a thread.
I disagree, strongly.
and x = a ± ε describes a distribution. keiths thinks that 100% confidence intervals exist; I don’t.
Yes. I didn’t really think that it went anywhere — not dispositive regarding our discussion. Don’t know whether I agree or not. Try his commentary on
The big mathematics divide: between “exact” and “approximate”
for size.
Jock, can you explain your disagreement with the statement “The theoretical foundations to which it refers may be continuous, but in practice only discrete sizes are of practical significance”?
Jock, how does x = a ± ε describe a distribution? See https://en.wikipedia.org/wiki/Probability_distribution.
An interval can be described on a number line, but a distribution requires two dimensions to represent. It describes how many times various things occur within an interval.
If I had 100 people each measure a ball bearing whose nominal size was 5 cm (a big ball bearing), and I found that the measurements ranged from 4.99 to 5.01, I could say the range, or interval, was 5 ± .01. If I knew how many times the measurement was 5.001, 5.002, 5.003, 4.999, 4.998, etc., then I would know something about the distribution.
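To make the distinction concrete, here’s a toy version in Python (the individual measurement values are made up, just for illustration):

```python
from collections import Counter

# Made-up measurements of the nominal 5 cm ball bearing, for illustration only.
measurements = [5.00, 4.99, 5.01, 5.00, 5.00, 4.99, 5.01, 5.00, 4.99, 5.00]

# The interval only needs the two extremes: 5.00 +/- 0.01.
lo, hi = min(measurements), max(measurements)
center, half_width = (lo + hi) / 2, (hi - lo) / 2
print(f"interval: {center:.2f} +/- {half_width:.2f}")   # interval: 5.00 +/- 0.01

# The distribution needs a second dimension: how often each value occurred.
print(Counter(measurements))   # e.g. Counter({5.0: 5, 4.99: 3, 5.01: 2})
```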
In what way are you using the term distribution, and do you distinguish it from an interval?
aleta,
I’m a chemist by training.
The idea that, just because we only ever record values to a finite level of precision (the ‘discrete’ units of our limit of quantitation with this assay), we are somehow oblivious to the practical consequences of getting it wrong due to the continuous nature of the underlying value (practical significance) is wrong.
Or did you mean something else by “only discrete sizes are of practical significance”?
I was referring to measurement, and to flint’s claim that somehow I was ignoring discrete math. Yes, the underlying phenomena may be continuous, and we may take that into account in our theories, but our measurements of the phenomena are necessarily discrete.
A good example is time. We assume time is continuous (let’s not get into esoteric physics), and we have worked really hard to be able to measure the passage of time very accurately, but no matter what, our measurements will have an error interval, and the precision with which we measure will come in discrete steps.
I thought this was the point about measurement that you have been making all along.
aleta,
ε denotes a random variable.
aleta:
Jock:
Jock, a suggestion. You mentioned Wildberger in response to a comment of mine:
You replied:
When you retrieve your Wildberger quotes, please make sure that they are responsive to my question about numerical methods.
I need to run a couple of errands, but I’ll comment further in a few hours.
jock writes, “ε denotes a random variable.”
Hmmm. I had assumed it was like epsilon in that it represented a maximum distance from the nominal value: x = a ± e meant the same as a – e < x < a + e.
Can you explain the meaning of x = a ± e if e stands for a random variable? It would seem that looking at e as a random variable would mean it would assign a probability to each one of the possible elements in the interval, and I don't see how that can apply to just taking a single measurement of something.
Can you explain more. Start fresh and assume I know nothing about discussions on past threads, please.
You could open a new thread as a fresh start. (You have the relevant permissions.) LaTeX is also enabled here. Use dollar signs to enclose LaTeX within text and double dollar signs for a separate line.
For example, “I say €2+2=4€ and €€1+1=2€€” becomes rendered math (“2+2=4” inline within the sentence, and “1+1=2” on its own line) when you switch the euro signs to dollar signs.
LaTeX is enabled by default. Use of dollar signs in other contexts can produce odd results.
I owe you
aleta, to keiths:
Flint:
Fantastic! You just agreed with what aleta said, which is what I have been saying since this discussion began back in January. We use exact numbers to express inexact measurements!
Do you mean it, or did you only agree because you were confused?
In response to the same quote from aleta, Jock writes:
Doubly fantastic! Jock, too, now agrees with me. However, he and Flint may both have been confused about what they were agreeing to. We’ll see.
This interval business started way back in January when Jock wrote:
LOL at the first two sentences, but the “true or false” question is the one we’re interested in here.
I responded:
Jock replied:
As aleta and I have both explained, those cannot be distributions, because distributions are represented by two-dimensional curves in which the height of the curve at each point indicates the relative probability of the value directly beneath it on the horizontal axis. That information is absent in “3 ± ε” and “3.0 ± ε”. Those are intervals (aka ranges), not distributions.
The interval p ± q is equivalent to “all x such that p – q ≤ x ≤ p + q”. Substituting ε for q does not change that. ε is a variable, and though it is used in a range [heh] of specific ways according to mathematical convention, it remains a variable and it can be substituted for q above.
The most common use of ε in math is to represent an arbitrarily small quantity. Think back to the formal definition of limits we all learned in calculus class, which makes use of intervals defined in terms of ± δ (on the x axis) and ± ε (on the y axis). In the definition, those intervals get “squeezed” as ε (and correspondingly δ) is taken smaller and smaller.
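For readers who want it spelled out, here is the textbook ε–δ statement (nothing in it is specific to our disagreement; it’s just the standard definition):

$$\lim_{x \to a} f(x) = L \;\Longleftrightarrow\; \forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x: \; 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon$$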
So despite Jock’s protests, it’s perfectly fine to express intervals in terms of ± ε .
Regarding Jock’s question
We can observe that 3 and 3.0 are the same number, and like any other variable, ε represents the same value on both sides of the equation, so those two intervals are identical.
Jock:
If there are people who forget that measurements are inexact, the solution is to remind them that measurements are inexact, not to force them to use a broken and unnecessary number system.
The beautiful irony is that it was Jock who made the mistake, and it was the opposite of the mistake he just attributed to me. He treated an exact number as if it were inexact, thus introducing an entirely unnecessary error into what should have been an exact unit conversion.
The solution is to educate them, not to lie to them. The numbers we use to express measurements are exact, as aleta said and you agreed (perhaps inadvertently).
Flint:
Remember, data types are a computer science concept. In the old thread, you repeatedly confused data types with categories of number, and it really threw you off.
In math, an integer is a real number whose fractional part is zero. Like other real numbers, an integer can be the result of a measurement. The measurements “5 inches” and “5.0 inches” are both expressed using the exact real number 5, which is an integer.
We discussed this in the old thread. I’ll dig up the quotes tomorrow, but for now let me remind you that integers are real numbers, so operations on integers that produce integers are part of the real number system. In the past, you’ve mentioned factorials as something that cannot be done in the real number system, but that’s incorrect. 3! = 3.0! = 3 x 2 x 1 = 3.0 x 2.000 x 1 = 6.000… = 6. Every number appearing in that statement is an integer, and the statement is correct. Appending “.0” to an integer such as “3” does not make it a non-integer. It’s still an integer, and you can still apply the factorial operator, since the factorial operator works on all integers. “3.0! = 6” is a true statement.
It all fits perfectly within the real number system.
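A quick check in Python, for what it’s worth. (Python’s own math.factorial is pickier about data types than the math is: float arguments like 3.0 have been deprecated since Python 3.9, which is a restriction on representations, not on numbers.)

```python
import math

# 3 and 3.0 denote the same number, so "the factorial of that number" is well defined.
assert 3 == 3.0
assert math.factorial(3) == 6
assert math.factorial(3) == 6.000   # 6 and 6.000 are also the same number
print("3! =", math.factorial(3))    # 3! = 6
```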
Flint,
The pic below shows what my calculator thinks about whether 12 and 12.0 are the same number.
My calculator isn’t broken. The problem is on your end.
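For anyone without a calculator handy, the same check in Python (these comparisons are exact, since 12 is exactly representable as a float):

```python
print(12 == 12.0)       # True
print(12.0 - 12)        # 0.0
print(12.0 - 12 == 0)   # True
```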
Necessarily?
Questions:
1. Are integers a subset of reals?
2. Are integers on the real number line?
3. On the real number line, are there any numbers between 12 and 12.0?
4. On the real number line, does 12 occupy the same position as 12.0?
5. By convention, integer arithmetic on computers produces integer results, but in real life, operations on integer data produce non-integer results. Birth rate, for example.
6. Can anyone explain why data handling conventions should impact number theory? Do we even have a consistent and coherent set of rules for data presentation that apply to all cases?
Yes to all.
Depending on how you define “birth” (I’d be pushed to decide the exact transition and when to count my daughter’s birth), number of births is a count of events, raw data. How someone decides to manipulate that data usually involves assumptions, simplifications, approximations, rate calculations and predictions that may be expressed as integers or non-integers, depending on context.
Why do they need to? Context covers it.
Do we need to, except in context?
Context seems to be the topic of this discussion.
I followed the Wildberger rabbit hole for a ways. Far enough to discover that a minority of mathematicians are unhappy with the mainstream treatment of infinities.
I don’t see the relevance to this discussion.
My takeaway from this discussion, after far too much time, is that scientists want to treat numbers as data, and mathematicians want to treat them as objects of logic.
I’m sure there are people who spend their entire careers thinking and writing about how to collect, reason about, and present data.
They are, most likely, not the same people who worry about Cantor, Gödel, and that crowd.
Petrushka writes, “I followed the Wildberger rabbit hole for a ways. Far enough to discover that a minority of mathematicians are unhappy with the mainstream treatment of infinities. I don’t see the relevance to this discussion.”
My conclusions also.
It depends. It is a matter of convention. Most of the time, we follow the convention that says they are a subset.
Again, this is a matter of convention — the same convention as in the previous question.
The distinction between 12 and 12.0 is notational. As such, it has nothing to do with the real numbers. The term “real number line” comes from the use of analogies, which are useful for communication but are not part of the mathematics.
I disagree. When doing integer arithmetic, the operation of division gives both a quotient and a remainder. And those are both integers.
When doing real arithmetic, the operation of division gives a quotient, which is a real number (except that division by zero is not allowed).
When computing a birth rate, one is doing real arithmetic, not integer arithmetic.
They don’t — unless you are using a non-standard meaning of “number theory”.
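Regarding the birth-rate point above, a small illustration in Python (the counts are invented, purely for illustration):

```python
# Invented counts, purely for illustration.
births = 3897
population = 312_000

# Integer arithmetic: the quotient is 0, with remainder 3897 -- not a useful rate.
print(births // population, births % population)     # 0 3897

# Real arithmetic gives the birth rate, e.g. per 1000 people:
print(round(births / population * 1000, 2))          # 12.49
```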
I call nonsense. There was nothing discrete about your example. You seem to be repeating keiths’ error of saying that because a number is inexact but can be expressed as exact (that is, without specifying error bars), therefore it IS exact! You yourself said, essentially, that in real life a number that LOOKS exact need only be “close enough for the purpose” and we all agree that close enough is good enough.
It has been an issue from the start. The whole point of the 3 ± ε example was to emphasize that this expression involves only integers, that is, 3 ± ε = {…, 2, 3, 4, …}. Not decimal values very close to 3.0.
NO measurement involves discrete math. Discrete math is not concerned with measurement in this way. ALL measurement necessarily involves continuous math, which we cannot honestly “pretend away” by saying a known inexact value is somehow magically infinitely exact if we’re careful not to point out that it isn’t. In practice, it’s necessary to recognize that there is a range of error, and that the specific notation used necessarily represents an approximation.
In the engineering world, measurements are generally specified with a level of precision. Nobody is going to ask that a part be X inches long, if it is required to be precise to the nearest ten thousandth of an inch. They will say “X ± .0001”. You might say the specification of precision is a concession that the measurement is NOT discrete, it has an error range within the specified necessary precision.
My point is that talking about math is different from talking about the presentation and interpretation of data.
Teachers presumably have different expectations when asking math students to calculate a quotient, different from the expectations if they ask them to calculate a birth rate, or a loan payment.
The math used in most published research papers is pretty basic, but the required knowledge of collection methods, error range, and significance is pretty formidable, and differs from field to field. As an outsider reading papers, I’m often left thinking there are some Masonic secrets involved. Unwritten — but understood within the clan — conventions for the specific field of research.
I don’t see how it cannot apply; whatever the n, we have incomplete information about the nature of the distribution. For a single measurement, you will need the AMVR:
Think linear regression: Y = βX + ε
The use of symmetric intervals looks, to me, like applied math done wrong, except perhaps when n = 2.
Flint, I really don’t understand what you are saying, or perhaps how you are using the word discrete???
If I measure something with a tool that can only measure to the nearest tenth, then there is only a limited, discrete number of possible measurements. If the instrument can measure to the nearest hundredth, then there are ten times as many possible measurements, but still a discrete number, all of a discrete size. Measurement chops up a theoretically continuous set of values into discrete, disjoint subsets, because that is all we can access in the real world.
Error bars have absolutely nothing to do with discrete vs continuous. Any measurement can be expressed as an integer. I thought we passed this a long time ago.
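To illustrate the nearest-tenth point above with a small sketch (the lengths are invented): rounding maps a continuum of possible true lengths onto a discrete set of possible readings.

```python
# Invented "true" lengths; in principle any value in a continuous range is possible.
true_lengths = [4.9583, 4.9617, 5.0001, 5.0490, 5.0511]

# An instrument that reads to the nearest tenth can only ever report
# values from the discrete set {..., 4.9, 5.0, 5.1, ...}.
readings = [round(x, 1) for x in true_lengths]
print(readings)   # [5.0, 5.0, 5.0, 5.0, 5.1]
```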
Nontrivial counts are imprecise (votes, births, census). It is misleading to call them wrong. They are imprecise in exactly the same way as measurements of ball bearings are imprecise. Whether they are approximately correct depends entirely on the application. It’s not a math question. It’s really a question of consequences.
petrushka:
Yes. They are infinitely precise, like all real numbers. They have infinite decimal expansions. Together with the non-integers, they form the totality of the real numbers.
Yes.
No. An easy way to tell is to subtract 12 from 12.0. You get zero, which means they are the same number. There are no numbers between a number x and itself on the number line, so there are no numbers between 12 and 12.0. (Flint, would you like a photo of my calculator display showing that 12.0 – 12 is 0, or is the principle starting to sink in?)
Yes. Attaching “.0” to the end of “12” is tantamount to adding the quantity 0 x 0.1 to it. Adding zero to a number leaves it unchanged. The number x occupies the same point on the number line as the number x.
This will probably throw Flint for a loop, but the phrase “integer arithmetic” is ambiguous. In the context of computers, it refers to arithmetic performed on operands in integer format (as opposed to floating point format, for instance), yielding results in integer format. Outside of a computer context, it means arithmetic performed on integers, yielding integers. On a computer, the second kind of arithmetic can be performed using operands in integer format, but it can also be performed using operands in floating-point format. The number 12 can be stored in memory in integer format, but it can also be stored in memory in floating-point format. In both cases the underlying number is 12. It’s just that the representations are different, so the programmer uses different instructions to manipulate them.
By ‘number theory’, you presumably mean ‘mathematics’. If so, my answer is that data handling conventions do not affect math, but math definitely affects data handling conventions.
There are lots of different ways to present data, but math itself doesn’t care about presentation. A number x that is presented in umpteen different ways is still x in each case. Different representations, same number.
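To make “different representations, same number” concrete, here’s a small Python illustration of the two storage formats (64-bit two’s-complement integer vs IEEE-754 double):

```python
import struct

# The same number, 12, stored two different ways.
as_int64  = struct.pack("<q", 12)     # 64-bit integer format
as_double = struct.pack("<d", 12.0)   # IEEE-754 double format

print(as_int64.hex())    # 0c00000000000000
print(as_double.hex())   # 0000000000002840

# Different bit patterns in memory, but the same underlying number:
assert as_int64 != as_double
assert 12 == 12.0
```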
A separate note on the two types of integer division.
There is quotient/remainder division, which is the kind we typically learn first, and there is normal division, in which the quotient can be a non-integer. They are different algorithms, but both can be performed within the real number system. After all, integers are real numbers.
On a computer, quotient/remainder division is typically done on operands that are in integer format, while the other kind of division is done on operands in floating-point format.
Quotient/remainder division is typically done in integer format, because storing integers in integer format is more efficient than storing them in floating-point format, and the division can be performed using a single instruction (eg IDIV on an X86 machine). However, you can perform quotient/remainder division on integers stored in floating-point format. It’s just that processor instruction sets don’t include dedicated instructions for this purpose, so the division has to be done in software.
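For example, in Python, divmod happily takes operands in floating-point format:

```python
print(divmod(7, 2))       # (3, 1)      -- integer-format operands
print(divmod(7.0, 2.0))   # (3.0, 1.0)  -- the same quotient/remainder division, float-format operands
```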
petrushka:
I’m hoping Jock will supply quotes supporting his claim about Wildberger. Namely, that Wildberger would agree with him that using numerical methods to solve equations is applied math, even when no measurements are involved, the equations do not represent or refer to anything in the real world, and the solutions aren’t being applied to anything in the real world.
To me, that’s a textbook example of pure math, and Jock’s position is bizarre. He’s essentially asserting that there is such a thing as unapplied applied math.
My takeaway, after far too much time, is that a couple of people here mistakenly believe that inexact measurements cannot be expressed using exact numbers, and that it is therefore necessary to use an entirely separate system of inexact numbers (which is an oxymoron) to express them.
The phrase “far too much time” reminded me of this comment I made on January 19th:
That was seven months ago.
Masonic secrets — no. There’s nothing “masonic” about it and they aren’t secret.
But yes, there are unwritten conventions. Human life is chock full of unwritten conventions. We are a social species, and that requires adhering to social conventions, which are mostly unwritten.
I prefer to avoid conjectures about what Wildberger might believe — or, for that matter, about what keiths might believe.
It’s applied math. There’s nothing bizarre about it.
What counts as “applied math” and what counts as “pure math” is itself a matter of unwritten convention. It’s hard to guess what Wildberger would say, since he clearly rejects some of the conventions of the mathematical community.
Neil:
Which is precisely why I’m asking Jock to supply quotes. We infer people’s beliefs from what they say or write. Jock claims that Wildberger would agree with him, and he presumably reached this conclusion by reading Wildberger’s words or by reading what someone else wrote about Wildberger’s views. I would like to see some evidence that Wildberger would in fact agree with Jock on the question regarding numerical methods. Short of ringing Wildberger up and asking him directly, the best solution is to look at Wildberger’s words.
At TSZ, everything you know about what others here believe is based on what they have written (unless you happen to have spoken with them).
Rationale, please.