and always = A, and only a TSZ “nihilist” would deny it, says Barry Arrington.

A=A is infallibly, necessarily true

What does this claim even mean? That something denoted by A is identical to something else also denoted by A? Clearly not.

That if we devise a system of logic in which we declare that A always equals A, then A must always equal A? Well, duh.

That the only possible logic system is one in which A is always equal to A? Well, no – fuzzy logic is a very useful logic system, and A is sometimes only approximately equal to A, or may equal A if it passes some threshold of probability of being A.
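For illustration, the fuzzy side of this can be sketched in a few lines of Python; the tolerance value below is an arbitrary choice for the example, not any standard:

```python
# Minimal sketch of approximate ("fuzzy-style") equality.
# The tolerance is an arbitrary illustrative choice.

def fuzzy_eq(a: float, b: float, tol: float = 0.05) -> bool:
    """Treat a and b as 'equal enough' when they differ by less than tol."""
    return abs(a - b) < tol

print(fuzzy_eq(1.0, 1.0))   # True: classical identity still holds
print(fuzzy_eq(1.0, 1.02))  # True: A is only approximately equal to A
print(fuzzy_eq(1.0, 1.2))   # False: past the threshold
```

Under a scheme like this, “A equals A” becomes threshold-relative rather than absolute, which is the point being made above.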

So what does he even mean? Is his claim even coherent?

I see that as a question about meaning and reference, rather than as a question about identity.

Erik,

To establish A = A, let A be some uncomputable number

Describe the uncomputable number to a hypothetical computer of infinite power; it will then attempt to do a comparison, it won’t halt, and hence it cannot logically establish A = A, even if it is true.

That is the Halting Problem, hence A=A for uncomputable numbers can only be assumed, it cannot be proven.
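A rough sketch of that point: if a number is available only as an endless digit stream, a comparison loop can refute equality at some finite digit, but it can never confirm it, so any real implementation needs an artificial cutoff. (The digit streams and the cutoff below are invented for illustration.)

```python
from itertools import islice

def digits_a():
    while True:          # 0.3333... forever
        yield 3

def digits_b():
    yield 3; yield 3; yield 4   # differs at the third digit
    while True:
        yield 3

def compare(xs, ys, max_digits):
    """False if a differing digit appears within max_digits;
    otherwise None, meaning 'no difference seen yet' -- NOT 'equal'."""
    for x, y in islice(zip(xs, ys), max_digits):
        if x != y:
            return False
    return None  # the loop can never honestly return True

print(compare(digits_a(), digits_b(), 1000))  # False
print(compare(digits_a(), digits_a(), 1000))  # None: still undecided
```

Without the cutoff, comparing the number with itself would loop forever, which is the non-halting behaviour described above.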

Barry asserts this is a proposition:

One can accept it as an axiom. I accept it as an axiom for classical logic. No problem.

But the moment one says it is a proposition, then one is saying it can be proven either true or false. I just showed it cannot be proven either true or false for uncomputable numbers. Russell pointed out that once one starts making propositions out of axioms, one gets into Vicious Circles.

Russell thought he could build mathematics on classical logic. He and Whitehead took 362 pages to get from classical logic to mathematical notions like:

1+1 = 2

But Gödel showed non-trivial math transcended classical logic. There were propositions which could not be decided.

A=A, if taken as an axiom, is true. If taken as a proposition, it is undecidable.

See:

https://en.wikipedia.org/wiki/Halting_problem

What is so “self-evidently” obvious in the finite realm ain’t so self-evident in the infinite realm. One can accept it axiomatically as true.

But this leads to a problem: “what perspective takes priority?”

Property(4 quarters) = Property(100 pennies)

In one perspective they are equal, as in A=A; in another, A notEq A.
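A toy illustration of the two perspectives (the coin records and property names here are made up for the example):

```python
# Whether '4 quarters = 100 pennies' holds depends entirely on
# which property we choose to compare.

quarters = {"count": 4, "cents_each": 25}
pennies = {"count": 100, "cents_each": 1}

def value(coins):
    """Perspective 1: compare by monetary value in cents."""
    return coins["count"] * coins["cents_each"]

def count(coins):
    """Perspective 2: compare by number of physical objects."""
    return coins["count"]

print(value(quarters) == value(pennies))  # True: both are 100 cents
print(count(quarters) == count(pennies))  # False: 4 coins vs 100 coins
```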

Do you think that law of identity is about identity of objects? Law of identity is a law of thought. It’s most directly about meaning and reference.

Like many things in the English-speaking world, the law of identity has a name that tends to mislead and confuse. Luckily I don’t have the kind of confusion that the OP expresses.

Maybe a good way to put it is that law of identity is about identifying concepts, not about identity of objects. To me the law most readily conveys the suggestion to use a term with a single definition throughout a context, which happens to be a very basic standard.

The mathematical logicians showed propositions in a consistent system are either:

1. true

2. false

3. undecidable

A=A is either true, false, or undecidable if we allow A to be non-trivial.

One can axiomatize A=A as always true for an arithmetic system. That is, we force all propositions to be

1. True

2. False

but Gödel showed that forcing a non-trivial arithmetic system to be complete leads to inconsistency, hence one will get (A = not A) for some A.

Russell and Whitehead thought they could eliminate all undecidable propositions — CRASH. So much for the principles of right reason. Ha!

It properly belongs in formal logic. Meaning and reference are not involved there, because the logical inference is based on form rather than reference.

This is one point where continental philosophy differs from analytic. Continental philosophy is not so formalistic. Laws of thought are taken to be laws of thought in broader sense, not of some specialised form of logic.

But if that were what it meant, it would merely be a “good practice” heuristic, not an infallible truth, which is what Barry implies.

Can you say more about what you think Barry might mean when he says:

A=A is infallibly, necessarily true

What kind of entity is the A referred to?

not Nothing = not Nothing

Well, in print it looks reasonable, but we know from a semantic standpoint that there is something unwholesome about this statement, even though from a syntax standpoint we’d assert it as true!

The problem with

is that, from the syntax standpoint of classical logic, it looks right. But when one starts talking about what A means, it’s another story.

Russell and Whitehead tried to avoid this by building mathematics from the ground up on classical logic. That is to say, the notion of “is” or “equals” wasn’t even assumed! That’s why it took 362 pages to say it was possible to assert:

1+1 = 2

See the photo here:

https://en.wikipedia.org/wiki/Principia_Mathematica

Russell and Whitehead attempted to construct mathematics from scratch using the “Principles of Right Reason”. The project crashed after Gödel refuted its aims.

Let A = something

something = something

Do you believe that is infallibly true for all possible senses of “something”? One will say, “I meant well-defined objects”. Ok, so the above needs qualifiers; hence, as it stands, the statement is false, or incomplete at best. QED.

The problem is that symbolically (syntactically) it looks correct, but once we deal with non-trivial objects, it’s not so straightforward, because of the problem of meaning. If by “something” we meant the ASCII characters ‘s’, ‘o’, ‘m’, ‘e’, ‘t’, ‘h’, ‘i’, ‘n’, ‘g’ concatenated together, it works, but if you mean something else, then the statement is up for grabs, because the meaning of

something=something

is up for grabs, even though syntactically it is correct.
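The syntax/meaning split shows up directly in code; the sketch below uses Python’s `eval` purely as a stand-in for “interpreting” a string (safe here only because the strings are fixed literals):

```python
# Pure syntax: string comparison, character by character.
print("something" == "something")  # True: same ASCII characters

# "1+1" and "2" are different strings...
print("1+1" == "2")                # False: syntactically distinct

# ...and only become 'equal' once an interpretation step assigns
# them meanings as arithmetic expressions.
print(eval("1+1") == eval("2"))    # True: both mean the number 2
```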

Mathematicians realized the problem of meaning posed similar problems in mathematics! Russell and Whitehead supposed that if they built mathematics up from pure syntax, without any semantics, they could drive out the spirit of meaning and thus avoid ambiguities. As my Artificial Intelligence professor used to say in class every day about Russell and Whitehead — “CRASH”. 🙂 Meaning cannot be driven out of mathematical symbology. More formal syntax does not exorcise the demons that lurk around the problem of meaning.

So

Therefore:

It has exactly the same form as what is being asserted at UD. Anyone here buy the above?

Things aren’t as straightforward as they seem.

PS

Case in point. Here is a proposition:

“The square root exists in a system that obeys the field axioms.”

It is true if one means reals, not true if one means rationals. The problem of meaning raises its ugly head again.

I wonder if “A=A” is necessarily true because it is a tautology. The problem with tautologies is that they are “inferentially sterile” — they imply all true propositions, and are therefore useless.

(Arrington insists that it is true because it is analytic, which is correct. But he does not seem to appreciate Wittgenstein’s point in the Tractatus, which is that all analytic propositions are tautologous. If Arrington has an argument against Wittgenstein, I’ve not yet seen it.)

A side point about Gödel’s incompleteness theorem: what Gödel proved is that no formal system rich enough to capture arithmetic can be proven complete using that system itself. That’s different from first-order logic, which can be proven complete using first-order logic. There are proofs of the completeness of arithmetic, but only by introducing additional axioms that are themselves “ungrounded”, we might say.

As I understand it, and perhaps I’m mistaken, the fly in the ointment isn’t “semantics”, but whether set theory is the only “syntax” you need in order to do arithmetic.

A “tautology” is a proposition that does not need to be proved because it is logically self-evident. It “proves itself,” so to speak, because if you deny it, you must contradict yourself.

To admit that it is a tautology is anathema, because that means agreeing with Barry Arrington. So most “skeptics” here take the “it’s not a tautology” option, because otherwise they’d have to admit it’s logically self-evident.

One might argue that frogs are frogs is a tautology. But here at the skeptical zone even that obvious truth is subject to challenge.

After all, the frogs on the left might be different from the frogs on the right. Then it would not be the case that frogs are frogs. Seriously.

Mung,

I agree that analytic propositions are “true by meaning alone”, hence tautologies. Just as contradictions are “false by meaning alone”. An analytic proposition is one that is understood to be true by anyone who understands both subject and predicate, if the predicate is “contained” within the subject.

Or so Kant would have us believe.

Nevertheless, tautologies are useless, so what’s the point of insisting upon them?

It might be argued that there are non-tautologous analytic propositions, but no one here or at UD has made the case for such.

I have no problem saying,

and if I felt I needed to, I could say, “assume true for all A that make the statement true in classical logic”. I shouldn’t get further argument.

But the following is extraneous blather that doesn’t add force to any argument, just creates pointless arguments and weakens credibility:

As pointed out, that isn’t true in non-classical logics. If one now offers it as a proposition vs. an axiom, it leads to the Vicious Circle Paradox.

https://en.wikipedia.org/wiki/Vicious_circle_principle

It is evidence of circuitous reasoning, which doesn’t lend credibility to the argument.

If we’re talking about physical objects, it becomes trivially useless when infallible (“it is what it is, and not something else”), or fallible when it maps physical objects to conceptual ones, since mappings like the one below are contrivances of the human mind:

Property(4 quarters) = Property(100 pennies)

So the claim is already on shaky ground.

Consider this extension of the above

not (not B) = not (not B), necessarily, infallibly true

Syntactically correct, but if we are talking about numbers, this is meaningless.

Suppose B = 0

What is not B? 1, 2, 3, 5, …? The set of all objects other than 0? The set of all rationals other than 0? …

It is not a well-formed proposition. If we are restricting A to logical concepts, we’d want well-formed formulas:

https://en.wikipedia.org/wiki/Well-formed_formula

So if A is not a well-formed formula, it’s gobbledygook.

I could invoke the Russell paradox.

Let B = set of all sets

What is not B?
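The shared difficulty — that “not B” has no extension until a universe is fixed — can be sketched with ordinary set differences; the universes below are arbitrary choices for illustration:

```python
# Complement is only defined relative to some ambient universe.
B = {0}

universe_bits = {0, 1}
universe_digits = set(range(10))

print(universe_bits - B)    # {1}
print(universe_digits - B)  # the digits 1 through 9

# With no universe chosen, "everything that is not 0" has no
# well-defined extension -- the gap Russell's paradox exploits.
```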

This is not the sort of language that gives an argument credibility:

It’s an example of thinking one has made an infallible claim that really isn’t defensible.

Pretty much so, even though it’s not “merely” so. It’s a logically necessary presupposition.

He doesn’t imply it. He spells it out loud and clear. And he’s wrong about it. Laws of thought are not infallible truths. You can break them easily (and get into trouble accordingly).

Laws of thought are logically necessary presuppositions for any truth concept to obtain.

I could deconstruct his statements to demonstrate that he doesn’t think at all, not methodically and impartially anyway. But is he worth it?

If by “entity” you imply “physical object”, you are talking past the issue.

It should not be overlooked that the meaning of the equal sign “=” carries huge significance.

1=1

seems trivial enough, but to say

1+1 = 2

is saying that 1+1 is so much like 2 that it is no different from saying

2=2

This raises the issue of what “equals” really means, since clearly the string “1+1” is not the string “2”.

But this leads to the problem of circuitousness. 1 is not defined except by its relation to other numbers, and other numbers are not defined except by their relation to other numbers, including 1.

Thus A=A being infallibly true for numbers relies on our ability to compare circularly defined objects. We still have to hope it works.

We might want to define one property of the equals sign; that is, it must have at least one necessary property, such as:

A=A is true

but that is an axiom, not a proposition!

Do we allow A to have meaning or A to be restricted to syntax only?

something = something

leads to all sorts of problems if we allow meaning to the symbols. If we forbade meaning to the symbols, then strictly speaking

1+1=2

could not possibly be true, because we would then be comparing meanings, not actual syntactic symbols!

I credit Bill Dembski with one thing: he wanted to exorcise the problem of meaning in design recognition. Meaning is a Pandora’s box.

A tautology can be defined in three ways: (a) a proposition that is true because of its logical form, whatever its content, (b) a proposition whose contradictory is self-contradictory, or (c) a proposition whose predicate is necessarily contained in its subject. Though a proposition may be self-evident objectively, in itself, it may not be self-evident subjectively, to a given human mind.

– Peter Kreeft, Socratic Logic

Of those, I would go with (a). And then I would argue that analytic sentences are not tautologies, because they are true due to the meanings of the terms rather than due to the form.

For that matter, I do not see an analytic sentence as true in all possible worlds. Rather, it is true only in those worlds where it is meaningful. If the meanings of the terms are incompatible with a possible world, then the analytic sentence might be gibberish in such a world. The idea that meanings can extend to other logical worlds already seems dubious to me.

Actually, 2 is the symbol defined to be the successor to 1, or 1+1.

The reason why analytic sentences are usually regarded as tautologies is that there is identity of meaning in the subject-term and the predicate-term. That’s true of tautologies, and it’s true of analytic statements as well. Put another way, there’s no new information in the predicate-term that’s not already present in some way in the subject-term.

I see your point here, and it’s an interesting one. However, a statement of identity such as “necessarily, whales are whales” doesn’t mean “there are whales in every possible world”, nor does it mean “the word ‘whale’ is meaningful in every possible world”. It means, “in any world at which there are whales, a whale is a whale”.

In other words, you have to read “X=X” as “if X is possible, then for any X, necessarily X=X”.

Right, and it only seems difficult to appreciate because we’re using Arabic numerals. If we were using Roman numerals, the point would be far more obvious!

Sure. But “Whales are whales” is true by virtue of its form, and meaning doesn’t come into it at all. However, the truth of “Whales are vertebrates” does depend very much on meaning. So my preference would be to say that “Whales are vertebrates” is analytic but not tautological.

Neil Rickert,

On second thoughts, “whales are vertebrates” is perhaps synthetic.

However, “vertebrates are chordates” seems analytic, true by virtue of our classification conventions, which establish meaning. It doesn’t seem tautological but it does seem analytic.

Not so straightforward. The “+” is an arithmetic operator. Strictly speaking, if we were using pure string comparison:

1+1 = 1+1

Contrast this with

1+0 = 1

or

0+0+0+0 = 0

What ends up happening, then, is that some symbols have special properties, like the ZERO in addition and the ONE in multiplication and division.

So we create the axioms of the reals. They work well enough for many things, until we take the two axioms for ZERO and ONE and do something like this:

1 / 0 = ?????

What seems so superficially straightforward quickly becomes not so straightforward. We then have to build huge, complicated workarounds, like limits in standard analysis or infinitesimals in non-standard analysis.
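A quick concrete check of the special roles of ZERO and ONE, and where 1/0 gives out (Python simply refuses, since no number x satisfies 0 · x = 1):

```python
print(1 + 0)  # 1: ZERO is the additive identity
print(1 * 1)  # 1: ONE is the multiplicative identity

try:
    1 / 0     # no real number x satisfies 0 * x == 1
except ZeroDivisionError:
    print("1/0 is undefined")
```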

You and Zachriel are correct inasmuch as we define 1+1 = 2, but then the problem is: if we define 2 as “1+1”, how do we define 1? We define 1 as the multiplicative identity, the number by which all numbers, when multiplied by it, remain themselves,

like

1 x 3 =3

But then 1 is defined by all other numbers! So we have circularity in the definition. It’s the dictionary problem: every word in the dictionary is defined by other words in the dictionary.

If this is the case, how then can we communicate, given the circularity of definition? We eliminate ambiguity by checking models of the dictionary against real-world objects. Real-world objects kind of prevent circularity from causing problems in terms of meaning.

The challenge with conceptual systems is that we may have introduced some fatal flaw in our representation. It might have an inherent contradiction. Gödel showed that for non-trivial systems capable of arithmetic, if they are complete, they will contain a contradiction!

Consider again:

Property(4 quarters) = Property(100 pennies)

Taking the real world to the conceptual world can lead to all sorts of mistakes, since the mapping procedure is often contrived, not to mention that the conceptual model may have self-contradictory representations. That is the problem that bugs mathematicians, no matter how beautiful their formal conceptions are.

Here is a beautiful formal mathematical conception, that as far as I know has no analog in the real world:

http://www.mcescher.com/Biography/lw439f14.jpg

http://www.wikihow.com/images/e/e1/Draw-an-Impossible-Triangle-Step-15-preview.jpg

FWIW, there are lots of conceptually valid mathematical entities that pop out in physicists’ solutions to equations and are routinely tossed. Negative mass is one of them. Some conceptual solutions, like expanding space, I personally think should be tossed, but well, too much money is invested in the Big Bang.

But “whales are vertebrates” is not true by meaning alone; it is also true by virtue of certain empirical facts we have discovered. That’s why we can conceive of non-vertebrate whales, whereas we can’t conceive of square circles. It’s being “true by meaning alone” that makes a statement analytic.

That is, if one accepts the analytic/synthetic distinction at all. Quine’s “Two Dogmas of Empiricism” is a really important contribution, and one of the dogmas is the analytic/synthetic distinction.* “Two Dogmas” was preceded by Morton White’s “The Analytic and the Synthetic: An Untenable Dualism”, and they wrote their papers in close correspondence as part of a three-pronged attack on C. I. Lewis. (The third prong was a paper by Nelson Goodman on probability.)

More recently there’s been a return of metaphysical realism and formal semantics that has made the analytic/synthetic distinction respectable again, but it’s still an open question. In my view, the arguments against making the analytic/synthetic distinction are worth careful consideration.

But don’t tell Arrington! He’ll go into a coma!

*The other dogma is what he there calls reductionism, but which we today might call semantic atomism — the idea that truth-values can be assigned to sentences on an individual, case-by-case basis.

Ok so does that imply

Ok, so

1+1 = 2

But unless you define 1+1 as meaning the same thing as 2, strictly speaking the string “1+1” is not the string “2”. Hence, except in trivial systems where substitution and meaning are disallowed and only string comparisons are permitted, A=A is not infallible, because if we allow meaning and substitution, then for non-trivial systems capable of arithmetic this leads to undecidable propositions. Hence, if we allow A=A to be a proposition, it is not infallible. QED.

I use the word “regularity” as a kind of code to imply there is something real.

I do not need to worry about whether our perceptions are accurate or whether the map corresponds to the territory or whether it is all an illusion.

Regularity is regularity, so to speak. Finding regularity is useful, and finding regular relationships among regularities is even more useful. It doesn’t matter what’s behind the curtain.

Going off the rails a bit, regularity is what distinguishes science and engineering from revelation or authority or spiritualism. Regularity is, by definition, third-party verifiable.

The Peano axioms for natural numbers define 1 as the successor to 0, which is not a successor to any other number. 0 is assumed to exist. 2 is just the symbol for the successor to 1.
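For what it’s worth, that construction can be sketched as a toy in Python, with 0 as a primitive object and every other numeral built by the successor S; this is an illustration of the idea, not the full axiom system:

```python
ZERO = ()          # represent 0 as a primitive object (the empty tuple)

def S(n):
    """Successor: wrap the numeral one level deeper."""
    return (n,)

ONE = S(ZERO)      # 1 is the successor of 0
TWO = S(ONE)       # 2 is just the symbol for the successor of 1

def add(a, b):
    """Addition by recursion: add(a, 0) = a; add(a, S(b)) = S(add(a, b))."""
    return a if b == ZERO else S(add(a, b[0]))

print(add(ONE, ONE) == TWO)  # True: '1+1=2' falls out of the definitions
```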

This seems to be confusing numeral with number.

I sheep

I sheep

II sheep

No arithmetic operator required
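The tally picture can even be “run”: counting is string concatenation, and equality is literal string identity, with no arithmetic operator in sight:

```python
one_sheep = "I"
another_sheep = "I"

flock = one_sheep + another_sheep  # concatenation, not addition

print(flock)          # II
print(flock == "II")  # True: two marks side by side, compared as strings
```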