The basic biochemistry textbook I study from is *Lehninger Principles of Biochemistry*. It’s a well-regarded college textbook. But there’s a minor problem regarding the book’s Granville-Sewell-like description of entropy:

The randomness or disorder of the components of a chemical system is expressed as entropy.

Nelson, David L.; Cox, Michael M., Lehninger Principles of Biochemistry (p. 23). W.H. Freeman. Kindle Edition.

Gag!

And from the textbook written by our very own Larry Moran:

enthalpy (H). A thermodynamic state function that describes the heat content of a system.

entropy (S). A thermodynamic state function that describes the randomness or disorder of a system.

Principles of Biochemistry, 5th Edition, Glossary

Laurence A. Moran, University of Toronto

H. Robert Horton, North Carolina State University

K. Gray Scrimgeour, University of Toronto

Marc D. Perry, University of Toronto

Choke!

Thankfully, Dan Styer almost comes to the rescue with his criticism of ID proponent Granville Sewell:

On 27 December 2005, Professor Granville Sewell’s essay “Evolution’s Thermodynamic Failure” appeared in an opinion magazine titled The American Spectator. The second paragraph of that essay conflates “entropy”, “randomness”, “disorder”, and “uniformity”. The third paragraph claims that “Natural forces, such as corrosion, erosion, fire and explosions, do not create order, they destroy it.” Investigation of this second claim reveals not only that it is false, but also shows the first conflation to be an error.

…

“Disorder” is an analogy for entropy, not a definition for entropy. Analogies are powerful but imperfect. When I say “My love is a red, red rose”, I mean that my love shares some characteristics with a rose: both are beautiful, both are ethereal, both are desirable. But my love does not share all characteristics with a rose: a rose is susceptible to aphid infection, my love is not; a rose should be fed 6-12-6 fertilizer, which would poison my love.

Is Entropy Disorder?

Dan Styer criticizing Granville Sewell

Almost well said. Entropy is often correlated with disorder, but it isn’t the same thing as disorder, just as shoe size is correlated with reading ability (small shoe size suggests we’re dealing with a toddler, therefore a non-reader) without being the same thing as reading ability. Perhaps “analogy” is too generous a way to describe the mistake of saying entropy is disorder. But it’s a step in the right direction.

Styer’s comment leads to this rhetorical challenge by physicist Mike Elzinga to creationists:

Where are the order/disorder and “information” in that? Creationists need to explain where Clausius’s coining of the word entropy means that everything tends to disorder and decay.

Mike Elzinga

2lot trouble

Hear, hear.

But why single out creationists for equating entropy with disorder? How about biochemists like Lehninger (or the authors carrying Lehninger’s torch), or Larry Moran, or the founder of statistical mechanics, Ludwig Boltzmann himself, who suggested entropy is disorder?

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.”

Boltzmann used a very poor choice of words. He (ahem) made a small mistake in a qualitative description, not a formal definition.

Chemistry professor Frank Lambert comments about Boltzmann’s unfortunate legacy of “disordered most probable states”:

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

So, in slight defense of Granville Sewell and numerous other creationists (from Henry Morris to Duane Gish to Kairos Turbo Encabulator Focus) who equate entropy with disorder: when we ask “where did they get those distorted notions of entropy?”, we need look no further than many biochemists and textbook authors from reputable universities, and unfortunately one of the fathers of statistical mechanics, Ludwig Boltzmann himself!

Here is a list of chemistry books that treat entropy correctly:

EntropySite.Oxy.Edu

April 2014

The 36 Science Textbooks That Have Deleted “disorder” From Their Description of the Nature of Entropy

…..

Lehninger’s and Moran’s books aren’t on that list. Their books did, however, make the list of biochem books judged by some Australians as decreasing in fitness:

The initial impetus for this research was the discovery by the authors of a variety of common and consistent errors and misconceptions in pedagogical literature on the topic of thermodynamics in Biochemistry. A systematic survey was undertaken of material on thermodynamics in Biochemistry textbooks commonly used in Australian Universities over the period from the 1920s up to 2010.

..

Lehninger, Nelson and Cox

….

Garrett and Grisham

…

Moran and Scrimgeour

….

Jeremy M. Berg, John L. Tymoczko, Lubert Stryer in 2002

…

A few creationists got it right, like Gange:

Entropy and Disorder from a Creationist Book Endorsed by a Nobel Laureate

Here is chemistry professor Frank Lambert’s informal definition of entropy:

Entropy is simply a quantitative measure of what the second law of thermodynamics describes: the dispersal of energy in a process in our material world. Entropy is not a complicated concept qualitatively. Most certainly, entropy is not disorder nor a measure of chaos — even though it is thus erroneously defined in dictionaries or pre-2002 sources.

This is the correct Ludwig Boltzmann entropy as written by Planck:

S = k log W

where

S = entropy

k = Boltzmann’s constant

W = number of microstates
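As a sanity check, the Boltzmann-Planck formula is a one-liner. Here is a minimal Python sketch (the 36-microstate count is just an illustrative toy, like a pair of dice, not a physical system):

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(W):
    """S = k log W; Planck's 'log' is the natural logarithm."""
    return k_B * math.log(W)

# Toy illustration: a "system" with 36 equiprobable microstates,
# e.g. the ordered outcomes of rolling a red die and a green die.
S = boltzmann_entropy(36)  # roughly 4.95e-23 J/K
```

Note that a single microstate (W = 1) gives S = 0, which is the statistical-mechanical reading of the Third Law.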

Also there is Clausius:

ΔS = Integral (dq/T)

where

ΔS = change in entropy

dq = inexact differential of q (heat)

T = absolute temperature

As Dr. Mike asked rhetorically: “Where are the order/disorder and “information” in that?”

My answer: Nowhere! Most creationists (except guys like me and Robert Gange) don’t realize it’s nowhere.

Funny that keiths can do the calculations for his problems, but not for mine.

Take the “macrostate” Ma = [the sum of a pair of dice]. Do you agree that is a “macrostate” as you [keiths] are using the term?

What is the probability distribution?

What is the entropy?

What it actually is is confused.

Anyhow, I agree with Jock on this issue.

walto,

How so? See if you can make your case.

In my example, we calculate different entropy values because our macrostates are different. Our macrostates are different because we possess different information about the microstate: I know that the sum is greater than ten, while you only know that it is greater than eight.

The knowledge is observer-dependent. Therefore so are the macrostates, the epistemic probability distributions, and the entropy values.

It’s straightforward. I’m surprised that you can’t see it.

No no no.

How could they not be observer-dependent? Try to formulate an actual argument this time.

Mung,

I focused on the simpler cases (where the distributions are uniform) for your sake, Mung. Still you were baffled by them. If you couldn’t do those simple entropy calculations, what hope is there that you’ll be able to handle the more complicated non-uniform cases?

For the sake of brighter readers, here’s how you’d handle the non-uniform cases:

1. Since the distributions are non-uniform, you’ll need to do a Gibbs-style entropy calculation instead of the simpler Boltzmann-style calculation:

S = -Σ p_i log2 p_i

…where p_i is the probability of the i-th microstate. Note that in the special case where there are W possible microstates and the probabilities are equal at 1/W, this reduces to

S = -Σ p_i log2 p_i = -Σ (1/W) log2 (1/W) = log2 W

…as expected.

2. A macrostate such as “the sum is greater than ten” still eliminates microstates, as before. The probabilities of the remaining microstates need to be rescaled so that they sum to one. When you do that, the distribution may still be non-uniform, however.
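Both steps are easy to check numerically. A Python sketch (the helper names are mine, for illustration): the Gibbs sum collapses to log2 W for a uniform distribution, and conditioning on a macrostate means dropping microstates and rescaling the survivors so they sum to one.

```python
import math

def gibbs_entropy_bits(probs):
    """S = -sum(p_i * log2(p_i)), in bits; zero-probability terms drop out."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def condition(dist, keep):
    """Drop microstates failing the predicate, then rescale the survivors
    so their probabilities sum to one."""
    survivors = {s: p for s, p in dist.items() if keep(s)}
    total = sum(survivors.values())
    return {s: p / total for s, p in survivors.items()}

# Step 1 check: for W equiprobable microstates the Gibbs formula
# reduces to log2 W, as derived above.
W = 36
S_uniform = gibbs_entropy_bits([1 / W] * W)  # log2 36, about 5.17 bits

# Step 2 check: a uniform prior over the 36 ordered two-dice outcomes,
# conditioned on "the sum is greater than ten", leaves (5,6), (6,5), (6,6),
# each rescaled from 1/36 up to 1/3.
prior = {(a, b): 1 / 36 for a in range(1, 7) for b in range(1, 7)}
posterior = condition(prior, lambda s: sum(s) > 10)
```

With a uniform prior the conditioned distribution happens to stay uniform; the same `condition` code handles non-uniform priors, which is the case point 2 warns about.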

I gave an argument already. You (naturally) ignored it and repeated your position. Maybe threw in an insult or two also. Because, well, that’s what you do!

OK, one more time, because I’m sweet that way, but absolutely nothing more for you until you go back to the beginning of this thread and answer every question asked of you. (OK, that seems too much; you only have to answer the ones you’ve been asked more than twice.) Instead of repeating things with absolute assurance, flinging insults, and asking questions of others that they’ve already answered, YOU need to answer Sal’s challenges as well as mung’s questions and my own. Or you’re shut off.

It’s absolutely dispositive and simple enough for an eight-year-old.

Temperature is measurable; it was measurable even before the Boltzmann equation, and so was heat. One can measure with thermometers and calorimeters that a 20 gram ice cube is melted by about 6671 joules at 273.15K.

One doesn’t need to plug in numbers blindly; one can measure macrostate variables like the temperature of the ice, and the macrostate temperature changes in the calorimeter that measure the heat change in the ice. 🙂

Then one plugs the numbers into the Clausius integral:

and gets

ΔS = 24.42 J/K

that is hardly blind.
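For readers who want to check the arithmetic, the isothermal Clausius calculation is trivial to script. A Python sketch using the numbers from the example:

```python
# Melting 20 g of ice at the normal melting point: the process is
# isothermal, so the Clausius integral of dQ/T reduces to Q/T.
specific_dH = 333.55  # enthalpy of fusion, J/g
mass = 20.0           # g
T = 273.15            # K

Q = specific_dH * mass  # heat absorbed: about 6671 J
dS = Q / T              # entropy change: about 24.42 J/K
```

No microstate counting appears anywhere in the calculation; everything comes from measured macrostate variables.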

You on the other hand have yet to demonstrate you can actually count the microstate changes from first principles especially going to the liquid state. Pauling gave a good estimate for the solid state of water, but even then it wasn’t exact.

You’re the one who said it’s “counting microstates all the way”, but you have yet to show your actual methods for counting the microstates in the liquid state. I already provided how Pauling made an estimated count in the solid state. In fact, I was the one who provided it for the readers, not you, and yet, even with me giving you a head start, you’re still unable to “count microstates all the way” for a simple example from a college exam. Why is that? I’ll explain for the reader.

Counting microstates exactly is extremely difficult. Even for monatomic gases, which should be the simplest case, we have to idealize the model away from the ugly details of reality to make the math tractable, and in doing so we give up accuracy. That’s why we don’t actually count the microstates all the way; we make estimates. And once one goes to the liquid state, it gets insanely hard.

That’s why you’ve yet to pony up the entropy change from ice to liquid water at 273.15K with a specific enthalpy of 333.55 J/g by “counting microstates all the way”: because (aside from the estimating techniques outlined in the Linus Pauling paper on ice) you can’t actually count the number of microstates exactly, can you!

If double Nobel Prize winner Pauling could only get an estimate of the number of microstates, why should I expect you to provide an exact answer by “counting microstates all the way”? But that’s the hole you dug for yourself when you said:

Now that we have the Boltzmann equation, we can in principle show dQ/T is no longer purely phenomenological: the heat Q is the change of the TOTAL kinetic energies of all the molecules, and T is some sort of AVERAGE kinetic energy per degree of molecular freedom. Thus your claim

Is about as absurd as saying TOTALS and AVERAGES are rarely informative.

TOTALS and AVERAGES are certainly informative if that’s all that your instruments (like thermometers and calorimeters and weight scales or graduated cylinders or rulers, etc.) are able to measure. These instruments are designed to measure macrostates or changes in macrostates (like heat/internal energy). You can’t appeal to non-existent microstate counting machines nor missing information meters.

Everything you said just stressed irrelevancies that derailed focus from the simple example where you had the chance to showcase how you can actually “count microstates all the way.” So show the readers how you actually count them all the way.

I provided information to the readers on how to make estimated counts for monatomic gases, plus Pauling’s paper on ice, but even then the counts are only estimates which are experimentally shown to be inexact. How do we know the entropy experimentally? By the very thing you said is rarely informative:

dS = dQ/T

walto,

Wow. No wonder you were reluctant to present that argument. It’s terrible.

No, there is no requirement that every microstate be actualized.

A macrostate corresponds to an ensemble of microstates, only one of which is actualized in the system at a time. The others are merely epistemically possible.

In my example, your macrostate corresponds to the following ensemble, because you know that the sum is greater than eight:

(3,6)

(4,5), (4,6)

(5,4), (5,5), (5,6)

(6,3), (6,4), (6,5), (6,6)

My macrostate corresponds to the following smaller ensemble, because I know that the sum is greater than ten:

(5,6)

(6,5), (6,6)

The macrostates are different because we possess different information about the exact microstate.
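The two ensembles can be enumerated mechanically. A Python sketch (with equiprobable microstates, each entropy is just log2 of the ensemble size):

```python
import math

# All 36 ordered outcomes of rolling two distinguishable dice.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

# "Sum greater than eight" leaves 10 microstates;
# "sum greater than ten" leaves only 3.
gt8 = [s for s in outcomes if sum(s) > 8]
gt10 = [s for s in outcomes if sum(s) > 10]

S_gt8 = math.log2(len(gt8))    # about 3.32 bits
S_gt10 = math.log2(len(gt10))  # about 1.58 bits
```

The smaller ensemble (more information about the microstate) carries the lower entropy, which is the point of the example.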

Macrostates are observer-dependent. As you said:

Is it not the case that all the microstates in the ensemble are equiprobable? ETA: And indistinguishable.

And is it not the case that there are multiple ensembles, else all this talk of multiple macrostates is nonsense?

Typical keithsian nonsense. Who said every microstate had to be actualized? Go back and actually read this time.

Exactly. And it’s not observer-dependent, so nothing that supervenes on it can be either.

You’re terribly confused about every epistemic thing!

And I was right!

And now for some final humiliation for DNA_Jock after it became apparent what his conflations and equivocations were about. On November 8, 2016, he provided this in response to my melting ice-cube example at 273.15K with a specific ΔH = 333.55 J/g.

For starters, ice doesn’t melt at 250K, and besides, his ΔH was for FREEZING supercooled water, not melting ice. That is an important subtlety, because ice won’t spontaneously melt at 250K even though supercooled water can freeze at 250K. So, already an automatic fail.

It turns out that, in an attempt to equivocate and conflate, and to throw in data points without first referencing his sources, he made further mistakes. After he finally provided his data sources, and thus showed his blatant equivocations, he revealed yet another error (or two, or three…). See the graph he eventually provided below:

For the ice cube example melting at 273.15K with a specific ΔH = 333.55 J/g,

ΔH can be expressed in terms of J/mol instead of J/g. The specific enthalpy is converted to a molar enthalpy by multiplying it by the molar mass of water, 18.02 g/mol

(see: https://en.wikipedia.org/wiki/Properties_of_water)

ΔH = 333.55 J/g = 333.55 J/g x 18.02 g/mol = 6010.571 J/mol

Curiously, the paper from which DNA_Jock got his graph has a slightly different value of 6012 J/mol for the ΔH at 273.15K, most likely due to rounding. The difference is negligible.

http://www.phy.mtu.edu/~cantrell/agu2009-PosterSzedlakV5.pdf

This is embarrassing again for DNA_Jock, because if DNA_Jock took the paper’s value of ΔH = 6012 J/mol and converted it to J/g, it would be:

6012 J/mol x 1/18.02 mol/g = 333.62 J/g

and then used it to solve my example of a 20-gram ice cube at 273.15K, the result would be:

ΔS = 333.62 J/gram x 20 gram / 273.15 K = 24.42 J/K

The same figure I got earlier! The rounding difference in ΔS is negligible.

But now let’s take this statement by DNA_Jock and see the holes in it:

That was a bizarre statement when I read it in reference to my example. It is bizarre because it equivocated the concepts in my example, and, humiliatingly for DNA_Jock, it isn’t even right by his own terms in light of the graph.

Look at the TΔS line on the graph; one can reverse-engineer the specific ΔS at each point. If ΔS were independent of temperature, TΔS would be a straight line, not a slightly curved one. So again, automatic fail for this statement:

For the special case of ice melting at 273.15K, the Gibbs free energy change is zero (explained later). For an isothermal phase change from ice to liquid water,

ΔS = ΔQ/T

therefore

TΔS = ΔQ

for the special case of an isothermal process of normal ice melting (same temperature throughout the process), the Gibbs free energy change is defined as

ΔG = ΔH – TΔS = ΔH – ΔQ

What is ΔG? Well, it turns out one can compute it with a rather clumsy equation derived from the Gibbs-Helmholtz equation:

http://cbc.arizona.edu/~salzmanr/480a/480ants/gibshelm/gibshelm.html

ΔG = 273.15K (6011 J/mol) (1/T – 1/273.15K)

where

ΔG = change in Gibbs free energy

T = temperature of the process

Now, for the special case of ice melting at 273.15K, the Gibbs free energy change is 0 (from the above equation). Thus:

ΔG = ΔH – TΔS = ΔH – ΔQ = 0

Thus

ΔH = ΔQ only at 273.15K

!!!!!

So I could then write, as in my ice example:

ΔS = ΔQ/T = ΔH/T only at 273.15K

It will not hold true at other temperatures. To solve for the entropy change at other temperatures, we note that for an isothermal freezing process (supercooled water to ice)

ΔG = ΔH – TΔS

implies

TΔS = ΔH – ΔG = ΔQ

which implies

ΔS = (ΔH – ΔG)/T = ΔQ/T

To compute ΔS, we take L_effective from the paper, which is this monstrosity:

ΔH = L_effective = L_f(Tm) – Integral [(c_w – c_i) dT]

and, using the above equations, computing ΔQ gives an even bigger monstrosity:

ΔQ = ΔH – ΔG = L_effective – ΔG = L_f(Tm) – Integral [(c_w – c_i) dT] – 273.15K (6011 J/mol) (1/T – 1/273.15K)

GASP!!!!!

But since we have the graphs available, we can alleviate having to punch all the numbers in.

So let’s take an example at 250K for FREEZING (not melting):

ΔH (at 250K) = 5000 J/mol (eyeballed from the graph)

ΔG (at 250K) = 273.15K (6011 J/mol) (1/250K – 1/273.15K) = 556.61 J/mol

TΔS (at 250K) = ΔQ (at 250K) = ΔH – ΔG = [L_f(Tm) – Integral [(c_w – c_i) dT]] – 273.15K (6011 J/mol) (1/T – 1/273.15K)

≈ 5000 J/mol – 557 J/mol = 4443 J/mol

which agrees materially with the graph of TΔS at 250K

a similar calculation at 240K can be conducted:

TΔS (at 240K) = ΔQ (at 240K) ≈ ΔH – ΔG = 4500 J/mol – 830 J/mol = 3670 J/mol

which agrees materially with the graph of TΔS at 240K.

And finally at 273.15K

TΔS (at 273.15K) = ΔQ (at 273.15K) = ΔH – ΔG = 6011 J/mol – 0 J/mol = 6011 J/mol

which agrees materially with the graph.
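These three spot checks can be reproduced from the quoted ΔG approximation. A Python sketch (the 5000 and 4500 J/mol figures are the same eyeballed-from-the-graph values used above, so only rough agreement is expected):

```python
Tm = 273.15     # melting point, K
dH_Tm = 6011.0  # enthalpy of fusion at Tm, J/mol

def dG(T):
    """Constant-dH Gibbs-Helmholtz approximation quoted in the text:
    dG(T) = Tm * dH(Tm) * (1/T - 1/Tm)."""
    return Tm * dH_Tm * (1.0 / T - 1.0 / Tm)

# dG vanishes at the melting point, so TdS = dH = 6011 J/mol there.
TdS_273 = dH_Tm - dG(273.15)

# At 250 K and 240 K, using the eyeballed dH values of 5000 and 4500 J/mol:
TdS_250 = 5000.0 - dG(250.0)  # about 4443 J/mol
TdS_240 = 4500.0 - dG(240.0)  # about 3670 J/mol
```

The curvature of TΔS with temperature falls straight out of the 1/T term in dG, which is the whole point being argued.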

So, DNA_Jock’s problems:

1. He equivocated my melting example with freezing

2. He insinuated I’d use ΔQ = ΔH for computations of entropy at all temperatures when in fact I would use ΔQ = ΔH-ΔG as I have shown above

3. Entropy is temperature-dependent; you can see it visually on the graph of TΔS for yourself. If ΔS were temperature-independent, TΔS would be a straight rather than curved line. And for those wanting some serious agony, one will see that this monstrosity:

ΔS = [ L_f(Tm) – Integral [(c_w – c_i) dT] – 273.15K (6011 J/mol) (1/T – 1/273.15K) ] / T

is not constant for all T. One could demonstrate that by taking the first derivative with respect to T and seeing whether one gets zero. But one need not do that: just look at the graph of TΔS and see that it’s not a straight line. Therefore DNA_Jock is wrong on many levels for saying:

and

Seems you’d get materially the right answer if you actually freaking did the calculations right, as I’ve demonstrated. Don’t use ΔQ = ΔH except at 273.15K; otherwise use ΔQ = ΔH – ΔG. And freaking know the difference between FREEZING and melting, and know that a curved TΔS vs. T line means ΔS is not temperature-independent. SHEESH!

ROFLMAO

From the reference Sal cribbed from:

Oh dear. Your calculation of ΔG above

assumes that ΔH is constant; you are then using it to calculate how ΔH varies with temperature. You don’t see a problem with that approach?

And my point all along was to ask how you (Sal, the Claussian) would go about deriving ΔH at a temperature other than STP. You’ve managed to come up with a fourth way, which has the added benefit of being hopelessly wrong.

Effing brilliant!

Soooo, why do heat capacities vary with temperature, Sal?

keiths:

walto:

The actual microstate isn’t observer-dependent, but the other microstates in the ensemble — those that are merely epistemically possible — are. That’s why your macrostate differs from mine in the example above. You possess different information from me, so your macrostate differs from mine. Macrostates are observer-dependent.

This isn’t difficult, walto.

It’s too difficult for you, obviously. What you’ve written above is a pile of utter nonsense.

Nope, I used:

ΔH = L_effective = L_f(Tm) – Integral [(c_w – c_i) dT]

by using the graph. You fail again, DNA_Jock, right after I showed how I got ΔH in the comments above! Do you just reflexively disregard what I said explicitly, like, um:

Does that look like I used ΔG to calculate ΔH? No. But that doesn’t stop you from misrepresenting what I said. You kind of have a reflex for doing that. What’s the matter, DNA_Jock? You can’t actually tear down the arguments I made, so you deal with arguments I didn’t make? Tsk, tsk.

I provided an approximation for ΔG from a university paper which is obviously good enough since it captures the essential points of the TΔS graph you provided.

ΔQ = ΔH – ΔG

If you don’t like the ΔG approximation from the university essay I used, you can get the exact one, but you haven’t provided the data nor the exact formula for ΔG. And my value for ΔG was obviously accurate enough.

But all this is mostly moot since we have the TΔS as a graph, all of which still shows you messed up and you conflate freezing with melting.

And even with all that said, you still can’t provide a detailed computation with worked out examples of you “counting microstates all the way” for a simple example like an ice cube melting. So neither you nor Keiths nor Mung can “count microstates all the way” nor compute missing information without relying on

dS = dQ/T

which you claim is rarely informative. Oh well, now you’re stuck.

Aw, shucks. Thank you. That’s more than I can say for your performance in this discussion. Have you figured out that FREEZING isn’t the same as melting, and that TΔS would be a straight line if ΔS were independent of temperature? In general, ΔH will not vary linearly with T either, as shown by the graph (contrary to your option A premise). Btw, again:

ΔH = L_effective = L_f(Tm) – Integral [(c_w – c_i) dT]

does not have a ΔG term in it, so you should retract your errant claims.

Cheers.

An example of a macrostate is the volume of a container, or the temperature (average kinetic energy of the particles per degree of molecular freedom; in other words, an average energy of the atoms or molecules, with some adjustment), or the internal energy (the sum of the kinetic energies of the atoms or molecules or whatever), or the number of particles.

So here are 4 major macrostates used in the macro thermodynamics of Clausius:

1. volume

2. temperature

3. internal energy (heat energy)

4. number of particles (atoms, molecules, whatever)

probably a few more, but those are the big ones

Are all those observer dependent in principle?

Of course they aren’t. He has no idea what he’s saying at this point. He’s just being petulant. There is one and only one correct macrostate. The rest don’t exist at all. There are VIEWS of what the macrostate is that are accurate to one degree or another. keiths reifies these for no good reason into separate macrostates and says that they, and the incorrect microstates that constitute them, are real things in the world, but observer-dependent.

It’s just a freshman-level confusion, similar to the reification of sense-data. That somebody thinks that X is F doesn’t create an “observer-dependent state of X being F.” Fortunately, thermodynamics doesn’t need it to (or, frankly, keiths’ useless, Meinongian descriptions at all).

As a measure of the paucity of information, entropy can be observer-dependent without imaginary microstates popping into being like purple unicorns or golden mountains at every turn–just because somebody is wrong about something.

Sal, at least on THIS issue, you and Jock (and I) seem to be in agreement. As Jock has said (roughly), it’s the specification of a macrostate that is observer dependent, not any actual macrostate itself. Those are, as you make clear above, states of the world.

My profuse apologies, Sal. You weren’t trying to calculate ΔH at temperatures other than STP. No, you were ballparking what ΔH was off the graph, using an approximation of Gibbs-Helmholtz (which approximation assumes ΔH is constant), and subtracting one from the other to arrive at a totally bogus value for TΔS at different temperatures.

It’s so deranged that I didn’t wrap my brain around it the first time.

As the authors noted, “TΔS has limited physical significance”.

No need to panic, though — that L_effective curve that you are eyeballing to estimate ΔH is, of course, Kirchhoff’s Law:

∂ΔH/∂T = ΔCp

It’s in its integrated form, which may have confused you.

🙂

So, apparently without realizing it, you have accepted that Kirchhoff’s Law is a good way of determining ΔH at a temperature other than the one specified in the back of your school chemistry book. Excellent! That was what I was trying to get across with the whole “ice at 250 / 252” question.

So that’s progress.

Now we can move on to the thorny question, why do the heat capacities vary with temperature?

Sal,

Those are not macrostates. They are variables that can be part of a macrostate.

For an example of observer-dependent macrostates, see this comment.

For a thermodynamic example, see the Xavier/Yolanda example that you’ve been avoiding for most of the thread. In that example, Yolanda’s macrostate includes information about the isotopic concentrations, while Xavier’s doesn’t. As a result, they calculate different values for the entropy change.

Entropy is a function of the macrostate, and macrostates are observer-dependent. Therefore entropy is observer-dependent.

Keiths–master of the art of repeating what is false.

walto,

In this example, what is the “one correct macrostate”?

What about the Xavier/Yolanda example? What is the “one correct macrostate” there?

This one:

And now that you’ve answered your own question, you can turn to Sal’s, mung’s and mine.

Are you saying that the actual microstate is the “one correct macrostate”?

And what’s the difference between this and the Shannon entropy?

So can you now go back to the dice entropy thread and explain how or why I arrived at the ‘wrong’ answer(s) in that thread?

see here

https://en.wikipedia.org/wiki/Microstate_(statistical_mechanics)

Mung,

Shannon entropy is the superset, and the forms of entropy we’ve been discussing here are subsets. Thermodynamic entropy is one subset. Card deck entropy is another. Dice entropy is yet another.

My answer hasn’t changed since the last four times you asked me that question.

You picked one set of microstates — the 36 equiprobable outcomes of a random roll of two fair dice, one red and one green — and applied the wrong probability distribution to it. You should have used a uniform probability distribution over the 36 microstates, but instead you used a non-uniform distribution over the sum of the dice.
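The difference between the two choices is easy to see numerically. A Python sketch comparing the entropy of the uniform distribution over the 36 microstates with the entropy of the non-uniform distribution over the sums:

```python
import math
from collections import Counter

def entropy_bits(probs):
    """Gibbs/Shannon entropy in bits; zero-probability terms drop out."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Uniform distribution over the 36 ordered (red, green) outcomes:
S_micro = entropy_bits([1 / 36] * 36)  # log2 36, about 5.17 bits

# Non-uniform distribution over the 11 possible sums (2 through 12):
sum_counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
S_sums = entropy_bits([c / 36 for c in sum_counts.values()])  # about 3.27 bits
```

Both numbers are legitimate entropies, but of different random variables; applying the sums distribution to the 36 microstates mixes the two.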

You were cribbing from a source you didn’t understand, so you got the wrong answer.

This is false.

For the probability distribution I gave in the Dice Entropy thread, what does your “Gibbs Entropy” formula give you for the entropy?

I wish to express my appreciation to Walto and colewd for participating in this discussion and engaging Keiths.

I now congratulate Keiths on being temporarily removed from my Ignore list, since he’s now making such obvious conceptual errors; he’s done so for 1,578 comments or so, but now it’s getting blatant. Same for DNA_Jock.

Keiths’ comments, as Walto quoted them, have now gotten too juicy to ignore.

Wiki describes a macrostate:

I made a rhetorical comment:

and Walto responds:

It is worth mentioning that if Yolanda has a perfectly cubic ice cube and Xavier has a more rectangular one, then strictly speaking their ice cubes are in different macrostates. But in classical thermodynamics of entropy changes, those details of the macrostate are not necessary. If the differently shaped ice cubes (hence, strictly, ice cubes in different macrostates) have the same mass under normal conditions, then by convention they are treated as being in the same macrostate even though formally they are not; for practical purposes they are equivalent. That is also how we eliminate observer-dependence of the exact macrostate: we treat systems as equivalent based on a very small number of variables.

Hence one does not need to list every detail of the macrostate to create meaningful statements about entropy or entropy change. That’s why from a practical standpoint, we can define the macrostate with so few variables.

And if those variables are objective and not dependent on observer, like Temperature, Number of Particles, Volume….then from a practical standpoint the macrostate is not observer dependent.

You know what, Keiths, I misstated; I made a mistake. They are variables that define the macrostate. See, Keiths, it’s not so hard to say “I made a mistake”. That’s how we learn and improve our ability to communicate and conceptualize. So there, I admitted a mistake, and I’ll even credit you for correcting me. Thank you very much. 🙂

These variables define the macrostate, and in many cases we may not need an infinite list of variables (like shape of an ice cube, location of the ice cube in the universe, etc.) to sufficiently define the macrostate.

If Yolanda observes that the macrostate has exact cubic dimensions while in the mind of Xavier it is like a sculpture of Hillary Clinton (because he doesn’t observe the shape of the ice cube), but they both have the essential macrostate variables, they’ll compute the same entropy, as long as by convention they don’t take the shape of the ice “cube” into account.

Well. Well. I seem to be the only one not making a fool of myself. 😀

Sal,

You’re conspicuously avoiding the issue of isotopic concentrations. Did you think I wouldn’t notice?

Yolanda takes the isotopic concentrations into account. Xavier doesn’t. Their macrostates are different. They’re observer-dependent, and so is the entropy.

And that, by the way, is one of the reasons that entropy cannot be a measure of energy dispersal, as I explained here.

Why not just acknowledge your mistake and move on?

keiths:

Mung:

No, it’s true. Sadly, you even got the correct answer initially, with some guidance from me.

You then retracted the correct answer and replaced it with an erroneous one. Why? Because you came across a calculation that came up with a different answer. They were solving a different problem, but you didn't realize that.

You were cribbing from a source you didn’t understand.

This is also false.

I got the initial calculation right, and for the same problem I’d still get that result. I changed the problem, which is something I’ve repeatedly told you and you just continue to ignore.

I know full well the difference between the two distributions. If you look at my code in the Dice Entropy thread, I even wrote tests for the cases with equiprobable outcomes, and my code still arrives at the right answer for them.

*sniff* They grow up so fast.

Patrick:

Reminds me of OMagain’s comment.

Mung,

No, you didn’t. You told me that you changed your first answer because it was wrong:

Mung:

keiths:

Mung:

Why is it so frickin’ hard for you, walto, and Sal to admit your mistakes?

Mung,

For the umpteenth time, your mistake was to pick the wrong distribution to go with your microstates, as I explained above:

Here is your mistake, preserved in amber:

You should have used a uniform probability distribution, but you didn’t. You made a mistake because you didn’t understand the source you were cribbing from. Deal with it and move on with your life.

The probability distribution for the sums isn’t a uniform distribution. So no, I did not use a uniform distribution. On average a seven will appear twice as often as a four when tossing a pair of dice. Every craps player knows this.

There are three ways to make a four (1-3, 3-1, 2-2) and there are 6 ways to make a seven.

I admitted my mistake. But you didn’t like that. LoL. You said I was right the first time. But I changed my mind and decided to use the sum of a pair of dice, determine the probability distribution for the sums, and calculate the entropy for that. Why can’t you just live with it?
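For readers following the dice dispute, both quantities being argued over can be sketched in a few lines. This is an illustrative Shannon-entropy calculation (in bits), not anyone's actual code from the Dice Entropy thread:

```python
import math
from collections import Counter

def shannon_entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# All 36 equiprobable microstates: ordered pairs of die faces
microstates = [(a, b) for a in range(1, 7) for b in range(1, 7)]
h_micro = shannon_entropy_bits([1 / 36] * 36)

# Distribution over the 11 sums (2..12): a seven (6/36) is twice as likely as a four (3/36)
sum_counts = Counter(a + b for a, b in microstates)
h_sums = shannon_entropy_bits([c / 36 for c in sum_counts.values()])

print(f"uniform over 36 microstates: {h_micro:.4f} bits")  # log2(36) ~ 5.17 bits
print(f"distribution over 11 sums:   {h_sums:.4f} bits")   # ~ 3.27 bits
```

The two numbers differ because they answer different questions: the first is the entropy of the full microstate, the second the entropy of the coarse-grained sums.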

Mung,

Right, and that was your mistake. You needed a uniform distribution over the 36 microstates, but you used a non-uniform distribution over the 11 sums instead. You were cribbing from a source that you didn’t understand. It confused you and you gave the wrong answer.

No one is surprised. It fits the longstanding pattern.

Now, instead of admitting your mistake, you are trying to pretend that you “changed the problem” and that you were right all along.

That’s just pitiful, Mung.

Because you were on my ignore list, since most of what you write is drivel, and I didn't see you mention it till now.

If there are different isotopes, say ice made with typical hydrogen vs. ice made with deuterium, then 20 grams of regular ice will have more particles than 20 grams of deuterium ice. There is a macrostate difference because we have an objective difference in the number of particles.

Deuterium molecular weight: roughly 2

Hydrogen molecular weight: roughly 1

Oxygen molecular weight: roughly 16

Regular Water (H2O) molecular weight: 16 + 2×1 = 18

Heavy Water (D2O) molecular weight: 16 + 2×2 = 20

Number of particles regular water 20 grams ice: 6.68 x 10^23

Number of particles heavy water 20 grams ice: 6.02 x 10^23

There is an objective difference in the number of particles which in general will result in a different number of microstates.
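The particle-count arithmetic above can be sketched as a rough check using Avogadro's number and the approximate molar masses just listed (the exact figures depend on which molar masses one plugs in):

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_in(mass_g, molar_mass_g_per_mol):
    """Number of molecules in a sample of the given mass."""
    return (mass_g / molar_mass_g_per_mol) * AVOGADRO

n_regular = molecules_in(20.0, 18.0)  # H2O, molar mass ~ 18 g/mol
n_heavy = molecules_in(20.0, 20.0)    # D2O, molar mass ~ 20 g/mol

print(f"20 g regular ice: {n_regular:.3e} molecules")
print(f"20 g heavy ice:   {n_heavy:.3e} molecules")
```

Same mass, objectively different particle counts, hence in general a different number of microstates.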

And here are some macro thermodynamic properties which are obviously different from regular water:

Well, gee, Keiths, in the case of different isotopic concentrations, it's kind of pathetic if Xavier can't figure out that the substance he's dealing with has fewer particles in it for a given weight. Or, conversely, if he has one gallon of regular water in one experiment and a gallon of heavy water in another and doesn't notice the heavy water is 11% heavier, and that he is therefore dealing with a different substance with different thermodynamic properties that affect entropy change, like specific heat! In fact, with thermometers and calorimeters he should notice something is different about heavy water, and thus that the two have different entropies. This will be evident if one uses this definition of entropy:

dS = dQ/T

(which DNA_Jock says is rarely informative :roll:)

That’s not an observer dependent macrostate issue, that’s just a measurement error.

For a 20 gram heavy water ice:

Enthalpy of fusion: 6315 J/mol

Melting temperature: 276.97K

Moles : 20 grams x 1 mol / 20.02 grams = 1.00 mol

ΔS_heavy water = 6315 / 276.97K = 22.80 J/K

which is a lower entropy change than the regular water value computed earlier:

ΔS_regular water = 6672 J / 273.15 K = 24.42 J/K

not to mention the two substances have different melting temperatures. This isn’t about entropy’s dependence on observers, this is about errors in measuring entropy based on the observer’s measurement errors.
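The two fusion-entropy figures follow from the Clausius relation ΔS = ΔH_fus / T at the melting point. A minimal sketch, using the approximate literature values quoted above:

```python
def fusion_entropy(mass_g, molar_mass, dH_fus_J_per_mol, T_melt_K):
    """Clausius entropy of fusion: dS = dQ_rev / T at the melting point."""
    moles = mass_g / molar_mass
    return moles * dH_fus_J_per_mol / T_melt_K

# Approximate values as used in the discussion
dS_heavy = fusion_entropy(20.0, 20.02, 6315.0, 276.97)    # D2O ice
dS_regular = fusion_entropy(20.0, 18.02, 6011.0, 273.15)  # H2O ice

print(f"heavy water:   {dS_heavy:.2f} J/K")   # ~ 22.8 J/K
print(f"regular water: {dS_regular:.2f} J/K") # ~ 24.4 J/K
```

Same 20-gram mass, measurably different entropy changes: exactly the kind of difference a calorimeter would pick up.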

The lab will measure entropy differences. Your hypothetical superficially looks like you may have a point, but in lab practice one should be able to tell differences in entropy if the isotopic concentrations are substantial enough to produce a noticeable change in density and heat capacity, which will give different entropy numbers. It will show up with thermometers and calorimeters, not your non-existent "missing information meters" or DNA_Jock's non-existent microstate counting machines.

Btw, density is a generally accepted macrostate property. You can see why in the case of regular vs. heavy water. Xavier had better realize with his instruments that heavy water, vs. regular water:

is more dense (and hence has fewer particles per unit mass)

has different heat capacities, hence different entropy changes

The entropy difference in heavy vs. regular water will show up with thermometers and calorimeters, so one will get different entropy changes if one measures entropy with

dS = dQ/T

which you seem to disparage as a definition of entropy in favor of “missing information” and which DNA_Jock says is rarely informative.

False. I said ΔH not just at STP (Standard Temperature and Pressure) but at various temperatures based on :

ΔH = L_effective = L_f(Tm) – Integral [(c_w – c_i) dT]

the authors point out in another paper

ΔH = Lf

and

Lf = L_effective

https://pdfs.semanticscholar.org/d6a2/3e2c6fe8ae31721f5d66c9b675ecf702bcb0.pdf?_ga=1.130787279.1985272492.1473671670

so

ΔH = Lf = L_effective = L_f(Tm) – Integral [(c_w – c_i) dT]

which results in

ΔH = L_f(Tm) – Integral [(c_w – c_i) dT]

which is what I said here:

http://theskepticalzone.com/wp/in-slight-defense-of-granville-sewell-a-lehninger-larry-moran-l-boltzmann/comment-page-32/#comment-149261

versus your usual misrepresentation of what I said.

So, no, I wasn't confused; you're just attributing arguments to me which I didn't make. Since the authors were so generous as to provide a graph of L_effective, I used their numbers for

ΔH = Lf = L_effective = L_f(Tm) – Integral [(c_w – c_i) dT]

Which brings up a mistake you won’t acknowledge:

The TΔS graph shows the entropy change isn't temperature-independent, because it is a curved line, not a straight line. So your claim to "assume the entropy change is temperature-independent" is false, and ridiculously obviously so.

Since L_effective = ΔH, and the L_effective line isn't straight either, your claim that "ΔH will vary linearly with T" is also false, and ridiculously obviously so.

So there's no need to try to go pedagogical on me, since you don't even understand the basic difference between straight lines and non-straight lines.

So go back to high school and look at the equation of a straight line:

y = mx + b

A straight line looks like the one below:

Does the equation of L_effective (which is ΔH) look like the form of a straight line with respect to T?

L_effective = ΔH = L_f(Tm) – Integral [(c_w – c_i) dT]

or does the equation of TΔS look like a straight line with respect to T?

TΔS (at 250K) = ΔQ (at 250K) = ΔH – ΔG = [L_f(Tm) – Integral [(c_w – c_i) dT]] – (273.15 K)(6011 J)(1/T – 1/273.15 K)

No, and it shows up on the graph you yourself provided. So your claims are false, and obviously so, based on the graph sitting right under your nose. Hilarious.

For the reader’s benefit, DNA_Jock claims this:

and also

Entropy Change is represented by ΔS.

If "entropy change is temperature-independent" as DNA_Jock claims, then ΔS would be constant, and TΔS would be a straight line, not a curved line.

But unfortunately for DNA_Jock, TΔS is a curved line in the data he provided to argue against me regarding the ΔS (change in entropy) of melting ice at 273.15K. [Not to mention he uses data for FREEZING supercooled water at all sorts of temperatures other than melting normal ice at 273.15K. This was a highly creative derailment and red herring on his part.]

I drew a straight solid green line on the graph DNA_jock provided to highlight the fact the dashed red TΔS curve is not a straight line.

It appears DNA_jock has some elementary-school problems distinguishing straight lines from non-straight lines, so I thought I'd help him out. Of course, if one wanted something a bit more formal, one could take the first derivative of the following equation with respect to T and see whether it comes out constant. But that would be the hard way to demonstrate a point blatantly obvious from the graph: TΔS is not a straight line, therefore ΔS is not constant, therefore "entropy change is temperature-independent," as DNA_Jock asserted, is erroneous.

ΔS = ΔQ/T = (ΔH – ΔG)/T = ([L_f(Tm) – Integral [(c_w – c_i) dT]] – (273.15 K)(6011 J)(1/T – 1/273.15 K)) / T

But that is not needed because a picture is worth a thousand words. Hehehe.
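Actually, the underlying point can also be checked numerically, no graph required. This sketch assumes a constant ΔCp (the integrated form of Kirchhoff's law, ∂ΔH/∂T = ΔCp) with illustrative numbers for water and ice: ΔH(T) then varies linearly with T, while TΔS(T) does not.

```python
import math

# Illustrative constants (approximate per-mole values for water/ice)
T_M = 273.15    # melting point, K
DH_TM = 6011.0  # enthalpy of fusion at T_M, J/mol
DCP = 38.0      # c_p(water) - c_p(ice), J/(mol K), assumed constant here

def dH(T):
    """Kirchhoff's law with constant dCp: dH is linear in T."""
    return DH_TM + DCP * (T - T_M)

def dS(T):
    """dS(T) = dH(T_M)/T_M + dCp * ln(T/T_M): not constant in T."""
    return DH_TM / T_M + DCP * math.log(T / T_M)

temps = [250.0, 260.0, 270.0]
tds = [T * dS(T) for T in temps]

# Over equal 10 K steps, a straight line has zero second difference
lin_check = (dH(temps[2]) - dH(temps[1])) - (dH(temps[1]) - dH(temps[0]))
second_diff = (tds[2] - tds[1]) - (tds[1] - tds[0])
print(f"dH second difference:   {lin_check:.3f} J/mol (zero -> linear)")
print(f"T*dS second difference: {second_diff:.3f} J/mol (nonzero -> curved)")
```

So both statements can be true at once: ΔH linear in T (given constant ΔCp) and TΔS curved, since ΔS itself carries a logarithmic temperature dependence.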

Sal, I really think this is largely a quibble. I’ve got a couple of high school chemistry books around my house. I guess they’re old-fashioned, because they define entropy as “disorder.” But what is disorder? What would be order? The definition that keiths is so keen that you use simply notes that disorder is best understood as uncertainty as to various parameters of our choice, or, in other words, absence of information that would determine values for all the variables in question. Communication/information theory has nicely quantified that sort of uncertainty or absence of information into bits (binary digits).

So what keiths is so interested in fighting about changes nothing about anything. E.g., one still needs to use thermometers and calorimeters to determine thermodynamic conditions, just as always. He simply wants to repeat (a million times if necessary) that entropy is best understood as a matter of what we don't know about those values.

He likes to fight about stupid things. BTW, don’t you think he should do an OP on why (other) people won’t simply admit when they’re wrong. He’s, like, an expert!

ETA: BTW, who are those people? Here, he mentions you, me and mung. Elsewhere, it’s been Alan Fox, Neil and KN.

It’s a diverse group, which may seem mysterious, but I’ve figured it out! It turns out that the culprits are anybody who has ever disagreed with keiths about anything and hasn’t subsequently agreed with him! Weird, huh?

This is false.

I've never claimed I was "right all along." In fact, what I said was that I made a mistake.

I did admit I made a mistake. Are we reading the same thread?

I actually think there’s more support for the order/disorder view than for the ‘energy dispersal’ view. I think keiths just doesn’t understand order.

I decided to order my dice by their sums. I decided that the configurations that matter are the totals.

“Order” is less specific, so it’s less susceptible to counter-examples. I think the move to energy dispersal was an attempt to make the concept more determinate or something, more, you know,….scientific. But it doesn’t quite work, as you and Jock and keiths have pointed out. I don’t want to put words in his mouth, but I take it that Sal’s support for it has mostly been resistance to any sense that the “real thermodynamical science” doesn’t count anymore. But, of course it does and most of the sparring here has been ridiculous from the get-go. (Or at least, Sal’s dispute with keiths and keiths with me. I’m not competent to comment on the disputes between Sal and Jock.)

The residual argument about whether macrostates are observer-dependent is also a completely senseless quibble. Is what one believes or specifies to be the case a microstate or macrostate? If you say “Yes,” then of course they are observer-dependent. If one says rather that each such specification should be seen as a belief/description or set of beliefs or descriptions about the properties of the (objective) situation then they aren’t observer-dependent, only the specifications are. I prefer an ontology that doesn’t multiply microstates and macrostates for no reason. As indicated, I believe that macrostates supervene on microstates and descriptions may be correct (when they get the world right) or incorrect (when they get it wrong). And, in my view, such specifications may also be more or less accurate (when they get more or less of the relevant variables right). I don’t think it makes sense to call microstates or macrostates true or false, accurate or inaccurate. It only is the specifications or descriptions of the world that can have those properties. That is a philosophical, not a scientific, position. The science is absolutely indifferent to the ontological choice here.

So again, it’s a stupid, ridiculous, pointless, nonsensical dispute. You know, the kind keiths loves.

I think we are making progress, Sal.

In your efforts to promote a Clausian view of entropy above Boltzmann, you have repeatedly offered up your high school physics homework.

1) You (Sal) have provided your calculation of the entropy change in 500 coins when they are heated from 298K to 373K, viz:

I asked

You will notice that your calculation assumes that the heat capacity of copper IS constant, hence my question…

2) You (Sal) have on a number of occasions provided your calculation of the entropy of melting of 20g of ice:

Seeing another opportunity to see if you understand any of these equations that you plug numbers into, I wanted to see if you understood the importance of heat capacities, so I responded:

I got a little carried away here, and rounded 251.165K down to 250K. My mistake; but it has zero effect on the concepts we were discussing.

More importantly, I hope readers will observe that I am offering Sal three different ways that he, Sal, the fan of Clausius, might go about doing HIS calculations when the numbers he needs are not listed in the back of his chemistry book; hence the repetition of “you could”.

There followed a lot of huffing and puffing from Sal – my favorites were when he referred to ∂ΔH/∂T as a second derivative(!!), and failed to see that IFF ΔCp were a non-zero constant then “ΔH will vary linearly with T.” Oh!, how could I forget Sal’s claim that if Cp is a constant then ΔCp = zero? There was evidently some confusion around the meaning of the symbol Δ.
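For reference, the copper-coin calculation at issue has the familiar constant-heat-capacity form ΔS = mc ln(T2/T1), which is valid only while c can be treated as constant over the interval. A sketch with hypothetical numbers (500 coins of about 3 g each is an assumption; c for copper is roughly 0.385 J/(g K) near room temperature):

```python
import math

def heating_entropy(mass_g, c_J_per_gK, T1, T2):
    """dS = integral of dQ/T = m * c * ln(T2/T1), assuming constant c over [T1, T2]."""
    return mass_g * c_J_per_gK * math.log(T2 / T1)

# Hypothetical: 500 copper coins of ~3 g each, heated from 298 K to 373 K
dS = heating_entropy(500 * 3.0, 0.385, 298.0, 373.0)
print(f"entropy change: {dS:.1f} J/K")
```

The constancy assumption is exactly the point of contention: over wider temperature ranges c(T) drifts, and the integral must be done with the temperature-dependent heat capacity.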

This is high school science. My goal, once more, was to get Sal to realize that heat capacities vary with temperature: you cannot treat them as universal constants the way he did in his copper calculation.

Particularly spittle-infested was his claim that

That struck me as taking wrong to a whole new level, so I pulled up an example of researchers using Kirchhoff’s Law (∂ΔH/∂T = ΔCp) to address the question of ice freezing at non-standard temperatures (Szedlak). Sal’s reaction, hilariously, is to use Szedlak’s curve

ΔH = Lf = L_effecitve = = L_f(Tm) – Integral [(c_w – c-I) dT]

[sic]

in an attempt to prove wrong a claim I never made. That’s fine, I don’t mind, because Sal has (inadvertently?) admitted that Kirchhoff’s Law (∂ΔH/∂T = ΔCp) applies to the melting of ice.

(For readers without a calculus background, Sal's "L_effecitve" equation is identical to ∂ΔH/∂T = ΔCp, just integrated.)

As I said previously,

Here’s a simple, fun conceptual question for Sal:

Sal has, in response to Dr. Mike, provided a plot that included negative temperatures (reproduced below).

If Dr Braun placed a system at 5 K in contact with a system at minus 10 K, would the spontaneous energy flow be

A) from the system at 5 K to the system at minus 10 K, or

B) from the system at minus 10 K to the system at 5 K,

and how would Clausius describe this flow of energy?

Imho, the most valuable comment in this discussion was by Larry Moran who happens to be a biochemistry professor and textbook author:

The problem with entropy is that unlike other physical properties like length, volume, velocity, mass (weight), acceleration, temperature, etc. — there is nothing in our everyday experience to familiarize it to our intuitions.

In general we don't have protracted discussions about the meaning of "length" or "volume." Though temperature is somewhat esoteric, we intuitively understand the difference between hot and cold.

Entropy is nothing like this, but ΔS (change in entropy) shows up repeatedly when we analyze chemical and mechanical systems, and many times we can't get the final accounting to work without including it in our equations. It starts to feel like a nuisance concept you can't live without, even though it is not well understood. But even if we don't really understand it in a qualitative sense, thankfully we can calculate it, which is probably one of the most incredible ironies in science.

Since “entropy” isn’t intuitive like volume, what is the next best metaphor?

Agreed.

As metaphors go, there is never a "right" answer, only ones that help conceive of things better. Lambert says "energy dispersal." Lord Kelvin himself used the word "dissipation." Dr. Mike says "spreading around of energy" to describe the 2nd law. Keiths wants to define it as "missing information."

The use of metaphors becomes moot if one can calculate entropy. The old saying goes, "you'll understand entropy if you calculate it." From a practical standpoint, that is mostly what many engineers or chemists need, and it will certainly suffice for most college- and high-school-level understanding. That's why I've gone to great lengths to show the superiority of just calculating entropy values.

Most of the fighting and arguing is just part of what goes on on the internet. I enjoy the trash talking, but it is also an excellent opportunity to extend my own thought process, whereby I refine my understanding and clean up the teaching materials that will eventually be published to the ID and creationist community. I caught a few numerical and algebra errors in the course of this discussion (which I try to report as they come to my attention).

I'll humor the back-and-forth for as long as the exchanges improve my understanding and the teaching examples I'm trying to collect. The ice cube example is now one of my favorites, along with the Sackur-Tetrode spreadsheet I put together.

But this whole observer-dependent entropy business does not sit well with me. I didn't dispute the "missing information" definition as inherently wrong, just as superfluous and gratuitous.

I've suggested the macro thermodynamic definition by Clausius (who coined the word "entropy") as the one of the most practical and widest-ranging utility:

dS = dQ/T (reversible)

The other important definition is the micro thermodynamic description of Boltzmann/Gibbs. It is the most comprehensive, and it connects the Clausius macro thermodynamic viewpoint (heat and temperature) to the micro thermodynamic viewpoint (microstates of molecular systems):

S = kB ln W
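As a toy illustration of S = kB ln W (a hypothetical counting exercise, not a real thermodynamic system), consider 500 two-state coins: W = 2^500 microstates, so ln W = 500 ln 2.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(ln_W):
    """S = k_B * ln(W); take ln(W) directly so huge W doesn't overflow."""
    return K_B * ln_W

# Toy system: 500 coins, each heads or tails -> W = 2^500
S = boltzmann_entropy(500 * math.log(2))
print(f"S = {S:.3e} J/K")  # thermodynamically tiny, since k_B is so small
```

The result is minuscule compared with, say, the ~24 J/K for melting a 20-gram ice cube, which is why microstate counting only matters thermodynamically when W is astronomically large.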

But even though the Boltzmann definition is the most comprehensive and arguably the most important in the scheme of things (with its definition of entropy in terms of the microstates of a collection of molecules), from a practical standpoint almost no one goes around counting the number of microstates in a system. Entropy is usually inferred from the macro thermodynamics of Clausius, because it is hard to examine a system one molecule at a time. It's really a pointless discussion whether Clausius or Boltzmann is more important; both are needed in theory and practice.

I take sharp exception, however, to the claim that the Clausius definition "dS = dQ/T reversible" is rarely informative. Clausius is frequently all we have to go on, because the Boltzmann computation is usually mathematically intractable and we certainly don't have microstate-counting machines.

Making thermodynamics a subset of Shannon theory is largely superfluous and gratuitous, imho, as Keiths painfully demonstrated. He can't really compute entropy numbers starting with "missing information"; he has to compute "missing information" by first computing entropy the old-fashioned Clausius way for melting ice cubes (and who knows what else). If that's the case for Keiths, then imho the "missing information" model is largely superfluous and not fit for teaching. The links to the college lectures and exams hopefully conveyed the point that the "missing information" approach is largely ignored by the mainstream in practice.

This whole "observer dependence" stuff strikes me as useless at best and entirely wrong at worst. I wasn't too interested in the minutiae of Keiths's definitions, since he wasn't bringing home the bacon on a simple question: the entropy change of a 20-gram melting ice cube. That rendered that part of the debate moot from a practical standpoint, but imho Keiths got it wrong. Entropy, as far as practice is concerned, is expected to be an objective property of a system, much like length, volume, temperature, etc. The only drawback is that entropy is so esoteric that it has little in the way of everyday metaphors to describe it. I think disorder is an inappropriate metaphor and energy dispersal is the better one, but that discussion is mostly moot if one can calculate entropy from the definitions, which obviously Keiths can't do after 1,600 comments or so.