In Slight Defense of Granville Sewell: A. Lehninger, Larry Moran, L. Boltzmann

The basic biochemistry textbook I study from is Lehninger Principles of Biochemistry. It’s a well-regarded college textbook. But there’s a minor problem regarding the book’s Granville-Sewell-like description of entropy:

The randomness or disorder of the components of a chemical system is expressed as entropy,

Nelson, David L.; Cox, Michael M. Lehninger Principles of Biochemistry (Page 23). W.H. Freeman. Kindle Edition.

Gag!

And from the textbook written by our very own Larry Moran:

enthalpy (H). A thermodynamic state function that describes the heat content of a system.
entropy (S). A thermodynamic state function that describes the randomness or disorder of a system.

Principles of Biochemistry, 5th Edition, Glossary

Laurence A. Moran, University of Toronto
H. Robert Horton, North Carolina State University
K. Gray Scrimgeour, University of Toronto
Marc D. Perry, University of Toronto

Choke!

Thankfully Dan Styer almost comes to the rescue as he criticized ID proponent Granville Sewell:

On 27 December 2005, Professor Granville Sewell’s essay “Evolution’s Thermodynamic Failure” appeared in an opinion magazine titled The American Spectator. The second paragraph of that essay conflates “entropy”, “randomness”, “disorder”, and “uniformity”. The third paragraph claims that “Natural forces, such as corrosion, erosion, fire and explosions, do not create order, they destroy it.” Investigation of this second claim reveals not only that it is false, but also shows the first conflation to be an error.

“Disorder” is an analogy for entropy, not a definition for entropy. Analogies are powerful but imperfect. When I say “My love is a red, red rose”, I mean that my love shares some characteristics with a rose: both are beautiful, both are ethereal, both are desirable. But my love does not share all characteristics with a rose: a rose is susceptible to aphid infection, my love is not; a rose should be fed 6-12-6 fertilizer, which would poison my love.

Is Entropy Disorder?
Dan Styer criticizing Granville Sewell

Almost well said. Entropy is often correlated with disorder, but it isn’t the same thing as disorder. Shoe size is correlated with reading ability (a small shoe size suggests a toddler, and therefore a non-reader), but shoe size is not reading ability. Perhaps “analogy” is too generous a way to describe the mistake of saying entropy is disorder. But it’s a step in the right direction.

Styer’s comment leads to this rhetorical challenge by physicist Mike Elzinga to creationists:

Where are the order/disorder and “information” in that? Creationists need to explain how Clausius’s coining of the word entropy means that everything tends to disorder and decay.

Mike Elzinga
2lot trouble

Hear, hear.

But why single out creationists for equating entropy with disorder? How about biochemists like Lehninger (or the authors carrying Lehninger’s torch), or Larry Moran, or the founder of statistical mechanics, Ludwig Boltzmann himself, who suggested entropy is disorder?

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.”

Ludwig Boltzmann

Boltzmann used a very poor choice of words. He (ahem) made a small mistake in a qualitative description, not a formal definition.

Chemistry professor Frank Lambert comments about Boltzmann’s unfortunate legacy of “disordered most probable states”:

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

Boltzmann

So in slight defense of Granville Sewell and numerous other creationists from Henry Morris to Duane Gish to Kairos Turbo Encabulator Focus for equating entropy with disorder, when we ask “where did they get those distorted notions of entropy?”, we need look no further than many biochemists and textbook authors from reputable universities, and unfortunately one of the fathers of statistical mechanics, Ludwig Boltzmann himself!

Here is a list of chemistry books that treat entropy correctly:
EntropySite.Oxy.Edu

April 2014

The 36 Science Textbooks That Have Deleted “disorder” From Their Description of the Nature of Entropy
…..

Lehninger’s and Moran’s books aren’t on that list. Their books, however, did make the list of biochem books judged by some Australian researchers as decreasing in fitness:

Is the evolution of biochemistry texts decreasing fitness? A case study of pedagogical error in bioenergetics

The initial impetus for this research was the discovery by the authors of a variety of common and consistent errors and misconceptions in pedagogical literature on the topic of thermodynamics in Biochemistry. A systematic survey was undertaken of material on thermodynamics in Biochemistry textbooks commonly used in Australian Universities over the period from the 1920s up to 2010.
..
Lehninger, Nelson and Cox
….
Garrett and Grisham

Moran and Scrimgeour
….
Jeremy M. Berg, John L. Tymoczko, Lubert Stryer in 2002

A few creationists got it right, like Gange:
Entropy and Disorder from a Creationist Book Endorsed by a Nobel Laureate

Here is chemistry professor Frank Lambert’s informal definition of entropy:

Entropy is simply a quantitative measure of what the second law of thermodynamics describes: the dispersal of energy in a process in our material world. Entropy is not a complicated concept qualitatively. Most certainly, entropy is not disorder nor a measure of chaos — even though it is thus erroneously defined in dictionaries or pre-2002 sources.

Entropy

This is the correct Ludwig Boltzmann entropy as written by Planck:

S = k log W

where
S = entropy
k = Boltzmann’s constant
W = number of microstates

Also there is Clausius:

delta-S = Integral (dq_rev / T)

where
delta-S = change in entropy
dq_rev = inexact differential of heat q transferred reversibly
T = absolute temperature
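
For the concretely minded, here is a minimal Python sketch of both formulas (the function names are mine, and the latent-heat figure for melting ice is a standard textbook value):

import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(W):
    # S = k log W: entropy in J/K of a system with W equally
    # probable microstates (log here is the natural log).
    return k_B * math.log(W)

def clausius_delta_S(q, T):
    # delta-S = Integral(dq_rev / T); for heat q transferred reversibly
    # at a constant temperature T, the integral reduces to q/T.
    return q / T

# A system with two equally probable microstates:
print(boltzmann_entropy(2))              # ~9.57e-24 J/K

# Melting one mole of ice at 273.15 K
# (latent heat of fusion ~6,010 J/mol, a standard textbook value):
print(clausius_delta_S(6010.0, 273.15))  # ~22 J/K per mole

Notice that neither formula takes “order”, “disorder”, or “information” as an input.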

As Dr. Mike asked rhetorically: “Where are the order/disorder and “information” in that?”

My answer: Nowhere! Most creationists (except guys like me and Robert Gange) don’t realize it’s nowhere.

1,720 thoughts on “In Slight Defense of Granville Sewell: A. Lehninger, Larry Moran, L. Boltzmann”

  1. There is no more widespread error in chemistry and physics texts than the identification of a thermodynamic entropy increase with a change in the pattern of a group of macro objects. The classic example is that of playing cards. Shuffling a new deck is widely said to result in an increase in entropy in the cards.

    Nonsense.

  2. walto,

    That was really interesting. I’m wondering to what extent the argument in this thread has resulted from the subtle shift in the meaning of the term that Lambert has outlined.

    There’s been no effect, as far as I can see, because Sal, DNA_Jock, and I all agree that thermodynamic entropy is not a measure of macroscopic disorder.

    The dispute has been over which of the alternative interpretations of entropy is correct.

  3. Joe Felsenstein:

    Here is Lambert’s position on the shuffled cards / messy desk issue. (I’m not taking a position.)

    I agree with Lambert, and Lambert’s claims can be extended as a pointed criticism of creationist arguments that use the 2nd law such as Sewell, Duane Gish, Henry Morris and lots of the crowd over at UncommonDescent.

    I provided a criticism sympathetic to Lambert’s criticism that was particularly aimed at Creationists and ID proponents supporting Sewell’s views. I used mostly textbook knowledge that would be expected of undergraduates in Chemistry, Physics and Mechanical Engineering here.

    When I argued with my old friend Granville over these matters, he admitted he could not calculate the entropy change from the dead state to the living state. Such inability to make the calculation shows the weakness in the argument!

    I did calculations that showed a warm living human has more entropy than a lifeless ice cube (and by way of extension a dead frozen rat!). These would be calculations a college science student can do and see for themselves the folly of IDists using entropy arguments for ID.

    Here is my criticism and a comment from that thread that showed a warm living human has more entropy than a lifeless ice cube:

    2LOT and ID entropy calculations (editorial corrections welcome)

    A warm living human has substantially more thermodynamic entropy than a lifeless ice cube. This can be demonstrated by taking the standard molar entropies of water and ice and estimating the entropy of water in a warm living human vs entropy of water in a lifeless ice cube.

    http://en.wikipedia.org/wiki/Water_(data_page)

    Std Molar Entropy liquid water: 69.95 J/mol/K
    Std Molar Entropy ice: 41 J/mol/K

    A human has more liquid water, say 30 liters, than an ice cube (12 milliliters).

    Let S_human be the entropy of a human, and S_ice_cube the entropy of an ice cube.

    Order of magnitude entropy numbers:

    S_human > 30 liters * 55.6 mol/liter * 69.95 J/mol/K = 116,677 J/K

    S_ice_cube ~= 0.012 liters * 55.6 mol/liter * 41 J/mol/K = 27 J/K approximately (ice is a little less dense than liquid water, but this is inconsequential for the question at hand).

    Thus a warm living human has more entropy than a lifeless cube of ice.
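
    Here is the same order-of-magnitude arithmetic as a short Python sketch (the numbers are the ones quoted above; the variable names are just for illustration):

    S_molar_water = 69.95  # standard molar entropy of liquid water, J/mol/K
    S_molar_ice = 41.0     # standard molar entropy of ice, J/mol/K
    MOL_PER_LITER = 55.6   # moles of water per liter

    S_human = 30.0 * MOL_PER_LITER * S_molar_water    # ~30 L of liquid water
    S_ice_cube = 0.012 * MOL_PER_LITER * S_molar_ice  # ~12 mL ice cube

    print(f"S_human    ~ {S_human:,.0f} J/K")     # ~116,677 J/K
    print(f"S_ice_cube ~ {S_ice_cube:,.0f} J/K")  # ~27 J/K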

    So why do creationists worry that increasing entropy in the universe precludes evolution? Given that a warm living human has more entropy than an ice cube, it would seem there are lots of cases where MORE entropy is beneficial.

    Ergo, the 2nd law does not preclude evolution. Other lines of reasoning should be used by ID proponents to criticize evolution, not the 2nd law.

  4. keiths: If you’re talking about thermodynamic entropy, as we have been so far in the thread, then I would say that there is only a slight difference between before and after in terms of entropy — assuming that the cards are mechanically unaffected by the shuffling and that their temperature remains unchanged. With those stipulations, each of the cards individually has the same thermodynamic entropy as before, but the deck as a whole has slightly higher entropy because you know less than you did before about its arrangement. That is, you know less about the microstate of the whole deck than you did before, even though your (lack of) knowledge of the individual cards’ microstates remains the same.

    Doesn’t Lambert dispute the relevance of what we know about the arrangement of the deck to thermodynamic entropy? I thought he said that what we know or don’t know about the microstate of the whole deck makes no difference. And doesn’t he attribute the view that knowledge or ignorance of that information affects thermodynamic entropy to a conflation of Shannon’s use of the term with the thermodynamic meaning?

    Again, that’s how I understood that paper. I could definitely be wrong.

  5. Mung:

    There are in fact different interpretations of entropy. Sal is right about that.

    Of course. The problem is that the interpretations that Sal favors are incorrect. They fail in certain cases.

    Meanwhile, he hasn’t been able to come up with a single scenario in which the “entropy as missing knowledge” interpretation — the one that DNA_Jock and I favor — fails.

  6. Walto:

    That was really interesting. I’m wondering to what extent the argument in this thread has resulted from the subtle shift in the meaning of the term that Lambert has outlined.

    You’re very astute. 🙂

    That is the central point of contention right now. The contention centers around this claim by Keiths:

    Keiths:

    Energy is not dispersed when you take volumes of two gases at the same temperature and allow them to mix, but entropy most certainly increases.

    Keiths equivocated on the meaning of “dispersed”. It is actually tedious to show the equivocation, and my comments were aimed at demonstrating it. DNA_jock bought into that equivocation as well.

    The first part was to show that several academics had a different usage of dispersed than the one Keiths is using. That was blatantly evidenced by the lecture slide I kept posting which said:

    Kinetic energy is dispersed over a larger volume!

    The interpretation is simple. The pink molecules get dispersed, therefore the energy in the pink molecules gets dispersed too. The blue molecules get dispersed, therefore the energy in the blue molecules gets dispersed too. This will hold true even when the pink and blue molecules individually exchange energy, because with a large number of pink and blue molecules things average out, and there is no net change in the total energy of all the pink molecules, and likewise for the blue molecules.

    On average, this is somewhat like me exchanging my one dollar bill for someone else’s one dollar bill, and doing this constantly. I emphasize the word “somewhat”. A more accurate version is the pink molecules may individually do an exchange of money with blue molecules. Some pink molecules may get short changed, and some do the short changing, but on the whole the pink molecules have no net change in their total money. Money is a metaphor for energy in this illustration.

    During expansion of the gas in the diagram below, the energy in the pink molecules is just more spread out than it was before the barriers were removed.

    To me it seems Keiths is obsessed with tracking the position of the dollar bills and coins rather than doing accounting of the net total energy of the pink molecules, which never changes before and after spreading out.

    This diagram shows the effective spreading out of pink molecules and is extensible even in a mixing scenario. It shows the pink molecules (and therefore the energy in the pink molecules) spreading out.

  7. walto: Doesn’t Lambert dispute the relevance of what we know about the arrangement of the deck to thermodynamic entropy?

    Probably. And it’s also likely that Sal takes the same position. For example, perhaps some cards in the deck are colder than other cards in the deck.

    But does how cold one card in the deck is, relative to the temperature of other cards in the deck, affect where in the deck that card is, or how many yes/no questions are required to specify where in the deck that specific card is?

    Most people would say no.

    ETA: Oh, and you’re so astute, walto.

  8. Many words can be used in several different ways in several different contexts. Can entropy be used more than one way?

    Are some here trying to claim that scientists own the definition? I think not. It can be a thermodynamic quality, and also a comment on a messy room.

    I don’t get it, are some doubting that?

  9. Sal, to walto:

    That is the central point of contention right now.

    No, it isn’t.

    Walto is talking about Lambert’s discussion of card shuffling.

  10. stcordova: To me it seems Keiths is obsessed with tracking the position of the dollar bills and coins rather than doing accounting of the net total energy of the pink molecules, which never changes before and after spreading out.

    Oh good. We’re back to coin tossing. Assume we put 50 coins in the freezer for 50 minutes and we put 50 coins in the oven for 50 minutes. Then we toss them and they all turn up heads.

    Calculate the entropy.

  11. Mung: Probably. And it’s also likely that Sal takes the same position. For example, perhaps some cards in the deck are colder than other cards in the deck.

    But does how cold one card in the deck is, relative to the temperature of other cards in the deck, affect where in the deck that card is, or how many yes/no questions are required to specify where in the deck that specific card is?

    Most people would say no.

    ETA: Oh, and you’re so astute, walto.

    I guess I would say no – being all astute and everything.

    But in spite of my high level of astuity, I didn’t realize that there was a question, not only about the meaning of ‘entropy’ but also about the meaning of ‘dispersal’–at least according to Sal. Do you concur with that?

    But I mean, even if there ARE two misunderstandings about those two words, neither of these meaning questions should result in big fights where there is a will to understand each other, no?

  12. walto,

    Doesn’t Lambert dispute the relevance of what we know about the arrangement of the deck to thermodynamic entropy? I thought he said that what we know or don’t know about the microstate of the whole deck makes no difference.

    Yes, and he’s not quite right about that. Close, but not quite right.

    Lambert’s main point, with which I heartily agree, is that thermodynamic entropy is not a measure of macroscopic disorder. That is, it’s not the same as the second kind of entropy I discussed in this comment:

    On the other hand, you can define an entropy in which you ignore the thermodynamic details and instead just concentrate on the “logical” picture. In that case, the microstate of the deck is just the exact sequence in which the cards are ordered at any given time, and the entropy is just the logarithm of the number of microstates that are compatible with what you know about the arrangement of the deck.

    That kind of entropy is not thermodynamic entropy. Lambert’s complaint is that people conflate the two, and he’s right that it’s a serious mistake to do so.

    When you’re dealing with thermodynamic entropy, your concern is with the physical microstate of the system. So instead of merely worrying about the order of the cards, you’re worried about how all of the atoms in all of the cards are arranged, and what they’re doing.

    When you’re looking at that enormously complicated picture, in which there’s far more complexity and detail inside each card than there is in the ordering among cards, then the order of the cards makes little difference to the entropy of the entire system. But it still makes a small difference. Your “missing knowledge” of the system’s microstate is very slightly higher than it would be if the cards had remained unshuffled. Lambert’s mistake is, in effect, to claim that small = zero.

    That’s why it’s almost but not quite correct to say, as Lambert does, that the order of the cards is irrelevant to the thermodynamic entropy of the system.
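
    For a concrete sense of scale, here’s a minimal Python sketch of that “logical” entropy of a deck, in bits (assuming complete ignorance of the ordering; the variable names are just for illustration):

    import math

    # "Logical" entropy of a 52-card deck: log2 of the number of
    # orderings compatible with what the observer knows.
    fully_unknown = math.log2(math.factorial(52))  # ~225.6 bits
    fully_known = math.log2(1)                     # 0 bits

    print(fully_unknown, fully_known)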

  13. phoodoo,

    Many words can be used in several different ways in several different contexts. Can entropy be used more than one way?

    Yes, and in fact I do so here when I talk about two different kinds of entropy of a card deck: thermodynamic entropy and “logical” entropy.

    What causes problems is when people conflate different kinds of entropy, apply the wrong kind of entropy to a problem, or refer to something that isn’t entropy as if it were.

    IDers and creationists are notorious for trying to equate thermodynamic entropy with disorder and then arguing that if order arises, it constitutes a violation of the Second Law of Thermodynamics.

  14. keiths,

    Thanks, but I take it that you’re saying that Lambert is simply wrong when he says we have to divorce thermodynamic entropy (the first kind you talk about) from anybody’s ignorance of this or that. He says that view of the matter is incorrect. Can you say why you think he’s wrong about that (instead of simply asserting that he is)?

    Thanks.

  15. walto,

    Thanks, but I take it that you’re saying that Lambert is simply wrong when he says we have to divorce thermodynamic entropy (the first kind you talk about) from anybody’s ignorance of this or that. He says that view of the matter is incorrect. Can you say why you think he’s wrong about that (instead of simply asserting that he is)?

    I’ve been explaining throughout the thread why he (and Sal) are incorrect.

    The comments most relevant to the ‘missing knowledge’ question are here…

    Sal, quoting Harvey Leff:

    delta-S = 2 N k_B ln 2 for distinguishable gases,

    delta-S = 0 for identical gases.

    Let that sink in, Sal. Do you think the laws of physics “look” to see whether the gases are distinguishable to an observer before deciding whether and how the energy should disperse?

    The idea is ludicrous. The gas behavior, and any consequent energy dispersal (or lack thereof), is the same whether the molecules in Leff’s experiment are “black” or “white”. The only difference is that the observer can look to see where the “black” molecules are versus the “white” molecules, thus distinguishing between microstates that would be indistinguishable if the molecules were all the same “color.”

    Entropy is a measure of missing knowledge, not of energy dispersal.

    …and here:

    Here’s a thought experiment that proves the point.

    Imagine there are two observers, Yolanda and Xavier. Yolanda has the necessary equipment to distinguish between two isotopes of gas X, which we’ll call X0 and X1. Xavier lacks this equipment.

    Someone sets up a gas A/B experiment of the kind we’ve been discussing. The A chamber contains nothing but isotope X0, and the B chamber contains nothing but isotope X1. The partition is removed and the gases mix. How does the entropy change?

    Xavier, who can’t distinguish between isotopes X0 and X1, thinks the entropy doesn’t change. As far as he’s concerned, X was evenly distributed before mixing, and it’s evenly distributed after. He can’t distinguish between the before/after microstate ensembles.

    Yolanda, on the other hand, sees a massive change in entropy. There’s a huge difference in the before and after microstate ensembles: before the partition is removed, all of the X0 molecules are on one side, and all of the X1 molecules are on the other. After mixing, both isotopes are evenly distributed. This huge difference means that Yolanda sees a large entropy increase.

    Xavier sees no entropy increase. Yolanda sees a large entropy increase. Who is right?

    The answer is both.

    Yolanda possesses knowledge about the system that Xavier lacks, so the entropy increase is different for her than it is for him.

    Entropy is a measure of missing knowledge, not of energy dispersal.
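
    A minimal Python sketch of Leff’s result (the one-mole-per-chamber figure and the function name are assumed purely for illustration):

    import math

    k_B = 1.380649e-23  # Boltzmann's constant, J/K
    N = 6.022e23        # molecules per chamber (one mole, assumed)

    def mixing_entropy_increase(N, distinguishable_to_observer):
        # Leff's result quoted above: 2*N*k_B*ln(2) if the observer can
        # tell the two gases apart, zero if they look identical to him.
        return 2 * N * k_B * math.log(2) if distinguishable_to_observer else 0.0

    print(mixing_entropy_increase(N, True))   # Yolanda: ~11.5 J/K
    print(mixing_entropy_increase(N, False))  # Xavier: 0.0 J/K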

  16. walto:

    I didn’t realize that there was a question, not only about the meaning of ‘entropy’ but also about the meaning of ‘dispersal’–at least according to Sal. Do you concur with that?

    No, I think Sal and I agree on the meaning of ‘dispersal’. It’s just that Sal wants to pretend that ‘the dispersal of energy’ means the same thing as ‘the dispersal of energy associated with each species’.

    This comment explains why:

    Sal,

    Let’s review:

    1. You endorsed Lambert’s definition of entropy as a measure of energy dispersal. That was wrong, because the energy distribution does not change in the mixing cases I described.

    2. Realizing your error, though not admitting it, you amended the definition to take account of species:

    It is the dispersal of energy associated with each species individually, not the dispersal of energy on the whole mixed solution.

    Unfortunately, your amended definition is also wrong, because it only applies to mixing cases. Entropy increases are not limited to mixing cases, obviously. You can have an entropy increase without “dispersal of species.”

    Lambert was smart enough to avoid that mistake, which is why you don’t see “species” mentioned in his definition of entropy.

    Also, your definition fails in the isotope-mixing case I described here.

    Your two definitions have failed, but you haven’t come up with a single case in which entropy cannot be seen as a measure of missing knowledge regarding a system’s microstate.

    Apart from the protection of your fragile ego, what reason can you offer for sticking to definitions that are known to be bad rather than accepting one that works?

  17. keiths,

    Why conclude the answer is Both rather than that Yolanda is in a better epistemic position to determine what’s going on than Xavier?

    BOTH is clearly the wrong conclusion to come to if, for example, one of two observers of a scene is colorblind and one isn’t, or one observer of a fly’s eye has a microscope and one doesn’t, etc. The natural conclusion would seem to be that Xavier has less information and is not in as good a position as Yolanda to opine. That, at least, is the conclusion we’d make in most such cases.

  18. Here endeth our survey of statistical mechanics. We have touched upon only a very few elementary applications of what is, in fact, a science of the utmost generality. A completely rigorous construction of the foundations of statistical mechanics presents deep problems of an essentially philosophical nature — problems still not fully resolved, after almost a century of work by a succession of profound scholars, beginning with L. Boltzmann.

    Finally, statistical mechanics offers us the immense intellectual satisfaction of rendering transparent what thermodynamics leaves opaque: that is, statistical mechanics casts a brilliant light of interpretation throughout most of the realm in which classical thermodynamics can afford us little more than phenomenological description.

    – Leonard K. Nash. Elements of Statistical Thermodynamics

  19. Yet, for all its immense power, thermodynamics is a science that fails to reward man’s quest for understanding.

    – Leonard K. Nash

  20. walto,

    Why conclude the answer is Both rather than that Yolanda is in a better epistemic position to determine what’s going on than Xavier?

    She is in a better epistemic position than Xavier, which means she has more information about what’s going on. Therefore, from her vantage point, the entropy of the system is always lower than it is for Xavier.

    If you’re tempted to think that her answer is the “right” answer, however, because she’s in a better epistemic position, don’t. Like Xavier’s, her knowledge of the system’s microstate is incomplete. There are (in principle) observers who are in better and better epistemic positions than both Yolanda and Xavier, all the way up to Damon the Demon, who knows the exact microstate of the system at all times.

    For Damon, the entropy is always zero, because there is only one possible microstate for the system to be in — the one he already knows the system is in — and the logarithm of one is zero. But if you take that to be the “right” answer, because Damon is in the best possible epistemic position, then you have to regard the entropy of everything as being zero, which renders the Second Law useless. (It isn’t violated, because the Second Law allows for entropy to remain constant, but for entropy to remain zero all the time is pretty uninformative and not very helpful.)

    The problem is solved when you accept that there is no single “right” answer and that the entropy is observer-dependent. It depends on how much knowledge each particular observer is missing about the detailed state of the system.
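
    A tiny numeric sketch of the point, using S = k_B ln W, where W is the number of microstates compatible with what a given observer knows (the microstate counts below are invented for illustration):

    import math

    k_B = 1.380649e-23  # Boltzmann's constant, J/K

    def entropy_for_observer(W):
        # S = k_B ln W; W counts the microstates compatible with
        # what this particular observer knows about the system.
        return k_B * math.log(W)

    print(entropy_for_observer(10**26))  # Xavier: knows least, largest W
    print(entropy_for_observer(10**22))  # Yolanda: knows more, smaller W
    print(entropy_for_observer(1))       # Damon: knows the exact microstate -> 0.0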

  21. Sal, DNA_Jock, and I all agree that thermodynamic entropy is not a measure of macroscopic disorder.

    Agree.

    The dispute has been over which of the alternative interpretations of entropy is correct.

    Half agree. At issue is this statement by Keiths and supported by DNA_jock:

    I appreciate Lambert’s efforts to expose the “entropy is disorder” misconception, but the idea he’s offering in its place — “entropy is energy dispersal” — also appears to be a misconception.

    Energy is not dispersed when you take volumes of two gases at the same temperature and allow them to mix, but entropy most certainly increases.

    Keiths said energy is not dispersed in the mixing of two gases.

    Keiths is equivocating on what Lambert means by “energy is dispersed”.

    I was able to figure out what Lambert meant, and it didn’t surprise me to find others arrived at the same interpretation as mine as witnessed by this lecture slide.

    What Keiths means by energy dispersal in the mixing of gases is not what Lambert and others meant by energy dispersal. Keiths was using the wrong accounting procedure to perceive the energy dispersal.

    The correct procedure is to track the energy dispersal of each species before and after mixing, not (as Keiths did) to track the average energies of some randomly chosen molecule in the solution (not caring whether it is a pink or blue molecule, so to speak). If Keiths bothered to look at the pink molecules in isolation, he’d realize the energy of the pink molecules was concentrated in the left chamber before mixing and dispersed throughout both chambers after mixing. The folly of his interpretation was really borne out when I used monoatomic molecules in one chamber and diatomic molecules in the other. In the case of the diatomic molecules, the energy prior to mixing is clearly less dispersed than it is after mixing.

    I haven’t seen Keiths acknowledge his equivocations and errors. Also, DNA_Jock wasn’t astute enough to see the Keiths equivocation; in fact he swallowed the Kool-Aid with a total lack of insight and critical thinking.

    Congratulations to DNA_jock for an epic fail yet again over a trivial matter.

  22. Mung: Even so, there’s nothing wrong with being a retail store clerk.

    Assistant clerk in training.

  23. Sal,

    For everyone’s entertainment, here’s a challenge I’d like you to undertake.

    You claim that entropy is a measure of energy dispersal; I say it’s a measure of an observer’s missing knowledge about the detailed state of a system.

    Read the following thought experiment. If you’re right and I’m wrong, you should be able to explain to us how Xavier and Yolanda can look at the same physical system and get different values for the entropy and the change in entropy:

    Here’s a thought experiment that proves the point.

    Imagine there are two observers, Yolanda and Xavier. Yolanda has the necessary equipment to distinguish between two isotopes of gas X, which we’ll call X0 and X1. Xavier lacks this equipment.

    Someone sets up a gas A/B experiment of the kind we’ve been discussing. The A chamber contains nothing but isotope X0, and the B chamber contains nothing but isotope X1. The partition is removed and the gases mix. How does the entropy change?

    Xavier, who can’t distinguish between isotopes X0 and X1, thinks the entropy doesn’t change. As far as he’s concerned, X was evenly distributed before mixing, and it’s evenly distributed after. He can’t distinguish between the before/after microstate ensembles.

    Yolanda, on the other hand, sees a massive change in entropy. There’s a huge difference in the before and after microstate ensembles: before the partition is removed, all of the X0 molecules are on one side, and all of the X1 molecules are on the other. After mixing, both isotopes are evenly distributed. This huge difference means that Yolanda sees a large entropy increase.

    Xavier sees no entropy increase. Yolanda sees a large entropy increase. Who is right?

    The answer is both.

    Yolanda possesses knowledge about the system that Xavier lacks, so the entropy increase is different for her than it is for him.

    Entropy is a measure of missing knowledge, not of energy dispersal.

    They’re looking at the same system, Sal. If you’re right that entropy is a measure of energy dispersal — a purely physical process — they should get the same answer when they determine the entropy. They don’t. How do you explain that?

    I can explain it easily. Xavier and Yolanda have differing amounts of knowledge about the state of the system, so of course they get different values for the entropy. Entropy is a measure of their missing knowledge, after all.

    My definition works. Yours doesn’t.

  24. Sal:

    I was able to figure out what Lambert meant, and it didn’t surprise me to find others arrived at the same interpretation as mine as witnessed by this lecture slide.

    Sal Logic: If two or more people make the same mistake, it’s no longer a mistake.

    Sal is clinging to that slide like a security blanket. Too bad it’s wrong.

  25. keiths:
    walto,

    She is in a better epistemic position than Xavier, which means she has more information about what’s going on. Therefore, from her vantage point, the entropy of the system is always lower than it is for Xavier.

    If you’re tempted to think that her answer is the “right” answer, however, because she’s in a better epistemic position, don’t. Like Xavier’s, her knowledge of the system’s microstate is incomplete. There are (in principle) observers who are in better and better epistemic positions than both Yolanda and Xavier, all the way up to Damon the Demon, who knows the exact microstate of the system at all times.

    For Damon, the entropy is always zero, because there is only one possible microstate for the system to be in — the one he already knows the system is in — and the logarithm of one is zero. But if you take that to be the “right” answer, because Damon is in the best possible epistemic position, then you have to regard the entropy of everything as being zero, which renders the Second Law useless. (It isn’t violated, because the Second Law allows for entropy to remain constant, but for entropy to remain zero all the time is pretty uninformative and not very helpful.)

    The problem is solved when you accept that there is no single “right” answer and that the entropy is observer-dependent. It depends on how much knowledge each particular observer is missing about the detailed state of the system.

    Just because Yolanda’s epistemic position isn’t perfect doesn’t mean it’s not better than Xavier’s. It’s better than Xavier’s because it’s more accurate–i.e., closer to being right.

    The natural conclusion is, of course, that Xavier and Yolanda aren’t both right–they’re both wrong. But Yolanda is closer to being right. This all should be obvious.

    So the real meat of your post is in the final section, about Damon. In that passage, you say that, given any microstate, someone with complete knowledge will always come to the conclusion that there is no entropy. But that claim just begs the question against Lambert and others who hold that the concept is NOT epistemic. Lambert would simply deny that, given a thermodynamic understanding of entropy, Damon would determine each microstate to have the same entropy. Lambert would say that when one makes such statements one is simply conflating the thermodynamic concept that he is interested in, with Shannon’s quite different concept.

    It’s no response to simply insist that a Laplacian demon would calculate all microstates at zero entropy when your adversary denies that! You are relativizing the notion of entropy the same way some of my freshman students relativize truth when they say that whatever somebody believes is “true for them.” I know truth is not like that, but I’m willing to be convinced regarding entropy. You’ll have to do a better job than just saying it over and over though.

  26. walto:

    The natural conclusion is, of course, that Xavier and Yolanda aren’t both right–they’re both wrong. But Yolanda is closer to being right. This all should be obvious.

    What is the “right” answer? How much entropy before mixing? How much after?

  27. DNA_Jock and Keiths,

    Are you going to concede that the pink molecules (and therefore the energy of the pink molecules) become more dispersed after the valve is opened than before? Or are you guys going to wallow in your self-imposed ignorance?

    Otherwise, explain to the readers how, in the bottom of the diagram, the pink molecules (and therefore their energy) are not more dispersed after the valve is opened than before.

    You guys got caught making a junior high school mistake, and now you’ll try to save face at all costs. Too funny.

    HAHAHA!

  28. keiths:

    What is the “right” answer? How much entropy before mixing? How much after?

    Why should I know this? If I don’t, does that suggest to you there isn’t a right answer? I don’t know the cube root of 897,333,285.001 either.

  29. walto:

    Here, from Physics Forum, is a discussion (basically a spanking) of the claim that entropy is observer-relative:

    Could you link to the specific comment(s) in which you think the “spanking” is taking place, and summarize the key point(s) in your own words?

  30. keiths:

    Could you link to the specific comment(s) in which you think the “spanking” is taking place, and summarize the key point(s) in your own words?

    Why? Are you incapable of reading it yourself?

  31. keiths, do you have any reason for supposing this quantity is, unlike most scientifically measured quantities, observer-relative or don’t you? Do you just like the idea?

    Why is the analogy of Yolanda to somebody with ordinary sight, and Xavier to someone who is colorblind, a bad analogy? Why is entropy different from, e.g., heat? Maybe Yolanda has a thermometer and Xavier doesn’t. If she says some object is 57 degrees F and he says it’s 65, are they BOTH right?

  32. Sal,

    Obviously, the “pink” molecules are more dispersed afterwards than before, and so is the energy. Neither DNA_Jock nor I have argued otherwise.

    Now repeat the experiment, starting with an equal number of “blue” molecules in the other chamber. After mixing, are the pink molecules more dispersed? Yes, of course. Are the blue molecules more dispersed? Yes, of course. Is energy more dispersed? No, because its distribution hasn’t changed.

    Cue your protest: “But…but…the blue energy has dispersed into the pink chamber, and the pink energy has dispersed into the blue chamber!”

    …or some variation of that. Right?

  33. walto,

    If you can’t tell us what the “right” answer is, can you at least tell us how one could go about determining it?

  34. walto:

    Here, from Physics Forum, is a discussion (basically a spanking) of the claim that entropy is observer-relative:

    keiths:

    Could you link to the specific comment(s) in which you think the “spanking” is taking place, and summarize the key point(s) in your own words?

    walto:

    Why? Are you incapable of reading it yourself?

    I read it, but didn’t see any “spanking” taking place. Apparently “spanking”, like entropy, is observer-relative.

    If you link to the comment(s) in question and summarize the key points, I’ll have a better idea of why you think the observer-relative position has been “spanked”, and I can respond.

  35. I guess I’d do this: (from https://www.chem.wisc.edu/deptfiles/genchem/netorial/modules/thermodynamics/entropy/entropy04.htm)

    Measuring Entropy

    One useful way of measuring entropy is by the following equation:

    delta-S = q/T (1)

    where S represents entropy, delta-S represents the change in entropy, q represents heat transfer, and T is the temperature. Using this equation it is possible to measure entropy changes using a calorimeter. The units of entropy are J/K.

    The temperature in this equation must be measured on the absolute, or Kelvin temperature scale. On this scale, zero is the theoretically lowest possible temperature that any substance can reach. At absolute 0 (0 K), all atomic motion ceases and the disorder in a substance is zero.

    The absolute entropy of any substance can be calculated using equation (1) in the following way. Imagine cooling the substance to absolute zero and forming a perfect crystal (no holes, all the atoms in their exact place in the crystal lattice). Since there is no disorder in this state, the entropy can be defined as zero. Now start introducing small amounts of heat and measuring the temperature change. Even though equation (1) only works when the temperature is constant, it is approximately correct when the temperature change is small. Then you can use equation (1) to calculate the entropy changes. Continue this process until you reach the temperature for which you want to know the entropy of a substance (25 ºC is a common temperature for reporting the entropy of a substance).

    The Thermodynamics Table lists the entropies of some substances at 25 °C. Note that there are values listed for elements, unlike delta-Hf° values for elements. The reason is that the entropies listed are absolute, rather than relative to some arbitrary standard like enthalpy. This is because we know that the substance has zero entropy as a perfect crystal at 0 K; there is no comparable zero for enthalpy. The fact that a perfect crystal of a substance at 0 K has zero entropy is sometimes called the Third Law of Thermodynamics.

    Is there something necessarily observer-relative about this measurement?
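
    A rough numerical rendering of that procedure in Python (the heat-capacity function below is made up purely for illustration; a real measurement would use calorimeter data):

    def heat_capacity(T):
        # Hypothetical heat capacity in J/K: Debye-like T^3 growth at low
        # temperature, leveling off at higher temperature.
        return min(1e-4 * T**3, 25.0)

    S = 0.0   # perfect crystal at 0 K is assigned zero entropy (Third Law)
    T = 1.0   # start just above absolute zero
    dT = 0.1
    while T < 298.15:
        S += heat_capacity(T) * dT / T  # equation (1): dS = q/T for each small q
        T += dT

    print(f"Absolute entropy at 298.15 K: {S:.1f} J/K")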

  36. Keiths:

    Cue your protest: “But…but…the blue energy has dispersed into the pink chamber, and the pink energy has dispersed into the blue chamber!”

    You can’t color the energy. There is no color in the famous equation of kinetic energy:

    KE = 1/2 m v^2

    That’s your problem.

    You can speak of the energy residing in the blue molecules however.

    The energy in the blue molecules is more spread out. It doesn’t matter that the energy present in the blue molecules may have flowed in and out with the pink molecules.

    I can take a whole bunch of coins that belong to me and spread them out on the table. It doesn’t matter if the coins were owned by blue aliens before I got a hold of them, they are my coins, and my coins can get spread out. In fact if a blue alien trades his coins for mine on the table, but there is no net loss by either of us, my coins can still be spread out!

    In like manner for the pink molecules after mixing, they have their energy. It doesn’t matter if after mixing some of the energy residing in some of the pink molecules previously resided in some blue molecules, that doesn’t change the fact that at any given instant after mixing the energy in the pink molecules is more dispersed.

    In any case, the way I describe it is the way Lambert, Evans, Townsend, and just about every author cited describe it.

    In fact, the model of mixing entropy as the sum of the expansion entropies is independent of the energy dispersal description; you’ll find it in literature that calls entropy disorder (I provided one such link earlier).

    If you want to equivocate on the intended meanings of phrases by others for your own self-delusional purposes, that’s up to you, but I was able to figure out how Lambert and others were using their terms. You obviously refuse to be charitable in your reading of what they write.

    Suit yourself and be right in your own mind with your own misreadings of the intended meanings of others. You insist on interpreting what Lambert says by Keiths’ interpretation of what Lambert says, not Lambert’s interpretation of what Lambert says.

  37. Sal,

    You can’t color the energy. There is no color in the famous equation of kinetic energy:

    KE = 1/2 m v^2

    That’s your problem.

    No, that’s your problem.

    More on this after I finish my response(s) to walto.

  38. keiths:

    I read it, but didn’t see any “spanking” taking place. Apparently “spanking”, like entropy, is observer-relative.

    If you link to the comment(s) in question and summarize the key points, I’ll have a better idea of why you think the observer-relative position has been “spanked”, and I can respond.

    I have summarized the key points in my own posts here. There’s just a lot more detail there. If you successfully respond to my posts, that should take care of the relevant posts on that site, which just hammer home that claims that entropy measurements are observer-relative are mistaken. I’ve suggested the same thing here, and was interested to see what professionals, other than Lambert, say about this. From what I’ve been able to find out (which is not dispositive, of course), you seem to be pretty much on your own on this issue.

    I still don’t know why you take this position. For all I know, you have good reasons for it and have simply not posted them yet. To date, just begging the question against Lambert.

  39. walto: Is there something necessarily observer-relative about this measurement?

    It’s a shame that Mike Elzinga and OlegT no longer comment here. There is quite a bit buried in old threads. Sal helpfully linked to a comment by Mike Elzinga here.

  40. Hey keiths,

    Keep it up. I’m pretty sure you are wrong, but I find the argument to be interesting.
    I want to see you answer the objections for a while to get a handle on this.

    I want to see how it all goes.

    The presuppositionalist in me might say,
    “well of course thermodynamics is observer dependent; the authoritative observer is God, therefore thermodynamics presupposes God.”

    Regardless, it’s an interesting point of view that I’ve never seen before.

    peace

  41. walto,

    When I first encountered the idea that entropy was observer-relative, it seemed bizarre, counterintuitive, and maybe even a bit spooky. And fishy. Why, if entropy is observer-relative, are there standard, published tables of the entropy of various substances? If my knowledge is relevant, why don’t I have to take it into account when I look up an entropy value in the table? Everyone gets the same answer when they consult the table. Shouldn’t the answers differ if entropy is observer-relative?

    The solution to the conundrum is actually quite simple, upon reflection. In real life, it’s almost always true that different observers have the same (lack of) knowledge of the system’s microstate at any given time. If they’re all in the same state of knowledge relative to the system’s microstate, then they’ll all get the same answer when determining the entropy — even though it’s observer-relative.

    For example, standard molar entropy tables give the entropies for different solids at specified temperatures. The user of the table is assumed to know nothing about the system other than its composition and its temperature. Well, if all observers of a system know the same things about it — its composition and temperature, but nothing else about the microstate — then of course they should get the same “answer” for the entropy.

    So the problem with the method you quoted is that it essentially makes the same assumptions as the tables. The observer measures the heat entering the system, notes the corresponding temperature change, and infers the entropy change from the equation. This is legitimate only because the observer is assumed to have no other knowledge of the system’s microstate. The value of W in Boltzmann’s formula encompasses all potentially accessible microstates at the given temperature. In real life, that’s generally true, and the measurements are practically useful. But in theory the observers can have vastly different information about the microstate, and thus W can be much smaller for them.

  42. Alan Fox: It’s a shame that Mike Elzinga and OlegT no longer comment here. There is quite a bit buried in old threads. Sal helpfully linked to a comment by Mike Elzinga here.

    Thanks, Alan. That was indeed helpful. And I see this issue has come up before! I particularly like this remark by Elizabeth on the following page of comments:

    The problem we are seeing in his argument is that life does not go from disorder to order in the sense of decreasing entropy. He seems to have misunderstood the meaning of the word “entropy”. The 747 has the same amount of entropy as the junkyard, approximately, even though, to a human being, it is much more ordered.

    In other words “low entropy” does not mean “ordered” in the sense of “tidy” or “useful” or “pretty”. It only means “ordered” in the sense of some things being in a higher energy state than others. And so life does not in fact decrease entropy, even though it certainly makes things more ordered in those other senses. It makes things less ordered in the sense of having greater entropy.

    I now see why Mung is agreeing with keiths on this issue.

  43. keiths: If they’re all in the same state of knowledge relative to the system’s microstate, then they’ll all get the same answer when determining the entropy

    Right. Like two people having ordinary eyesight opining on the color of a chestnut.

    keiths: even though it’s observer-relative.

    That’s the question-begging part. WHY SHOULD WE BELIEVE IT IS OBSERVER-RELATIVE?

    keiths: The solution to the conundrum is actually quite simple, upon reflection.

    Why do you think there’s a conundrum in the first place? There’s nothing to solve if we take entropy as an objective quality. Why do you think it isn’t? (Hint: Saying it isn’t, isn’t giving a reason.)

  44. walto,

    The short answer is that distinguishability is key, and distinguishability is observer-relative.

    That’s why I was careful in my thought experiment to specify that Yolanda could tell the difference between the two isotopes while Xavier could not. It’s also why the molecules are “pink” and “blue” in Sal’s examples, even though the pinkness and blueness have no impact whatsoever on the underlying physics.

    Distinguishability is observer-relative, and differences in distinguishability lead to differences in knowledge among observers.
