The basic biochemistry textbook I study from is Lehninger Principles of Biochemistry. It’s a well-regarded college textbook. But there’s a minor problem regarding the book’s Granville-Sewell-like description of entropy:
The randomness or disorder of the components of a chemical system is expressed as entropy,
Nelson, David L.; Cox, Michael M.. Lehninger Principles of Biochemistry (Page 23). W.H. Freeman. Kindle Edition.
Gag!
And from the textbook written by our very own Larry Moran:
enthalpy (H). A thermodynamic state function that describes the heat content of a system.
entropy (S). A thermodynamic state function that describes the randomness or disorder of a system.
Principles of Biochemistry, 5th Edition, Glossary
Laurence A. Moran, University of Toronto
H. Robert Horton, North Carolina State University
K. Gray Scrimgeour, University of Toronto
Marc D. Perry, University of Toronto
Choke!
Thankfully Dan Styer almost comes to the rescue as he criticized ID proponent Granville Sewell:
On 27 December 2005, Professor Granville Sewell’s essay “Evolution’s Thermodynamic Failure” appeared in an opinion magazine titled The American Spectator. The second paragraph of that essay conflates “entropy”, “randomness”, “disorder”, and “uniformity”. The third paragraph claims that “Natural forces, such as corrosion, erosion, fire and explosions, do not create order, they destroy it.” Investigation of this second claim reveals not only that it is false, but also shows the first conflation to be an error.
…
“Disorder” is an analogy for entropy, not a definition for entropy. Analogies are powerful but imperfect. When I say “My love is a red, red rose”, I mean that my love shares some characteristics with a rose: both are beautiful, both are ethereal, both are desirable. But my love does not share all characteristics with a rose: a rose is susceptible to aphid infection, my love is not; a rose should be fed 6-12-6 fertilizer, which would poison my love.
Is Entropy Disorder?
Dan Styer criticizing Granville Sewell
Almost well said. Entropy is often correlated with disorder, but it isn’t the same thing as disorder. Shoe size is correlated with reading ability (i.e., small shoe size suggests we’re dealing with a toddler, therefore non-reader). Perhaps “analogy” is too generous a way to describe the mistake of saying entropy is disorder. But it’s a step in the right direction.
Styer’s comment leads to this rhetorical challenge by physicist Mike Elzinga to creationists:
Where are the order/disorder and “information” in that? Creationists need to explain where Clausius’s coining of the word entropy means that everything tends to disorder and decay.
Mike Elzinga
2lot trouble
Hear, hear.
But why single out creationists for equating entropy with disorder? How about biochemists like Lehninger (or the authors carrying Lehninger’s torch), or Larry Moran, or the founder of statistical mechanics, Ludwig Boltzmann himself, who suggested entropy is disorder?
“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.”
Boltzmann used a very poor choice of words. He (ahem) made a small mistake in a qualitative description, not a formal definition.
Chemistry professor Frank Lambert comments about Boltzmann’s unfortunate legacy of “disordered most probable states”:
That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?
Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”
There is no basis in physical science for interpreting entropy change as involving order and disorder.
So in slight defense of Granville Sewell and numerous other creationists from Henry Morris to Duane Gish to Kairos Turbo Encabulator Focus for equating entropy with disorder, when we ask “where did they get those distorted notions of entropy?”, we need look no further than many biochemists and textbook authors from reputable universities, and unfortunately one of the fathers of statistical mechanics, Ludwig Boltzmann himself!
Here is a list of chemistry books that treat entropy correctly:
EntropySite.Oxy.Edu
April 2014
The 36 Science Textbooks That Have Deleted “disorder” From Their Description of the Nature of Entropy
…..
Lehninger’s and Moran’s books aren’t on that list. Their books, however, did make the list of biochem books judged by some Australian researchers as decreasing in fitness:
The initial impetus for this research was the discovery by the authors of a variety of common and consistent errors and misconceptions in pedagogical literature on the topic of thermodynamics in Biochemistry. A systematic survey was undertaken of material on thermodynamics in Biochemistry textbooks commonly used in Australian Universities over the period from the 1920s up to 2010.
..
Lehninger, Nelson and Cox
….
Garrett and Grisham
…
Moran and Scrimgeour
….
Jeremy M. Berg, John L. Tymoczko, Lubert Stryer in 2002
…
A few creationists got it right, like Gange:
Entropy and Disorder from a Creationist Book Endorsed by a Nobel Laureate
Here is chemistry professor Frank Lambert’s informal definition of entropy:
Entropy is simply a quantitative measure of what the second law of thermodynamics describes: the dispersal of energy in a process in our material world. Entropy is not a complicated concept qualitatively. Most certainly, entropy is not disorder nor a measure of chaos — even though it is thus erroneously defined in dictionaries or pre-2002 sources.
This is the correct Ludwig Boltzmann entropy as written by Planck:
S = k log W
where
S = entropy
k = Boltzmann’s constant
W = number of microstates
Also there is Clausius:
delta-S = Integral (dq_rev / T)
where
delta-S = change in entropy
dq_rev = inexact differential of the heat q exchanged reversibly
T = absolute temperature
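Neither formula mentions disorder. As a quick numeric sketch of both definitions (the microstate count is invented for illustration; the heat-of-fusion figure for ice is a standard textbook value):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K (exact in the 2019 SI)

def boltzmann_entropy(W):
    """S = k log W, Planck's form of Boltzmann's formula.
    'log' here is the natural logarithm."""
    return K_B * math.log(W)

def clausius_delta_s(q_rev, T):
    """delta-S = q_rev / T when heat q_rev is exchanged reversibly
    at a constant absolute temperature T (the integral collapses)."""
    return q_rev / T

# W = 4: e.g. two coins whose faces are unknown (2**2 microstates).
print(boltzmann_entropy(4))              # on the order of 1e-23 J/K

# Melting 1 mol of ice at 273.15 K: q_rev ~ 6007 J (heat of fusion).
print(clausius_delta_s(6007.0, 273.15))  # ~22 J/K
```

Nothing in either calculation asks whether the system looks tidy.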
As Dr. Mike asked rhetorically: “Where are the order/disorder and “information” in that?”
My answer: Nowhere! Most creationists (except guys like me and Robert Gange) don’t realize it’s nowhere.
Nonsense.
walto,
There’s been no effect, as far as I can see, because Sal, DNA_Jock, and I all agree that thermodynamic entropy is not a measure of macroscopic disorder.
The dispute has been over which of the alternative interpretations of entropy is correct.
I agree with Lambert, and Lambert’s claims can be extended as a pointed criticism of creationist arguments that use the 2nd law such as Sewell, Duane Gish, Henry Morris and lots of the crowd over at UncommonDescent.
I provided a criticism sympathetic to Lambert’s criticism that was particularly aimed at Creationists and ID proponents supporting Sewell’s views. I used mostly textbook knowledge that would be expected of undergraduates in Chemistry, Physics and Mechanical Engineering here.
When I argued with my old friend Granville over these matters, he admitted he could not calculate the entropy change from the dead state to the living state. Such inability to make the calculation shows the weakness in the argument!
I did calculations that showed a warm living human has more entropy than a lifeless ice cube (and by way of extension a dead frozen rat!). These would be calculations a college science student can do and see for themselves the folly of IDists using entropy arguments for ID.
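A back-of-the-envelope version of that comparison (not Sal’s actual calculation: this sketch crudely models a 70 kg human as liquid water and uses rounded third-law entropy values for water and ice):

```python
# Rounded third-law (absolute) entropies, converted to per-gram values:
S_LIQUID_WATER = 69.95 / 18.02  # J/(g*K), liquid water near 298 K
S_ICE = 41.3 / 18.02            # J/(g*K), ice at 273 K

def entropy_of(mass_g, s_per_gram):
    # Total absolute entropy, treating the sample as uniform.
    return mass_g * s_per_gram

human = entropy_of(70_000, S_LIQUID_WATER)  # 70 kg person modeled as water
cube = entropy_of(20, S_ICE)                # 20 g ice cube

print(f"human: ~{human / 1000:.0f} kJ/K, ice cube: ~{cube:.0f} J/K")
```

The warm, highly organized human comes out with vastly more entropy than the lifeless ice cube, which is the point of the exercise.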
Here is my criticism and a comment from that thread that showed a warm living human has more entropy than a lifeless ice cube:
Doesn’t Lambert dispute the relevance of what we know about the arrangement of the deck to thermodynamic entropy? I thought he said that what we know or don’t know about the microstate of the whole deck makes no difference. And doesn’t he attribute the view that knowledge or ignorance of that information affects thermodynamic entropy to a conflation of Shannon’s use of the term with the thermodynamic meaning?
Again, that’s how I understood that paper. I could definitely be wrong.
That’s a low bar to get under!
Mung:
Of course. The problem is that the interpretations that Sal favors are incorrect. They fail in certain cases.
Meanwhile, he hasn’t been able to come up with a single scenario in which the “entropy as missing knowledge” interpretation — the one that DNA_Jock and I favor — fails.
Even so, there’s nothing wrong with being a retail store clerk.
🙂
You’re very astute. 🙂
That is the central point of contention right now. The contention centers around this claim by Keiths:
Keiths equivocated the meaning of “dispersed”. It is actually tedious to show the equivocation, and my comments were aimed at demonstrating his equivocation. DNA_jock bought into that equivocation as well.
The first part was to show that several academics had a different usage of dispersed than the one Keiths is using. That was blatantly evidenced by the lecture slide I kept posting which said:
The interpretation is simple. The pink molecules get dispersed, therefore the energy in the pink molecules gets dispersed too. The blue molecules get dispersed, therefore the energy in the blue molecules gets dispersed too. This will hold true even when the pink and blue molecules individually exchange energy, because with a large number of pink and blue molecules things average out, and there is no net change in the total energy of all the pink molecules, and likewise for the blue molecules.
On average, this is somewhat like me exchanging my one dollar bill for someone else’s one dollar bill, and doing this constantly. I emphasize the word “somewhat”. A more accurate version is the pink molecules may individually do an exchange of money with blue molecules. Some pink molecules may get short changed, and some do the short changing, but on the whole the pink molecules have no net change in their total money. Money is a metaphor for energy in this illustration.
During expansion of the gas in the diagram below, the energy in the pink molecules is just more spread out than it was before the barriers were removed.
To me it seems Keiths is obsessed with tracking the position of the dollar bills and coins rather than doing accounting of the net total energy of the pink molecules, which never changes before and after spreading out.
This diagram shows the effective spreading out of pink molecules and is extensible even in a mixing scenario. It shows the pink molecules (and therefore the energy in the pink molecules) spreading out.
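A toy illustration of this reading of “dispersal” (all numbers invented; each molecule’s position stands in for where its kinetic energy resides): confine the pink molecules to the left half of a 1-D box, remove the barrier, and let them random-walk.

```python
import random

random.seed(42)
BOX, N_PINK, STEPS = 100, 200, 10_000  # arbitrary toy parameters

# Pink molecules start confined to the left half of a 1-D box of BOX cells.
pink = [random.randint(0, BOX // 2 - 1) for _ in range(N_PINK)]

def fraction_right(positions):
    return sum(x >= BOX // 2 for x in positions) / len(positions)

before = fraction_right(pink)  # 0.0: all pink energy sits in the left chamber

# "Remove the barrier": each molecule random-walks with reflecting walls.
for i in range(N_PINK):
    x = pink[i]
    for _ in range(STEPS):
        x = max(0, min(BOX - 1, x + random.choice((-1, 1))))
    pink[i] = x

after = fraction_right(pink)   # ~0.5: the pink molecules have spread out
print(before, after)
```

After the walk, roughly half the pink molecules (and hence roughly half of whatever energy they carry) are found in the formerly empty chamber.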
Probably. And it’s also likely that Sal takes the same position. For example, perhaps some cards in the deck are colder than other cards in the deck.
But does how cold one card in the deck is, relative to the temperature of other cards in the deck, affect where in the deck that card is, or how many yes/no questions are required to specify where in the deck that specific card is?
Most people would say no.
ETA: Oh, and you’re so astute, walto.
Many words can be used in several different ways in several different contexts. Can entropy be used more than one way?
Are some here trying to claim that scientists own the definition? I think not. It can be a thermodynamic quantity, and also a comment on a messy room.
I don’t get it, are some doubting that?
Sal, to walto:
No, it isn’t.
Walto is talking about Lambert’s discussion of card shuffling.
Oh good. We’re back to coin tossing. Assume we put 50 coins in the freezer for 50 minutes and we put 50 coins in the oven for 50 minutes. Then we toss them and they all turn up heads.
Calculate the entropy.
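For what it’s worth, Mung’s challenge mixes two ledgers that can be kept separately: the “logical” entropy of the coin configuration and the thermodynamic entropy change from heating the coins. A sketch (the 5 g coin mass is an assumption; 0.385 J/(g·K) is copper’s specific heat):

```python
import math

def config_entropy_bits(num_coins, outcome_known):
    """'Logical' entropy: log2 of the number of head/tail arrangements
    consistent with what the observer knows."""
    W = 1 if outcome_known else 2 ** num_coins
    return math.log2(W)

def heating_entropy(mass_g, c_j_per_g_k, T1, T2):
    """Thermodynamic entropy change for warming at constant specific heat:
    delta-S = m * c * ln(T2 / T1)."""
    return mass_g * c_j_per_g_k * math.log(T2 / T1)

print(config_entropy_bits(50, outcome_known=False))  # 50.0 bits before looking
print(config_entropy_bits(50, outcome_known=True))   # 0.0 bits: all heads, seen

# 50 coins of ~5 g each, copper-like specific heat, warmed 293 K -> 450 K.
print(heating_entropy(50 * 5.0, 0.385, 293.15, 450.0))  # tens of J/K
```

The heads/tails pattern and the oven/freezer history answer two different questions, which is presumably why the challenge feels unanswerable as posed.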
I guess I would say no –being all astute and everything.
But in spite of my high level of astuity, I didn’t realize that there was a question, not only about the meaning of ‘entropy’ but also about the meaning of ‘dispersal’–at least according to Sal. Do you concur with that?
But I mean, even if there ARE two misunderstandings about those two words, neither of these meaning questions should result in big fights where there is a will to understand each other, no?
walto,
Yes, and he’s not quite right about that. Close, but not quite right.
Lambert’s main point, with which I heartily agree, is that thermodynamic entropy is not a measure of macroscopic disorder. That is, it’s not the same as the second kind of entropy I discussed in this comment:
That kind of entropy is not thermodynamic entropy. Lambert’s complaint is that people conflate the two, and he’s right that it’s a serious mistake to do so.
When you’re dealing with thermodynamic entropy, your concern is with the physical microstate of the system. So instead of merely worrying about the order of the cards, you’re worried about how all of the atoms in all of the cards are arranged, and what they’re doing.
When you’re looking at that enormously complicated picture, in which there’s far more complexity and detail inside each card than there is in the ordering among cards, then the order of the cards makes little difference to the entropy of the entire system. But it still makes a small difference. Your “missing knowledge” of the system’s microstate is very slightly higher than it would be if the cards had remained unshuffled. Lambert’s mistake is, in effect, to claim that small = zero.
That’s why it’s almost but not quite correct to say, as Lambert does, that the order of the cards is irrelevant to the thermodynamic entropy of the system.
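The “small but not zero” claim can be checked with arithmetic. The sketch below compares k ln(52!) against an assumed order-of-magnitude thermodynamic entropy for the deck’s matter (treating ~100 g of paper-like solid as having roughly 1 J/(g·K) of absolute entropy is an assumption, not a measured value):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

# Missing-knowledge entropy contributed by a shuffled ordering of 52 cards:
s_ordering = K_B * math.log(math.factorial(52))  # ~2e-21 J/K

# Thermodynamic entropy of the deck's matter, crudely: ~100 g of a
# paper-like solid at roughly 1 J/(g*K) -- an assumed order of magnitude.
s_thermo = 100 * 1.0  # ~100 J/K

print(s_ordering)             # tiny but nonzero
print(s_ordering / s_thermo)  # ~1e-23 of the total
```

The card ordering contributes about one part in 10^23: negligible for any practical purpose, yet not literally zero.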
phoodoo,
Yes, and in fact I do so here when I talk about two different kinds of entropy of a card deck: thermodynamic entropy and “logical” entropy.
What causes problems is when people conflate different kinds of entropy, apply the wrong kind of entropy to a problem, or refer to something that isn’t entropy as if it were.
IDers and creationists are notorious for trying to equate thermodynamic entropy with disorder and then arguing that if order arises, it constitutes a violation of the Second Law of Thermodynamics.
keiths,
Thanks, but I then take it that you’re saying that Lambert is simply wrong when he says we have to divorce thermodynamic entropy (the first kind you talk about) from anybody’s ignorance of this or that. He says that view of the matter is incorrect. Can you say why you think he’s wrong about that (instead of simply asserting that he is)?
Thanks.
walto,
I’ve been explaining throughout the thread why he (and Sal) are incorrect.
The comments most relevant to the ‘missing knowledge’ question are here…
…and here:
walto:
No, I think Sal and I agree on the meaning of ‘dispersal’. It’s just that Sal wants to pretend that ‘the dispersal of energy’ means the same thing as ‘the dispersal of energy associated with each species’.
This comment explains why:
keiths,
Why conclude the answer is Both rather than that Yolanda is in a better epistemic position to determine what’s going on than Xavier?
BOTH is clearly the wrong conclusion to come to if, for example, one of two observers of a scene is colorblind and one isn’t, or one observer of a fly’s eye has a microscope and one doesn’t, etc. The natural conclusion would seem to be that Xavier has less information and is not in as good a position as Yolanda to opine. That, at least, is the conclusion we’d make in most such cases.
Here endeth our survey of statistical mechanics. We have touched upon only a very few elementary applications of what is, in fact, a science of the utmost generality. A completely rigorous construction of the foundations of statistical mechanics presents deep problems of an essentially philosophical nature — problems still not fully resolved, after almost a century of work by a succession of profound scholars, beginning with L. Boltzmann.
…
Finally, statistical mechanics offers us the immense intellectual satisfaction of rendering transparent what thermodynamics leaves opaque: that is, statistical mechanics casts a brilliant light of interpretation throughout most of the realm in which classical thermodynamics can afford us little more than phenomenological description.
– Leonard K. Nash. Elements of Statistical Thermodynamics
Yet, for all its immense power, thermodynamics is a science that fails to reward man’s quest for understanding.
– Leonard K. Nash
walto,
She is in a better epistemic position than Xavier, which means she has more information about what’s going on. Therefore, from her vantage point, the entropy of the system is always lower than it is for Xavier.
If you’re tempted to think that her answer is the “right” answer, however, because she’s in a better epistemic position, don’t. Like Xavier’s, her knowledge of the system’s microstate is incomplete. There are (in principle) observers who are in better and better epistemic positions than both Yolanda and Xavier, all the way up to Damon the Demon, who knows the exact microstate of the system at all times.
For Damon, the entropy is always zero, because there is only one possible microstate for the system to be in — the one he already knows the system is in — and the logarithm of one is zero. But if you take that to be the “right” answer, because Damon is in the best possible epistemic position, then you have to regard the entropy of everything as being zero, which renders the Second Law useless. (It isn’t violated, because the Second Law allows for entropy to remain constant, but for entropy to remain zero all the time is pretty uninformative and not very helpful.)
The problem is solved when you accept that there is no single “right” answer and that the entropy is observer-dependent. It depends on how much knowledge each particular observer is missing about the detailed state of the system.
Agree.
Half agree. At issue is this statement by Keiths and supported by DNA_jock:
Keiths said energy is not dispersed in the mixing of two gases.
Keiths is equivocating what Lambert means by “energy is dispersed”.
I was able to figure out what Lambert meant, and it didn’t surprise me to find others arrived at the same interpretation as mine as witnessed by this lecture slide.
What Keiths means by energy dispersal in the mixing of gases is not what Lambert and others meant by energy dispersal. Keiths was using the wrong accounting procedure to perceive the energy dispersal.
The correct procedure is to track the energy dispersal of each species before and after mixing, not (as Keiths did) to track the average energy of some randomly chosen molecule in the solution (not caring whether it is a pink or blue molecule, so to speak). If Keiths had bothered to look at the pink molecules in isolation, he’d have realized the energy of the pink molecules was concentrated in the left chamber before mixing and dispersed throughout both chambers after mixing. The folly of his interpretation was really borne out when I used monoatomic molecules in one chamber and diatomic molecules in the other. In the case of the diatomic molecules, the energy prior to mixing is clearly less dispersed than it is after mixing.
I haven’t seen Keiths acknowledge his equivocations and errors. Also, DNA_Jock wasn’t astute enough to see the Keiths equivocation; in fact he swallowed the Kool-Aid with a total lack of insight and critical thinking.
Congratulations to DNA_jock for an epic fail yet again over a trivial matter.
Assistant clerk in training.
Sal,
For everyone’s entertainment, here’s a challenge I’d like you to undertake.
You claim that entropy is a measure of energy dispersal; I say it’s a measure of an observer’s missing knowledge about the detailed state of a system.
Read the following thought experiment. If you’re right and I’m wrong, you should be able to explain to us how Xavier and Yolanda can look at the same physical system and get different values for the entropy and the change in entropy:
They’re looking at the same system, Sal. If you’re right that entropy is a measure of energy dispersal — a purely physical process — they should get the same answer when they determine the entropy. They don’t. How do you explain that?
I can explain it easily. Xavier and Yolanda have differing amounts of knowledge about the state of the system, so of course they get different values for the entropy. Entropy is a measure of their missing knowledge, after all.
My definition works. Yours doesn’t.
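For reference, the standard ideal-gas arithmetic behind the thought experiment: an observer who cannot distinguish the two isotopes computes zero mixing entropy, while one who can computes 2R ln 2 for two moles (a sketch assuming equal amounts, volumes, temperature, and pressure):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(n_a, n_b, distinguishable):
    """Entropy of mixing two equal-pressure, equal-temperature ideal-gas
    samples. If the observer cannot tell the species apart, removing the
    partition changes nothing measurable, so delta-S = 0."""
    if not distinguishable:
        return 0.0
    n = n_a + n_b
    x_a, x_b = n_a / n, n_b / n
    return -n * R * (x_a * math.log(x_a) + x_b * math.log(x_b))

print(mixing_entropy(1.0, 1.0, distinguishable=False))  # Xavier: 0.0 J/K
print(mixing_entropy(1.0, 1.0, distinguishable=True))   # Yolanda: ~11.5 J/K
```

Same gases, same partition removal, two different answers, depending only on whether the observer can tell the species apart.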
Sal:
Sal Logic: If two or more people make the same mistake, it’s no longer a mistake.
Sal is clinging to that slide like a security blanket. Too bad it’s wrong.
Just because Yolanda’s epistemic position isn’t perfect doesn’t mean it’s not better than Xavier’s. It’s better than Xavier’s because it’s more accurate–i.e., closer to being right.
The natural conclusion is, of course, that Xavier and Yolanda aren’t both right–they’re both wrong. But Yolanda is closer to being right. This all should be obvious.
So the real meat of your post is in the final section, about Damon. In that passage, you say that, given any microstate, someone with complete knowledge will always come to the conclusion that there is no entropy. But that claim just begs the question against Lambert and others who hold that the concept is NOT epistemic. Lambert would simply deny that, given a thermodynamic understanding of entropy, Damon would determine each microstate to have the same entropy. Lambert would say that when one makes such statements one is simply conflating the thermodynamic concept that he is interested in, with Shannon’s quite different concept.
It’s no response to simply insist that a Laplacian demon would calculate all microstates at zero entropy when your adversary denies that! You are relativizing the notion of entropy the same way some of my freshman students relativize truth when they say that whatever somebody believes is “true for them.” I know truth is not like that, but I’m willing to be convinced regarding entropy. You’ll have to do a better job than just saying it over and over though.
Here, from Physics Forum, is a discussion (basically a spanking) of the claim that entropy is observer-relative:
https://www.physicsforums.com/threads/isnt-entropy-relative.2965/
walto:
What is the “right” answer? How much entropy before mixing? How much after?
DNA_Jock and Keiths,
Are you going to concede that the pink molecules (and therefore the energy of the pink molecules) become more dispersed after the valve is opened than before? Or are you guys going to wallow in your self-imposed ignorance?
Otherwise, explain to the readers how, in the bottom of the diagram after the valve is opened, the pink molecules (and therefore their energy) are not more dispersed than before the valve was opened.
You guys got caught making a junior high school mistake, and now you’ll try to save face at all costs. Too funny.
HAHAHA!
Why should I know this? If I don’t, does that suggest to you there isn’t a right answer? I don’t know the cube root of 897,333,285.001 either.
walto:
Could you link to the specific comment(s) in which you think the “spanking” is taking place, and summarize the key point(s) in your own words?
Why? Are you incapable of reading it yourself?
keiths, do you have any reason for supposing this quantity is, unlike most scientifically measured quantities, observer-relative or don’t you? Do you just like the idea?
Why is the analogy between Yolanda with somebody with ordinary sight and Xavier with someone who is colorblind a bad analogy? Why is entropy different from, e.g., heat? Maybe Yolanda has a thermometer and Xavier doesn’t. If she says some object is 57 degrees F and he says it’s 65, are they BOTH right?
Is entropy measurable? Anyone can answer.
Sal,
Obviously, the “pink” molecules are more dispersed afterwards than before, and so is the energy. Neither DNA_Jock nor I have argued otherwise.
Now repeat the experiment, starting with an equal number of “blue” molecules in the other chamber. After mixing, are the pink molecules more dispersed? Yes, of course. Are the blue molecules more dispersed? Yes, of course. Is energy more dispersed? No, because its distribution hasn’t changed.
Cue your protest: “But…but…the blue energy has dispersed into the pink chamber, and the pink energy has dispersed into the blue chamber!”
…or some variation of that. Right?
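The accounting here can be sketched with toy numbers (all figures invented; “energy” is simply tallied per chamber, with each species assumed to spread evenly after mixing):

```python
# Two chambers, N molecules each, same temperature; e = average energy per
# molecule. All numbers are invented bookkeeping, not measured physics.
N, e = 1000, 1.0

# Before mixing: pink energy all in the left chamber, blue all in the right.
pink_before = {"left": N * e, "right": 0.0}
blue_before = {"left": 0.0, "right": N * e}

# After mixing: each species is spread evenly over both chambers.
pink_after = {"left": N * e / 2, "right": N * e / 2}
blue_after = {"left": N * e / 2, "right": N * e / 2}

def chamber_totals(a, b):
    return (a["left"] + b["left"], a["right"] + b["right"])

print(pink_before, "->", pink_after)             # pink energy: dispersed
print(chamber_totals(pink_before, blue_before))  # (1000.0, 1000.0)
print(chamber_totals(pink_after, blue_after))    # (1000.0, 1000.0): unchanged
```

Each species’ energy ledger spreads out, yet the total energy per chamber is identical before and after, which is exactly the distinction the two sides keep arguing over.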
Lambert has answered Yes. See his paper that Joe F. linked above.
walto,
If you can’t tell us what the “right” answer is, can you at least tell us how one could go about determining it?
walto:
keiths:
walto:
I read it, but didn’t see any “spanking” taking place. Apparently “spanking”, like entropy, is observer-relative.
If you link to the comment(s) in question and summarize the key points, I’ll have a better idea of why you think the observer-relative position has been “spanked”, and I can respond.
I guess I’d do this: (from https://www.chem.wisc.edu/deptfiles/genchem/netorial/modules/thermodynamics/entropy/entropy04.htm)
Is there something necessarily observer-relative about this measurement?
Not that I can tell.
You can’t color the energy. There is no color in the famous equation of kinetic energy:
KE = 1/2 m v^2
That’s your problem.
You can speak of the energy residing in the blue molecules however.
The energy in the blue molecules is more spread out. It doesn’t matter that the energy present in the blue molecules may have flowed in and out with the pink molecules.
I can take a whole bunch of coins that belong to me and spread them out on the table. It doesn’t matter if the coins were owned by blue aliens before I got a hold of them, they are my coins, and my coins can get spread out. In fact if a blue alien trades his coins for mine on the table, but there is no net loss by either of us, my coins can still be spread out!
In like manner for the pink molecules after mixing, they have their energy. It doesn’t matter if after mixing some of the energy residing in some of the pink molecules previously resided in some blue molecules, that doesn’t change the fact that at any given instant after mixing the energy in the pink molecules is more dispersed.
In any case, the way I describe it is the way Lambert, Evans, Townsend, and just about every author cited describe it.
In fact, the model of mixing entropy as the sum of the expansion entropies is independent of the energy-dispersal description; you’ll find it in literature that calls entropy disorder (I provided one such link earlier).
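That sum-of-expansion-entropies model is easy to check numerically; for equal chambers it reproduces the familiar 2R ln 2 entropy of mixing (amounts and volumes below are arbitrary):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def expansion_entropy(n_mol, V_initial, V_final):
    """Isothermal ideal-gas expansion: delta-S = n R ln(V_final / V_initial)."""
    return n_mol * R * math.log(V_final / V_initial)

# Each species expands from its own chamber into the combined volume.
n_pink, n_blue = 1.0, 1.0   # moles (arbitrary)
V_pink, V_blue = 1.0, 1.0   # chamber volumes (arbitrary, equal)
V_total = V_pink + V_blue

dS_mix = (expansion_entropy(n_pink, V_pink, V_total)
          + expansion_entropy(n_blue, V_blue, V_total))

print(dS_mix)  # ~11.5 J/K, i.e. 2 R ln 2: the textbook entropy of mixing
```

Treating the mixing as two independent expansions gives the same number as the mole-fraction formula, without ever asking whose energy went where.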
If you want to equivocate the intended meaning of phrases by others for your own self-delusional purposes, that’s up to you, but I was able to figure out how Lambert and others were using their terms. You obviously refuse to be charitable in your reading of what they write.
Suit yourself and be right in your own mind with your own mis-readings of intended meanings by others. You insist you have to interpret what Lambert says by Keiths’ interpretation of what Lambert says, not Lambert’s interpretation of what Lambert says.
Sal,
No, that’s your problem.
More on this after I finish my response(s) to walto.
I have summarized the key points in my own posts here. There’s just a lot more detail there. If you successfully respond to my posts, that should take care of the relevant posts on that site, which just hammer home that claims that entropy measurements are observer-relative are mistaken. I’ve suggested the same thing here, and was interested to see what professionals, other than Lambert, say about this. From what I’ve been able to find out (which is not dispositive, of course), you seem to be pretty much on your own on this issue.
I still don’t know why you take this position. For all I know, you have good reasons for it and have simply not posted them yet. To date, just begging the question against Lambert.
It’s a shame that Mike Elzinga and OlegT no longer comment here. There is quite a bit buried in old threads. Sal helpfully linked to a comment by Mike Elzinga here.
Hey keiths,
keep it up. I’m pretty sure you are wrong but I find the argument to be interesting.
I want to see you answer the objections for a while to get a handle on this .
I want to see how it all goes.
The presuppositionalist in me might say
“well of course thermodynamics is observer dependent the authoritative observer is God therefore thermodynamics presupposes God.”
Regardless it’s an interesting point of view that I’ve never seen before
peace
walto,
When I first encountered the idea that entropy was observer-relative, it seemed bizarre, counterintuitive, and maybe even a bit spooky. And fishy. Why, if entropy is observer-relative, are there standard, published tables of the entropy of various substances? If my knowledge is relevant, why don’t I have to take it into account when I look up an entropy value in the table? Everyone gets the same answer when they consult the table. Shouldn’t the answers differ if entropy is observer-relative?
The solution to the conundrum is actually quite simple, upon reflection. In real life, it’s almost always true that different observers have the same (lack of) knowledge of the system’s microstate at any given time. If they’re all in the same state of knowledge relative to the system’s microstate, then they’ll all get the same answer when determining the entropy — even though it’s observer-relative.
For example, standard molar entropy tables give the entropies for different solids at specified temperatures. The user of the table is assumed to know nothing about the system other than its composition and its temperature. Well, if all observers of a system know the same things about it — its composition and temperature, but nothing else about the microstate — then of course they should get the same “answer” for the entropy.
So the problem with the method you quoted is that it essentially makes the same assumptions as the tables. The observer measures the heat entering the system, notes the corresponding temperature change, and infers the entropy change from the equation. This is legitimate only because the observer is assumed to have no other knowledge of the system’s microstate. The value of W in Boltzmann’s formula encompasses all potentially accessible microstates at the given temperature. In real life, that’s generally true, and the measurements are practically useful. But in theory the observers can have vastly different information about the microstate, and thus W can be much smaller for them.
Thanks, Alan. That was indeed helpful. And I see this issue has come up before! I particularly like this remark by Elizabeth on the following page of comments:
I now see why Mung is agreeing with keiths on this issue.
Right. Like two people having ordinary eyesight opining on the color of a chestnut.
That’s the question-begging part. WHY SHOULD WE BELIEVE IT IS OBSERVER-RELATIVE?
Why do you think there’s a conundrum in the first place? There’s nothing to solve if we take entropy as an objective quality. Why do you think it isn’t? (Hint: Saying it isn’t, isn’t giving a reason.)
walto,
The short answer is that distinguishability is key, and distinguishability is observer-relative.
That’s why I was careful in my thought experiment to specify that Yolanda could tell the difference between the two isotopes while Xavier could not. It’s also why the molecules are “pink” and “blue” in Sal’s examples, even though the pinkness and blueness have no impact whatsoever on the underlying physics.
Distinguishability is observer-relative, and differences in distinguishability lead to differences in knowledge among observers.