In Slight Defense of Granville Sewell: A. Lehninger, Larry Moran, L. Boltzmann

The basic biochemistry textbook I study from is Lehninger Principles of Biochemistry. It’s a well-regarded college textbook. But there’s a minor problem with the book’s Granville-Sewell-like description of entropy:

The randomness or disorder of the components of a chemical system is expressed as entropy,

Nelson, David L.; Cox, Michael M.. Lehninger Principles of Biochemistry (Page 23). W.H. Freeman. Kindle Edition.

Gag!

And from the textbook written by our very own Larry Moran:

enthalpy (H). A thermodynamic state function that describes the heat content of a system.

entropy (S). A thermodynamic state function that describes the randomness or disorder of a system.

Principles of Biochemistry, 5th Edition, Glossary

Laurence A. Moran, University of Toronto
H. Robert Horton, North Carolina State University
K. Gray Scrimgeour, University of Toronto
Marc D. Perry, University of Toronto

Choke!

Thankfully Dan Styer almost comes to the rescue as he criticized ID proponent Granville Sewell:

On 27 December 2005, Professor Granville Sewell’s essay “Evolution’s Thermodynamic Failure” appeared in an opinion magazine titled The American Spectator. The second paragraph of that essay conflates “entropy”, “randomness”, “disorder”, and “uniformity”. The third paragraph claims that “Natural forces, such as corrosion, erosion, fire and explosions, do not create order, they destroy it.” Investigation of this second claim reveals not only that it is false, but also shows the first conflation to be an error.

“Disorder” is an analogy for entropy, not a definition for entropy. Analogies are powerful but imperfect. When I say “My love is a red, red rose”, I mean that my love shares some characteristics with a rose: both are beautiful, both are ethereal, both are desirable. But my love does not share all characteristics with a rose: a rose is susceptible to aphid infection, my love is not; a rose should be fed 6-12-6 fertilizer, which would poison my love.

Is Entropy Disorder?
Dan Styer criticizing Granville Sewell

Almost well said. Entropy is often correlated with disorder, but it isn’t the same thing as disorder. Shoe size is correlated with reading ability (a small shoe size suggests we’re dealing with a toddler, and therefore a non-reader), yet shoe size is not reading ability. Perhaps “analogy” is too generous a way to describe the mistake of saying entropy is disorder. But it’s a step in the right direction.

Styer’s comment leads to this rhetorical challenge by physicist Mike Elzinga to creationists:

Where are the order/disorder and “information” in that? Creationists need to explain where Clausius’s coining of the word entropy means that everything tends to disorder and decay.

Mike Elzinga
2lot trouble

Hear, hear.

But why single out creationists for equating entropy with disorder? How about biochemists like Lehninger (or the authors carrying Lehninger’s torch), or Larry Moran, or the founder of statistical mechanics, Ludwig Boltzmann himself, who suggested entropy is disorder?

“In order to explain the fact that the calculations based on this assumption [“…that by far the largest number of possible states have the characteristic properties of the Maxwell distribution…”] correspond to actually observable processes, one must assume that an enormously complicated mechanical system represents a good picture of the world, and that all or at least most of the parts of it surrounding us are initially in a very ordered — and therefore very improbable — state. When this is the case, then whenever two or more small parts of it come into interaction with each other, the system formed by these parts is also initially in an ordered state and when left to itself it rapidly proceeds to the disordered most probable state.”

Ludwig Boltzmann

Boltzmann used a very poor choice of words. He (ahem) made a small mistake in a qualitative description, not a formal definition.

Chemistry professor Frank Lambert comments about Boltzmann’s unfortunate legacy of “disordered most probable states”:

That slight, innocent paragraph of a sincere man — but before modern understanding of q(rev)/T via knowledge of molecular behavior (Boltzmann believed that molecules perhaps could occupy only an infinitesimal volume of space), or quantum mechanics, or the Third Law — that paragraph and its similar nearby words are the foundation of all dependence on “entropy is a measure of disorder”. Because of it, uncountable thousands of scientists and non-scientists have spent endless hours in thought and argument involving ‘disorder’ and entropy in the past century. Apparently never having read its astonishingly overly-simplistic basis, they believed that somewhere there was some profound base. Somewhere. There isn’t. Boltzmann was the source and no one bothered to challenge him. Why should they?

Boltzmann’s concept of entropy change was accepted for a century primarily because skilled physicists and thermodynamicists focused on the fascinating relationships and powerful theoretical and practical conclusions arising from entropy’s relation to the behavior of matter. They were not concerned with conceptual, non-mathematical answers to the question, “What is entropy, really?” that their students occasionally had the courage to ask. Their response, because it was what had been taught to them, was “Learn how to calculate changes in entropy. Then you will understand what entropy ‘really is’.”

There is no basis in physical science for interpreting entropy change as involving order and disorder.

Boltzmann

So in slight defense of Granville Sewell and numerous other creationists from Henry Morris to Duane Gish to Kairos Turbo Encabulator Focus for equating entropy with disorder, when we ask “where did they get those distorted notions of entropy?”, we need look no further than many biochemists and textbook authors from reputable universities, and unfortunately one of the fathers of statistical mechanics, Ludwig Boltzmann himself!

Here is a list of chemistry books that treat entropy correctly:
EntropySite.Oxy.Edu

April 2014

The 36 Science Textbooks That Have Deleted “disorder” From Their Description of the Nature of Entropy
…..

Lehninger’s and Moran’s books aren’t on that list. Their books, however, did make the list of biochemistry books judged by some Australian authors as decreasing in fitness:

Is the evolution of biochemistry texts decreasing fitness? A case study of pedagogical error in bioenergetics

The initial impetus for this research was the discovery by the authors of a variety of common and consistent errors and misconceptions in pedagogical literature on the topic of thermodynamics in Biochemistry. A systematic survey was undertaken of material on thermodynamics in Biochemistry textbooks commonly used in Australian Universities over the period from the 1920s up to 2010.
..
Lehninger, Nelson and Cox
….
Garrett and Grisham

Moran and Scrimgeour
….
Jeremy M. Berg, John L. Tymoczko, Lubert Stryer in 2002

A few creationists got it right, like Gange:
Entropy and Disorder from a Creationist Book Endorsed by a Nobel Laureate

Here is chemistry professor Frank Lambert’s informal definition of entropy:

Entropy is simply a quantitative measure of what the second law of thermodynamics describes: the dispersal of energy in a process in our material world. Entropy is not a complicated concept qualitatively. Most certainly, entropy is not disorder nor a measure of chaos — even though it is thus erroneously defined in dictionaries or pre-2002 sources.

Entropy

This is the correct Ludwig Boltzmann entropy as written by Planck:

S = k log W

where
S = entropy
k = Boltzmann’s constant
W = number of microstates

Also there is Clausius:

ΔS = Integral (dq/T)

where
ΔS = change in entropy
dq = inexact differential of heat q (along a reversible path)
T = absolute temperature
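
To make both definitions concrete, here is a minimal Python sketch (my own illustration, not taken from any of the sources quoted above) that simply evaluates each formula for a toy case; the numbers plugged in are arbitrary examples.

import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(W):
    # S = k log W, with "log" the natural logarithm and W the number of microstates
    return k_B * math.log(W)

def clausius_delta_S(Q, T):
    # delta-S = Q/T for heat Q absorbed reversibly at constant temperature T
    return Q / T

print(boltzmann_entropy(36))          # a toy system with 36 microstates: ~4.95e-23 J/K
print(clausius_delta_S(1000.0, 300))  # 1000 J absorbed reversibly at 300 K: ~3.33 J/K

Note that neither formula contains any term for order or disorder.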

As Dr. Mike asked rhetorically: “Where are the order/disorder and “information” in that?”

My answer: Nowhere! Most creationists (except guys like me and Robert Gange) don’t realize it’s nowhere.

1,697 thoughts on “In Slight Defense of Granville Sewell: A. Lehninger, Larry Moran, L. Boltzmann”

  1. Gee guys, look at that graph. What do you think the enthalpy of fusion is for freezing at the normal melting point of normal ice, 273.15K, in J/mol? Eh, somewhere around 6011 J/mol, wouldn’t you say (kind of hard to eyeball it from the graph, but that’s my estimate)? How about in J/gram? Just divide 6011 J/mol by the molar mass of water, 18.02 g/mol:

    (6011 J/mol) / (18.02 g/mol) ≈ 333.55 J/g

    🙂

    Isn’t it amazing that it corresponds in magnitude to the enthalpy-change figure of 333.55 J/g I provided?

    Goodness, you’d think that if DNA_Jock could spoon-feed himself from the information in the paper he provided, he could start to answer the example question of the entropy change of melting a normal 20-gram ice cube at the normal temperature of 273.15K.

    Oh wait, too bad for him, since he said:

    dQ/T is rarely informative

    So he won’t be able to use dQ or T. Tsk tsk.

    He’ll have to resort to using keiths’ and mung’s patented technique of:

    1. computing missing information
    2. counting microstates

    Too bad for DNA_Jock, he might not be able to do it without first using:

    dS = dQ/T

    Oh, but that would be cheating since DNA_Jock said in the modern day entropy is derived by “counting microstates all the way.” So count away DNA_Jock. Hahaha!

    But if he used the freaking graph he himself provided and used dQ/T to compute entropy change, well let’s see what would happen:

    T melting = 273.15
    ΔH = Lf = 6011 J/mol; (6011 J/mol) / (18.02 g/mol) ≈ 333.55 J/g

    starting with Clausius (which DNA_Jock says is rarely informative):

    dS = dQ/T

    then integrating (T is constant, since melting at the melting point is isothermal):

    ΔS = ΔQ/T

    ΔQ for 20 grams ice

    ΔQ = 20 grams x 333.55 J / gram = 6671 J

    thus

    ΔS = ΔQ/T = 6671 J / 273.15K = 24.42 J/K

    exactly the figure I provided earlier. Too bad, DNA_Jock, better luck equivocating next time, but this time you got caught. Now compute that figure without using dQ/T, since in your own DNA_Jock school of thermodynamics you said:

    dQ/T is rarely informative

    — DNA_Jock

    Read it and weep bud. Hahaha!
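
    For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same calculation (the figures are the ones quoted above; the variable names are just for illustration):

    # Entropy change for melting 20 g of ice at 273.15 K, via dS = dQ/T (isothermal)
    molar_heat_of_fusion = 6011.0    # J/mol at 273.15 K, read off the graph
    molar_mass_water = 18.02         # g/mol
    mass_ice = 20.0                  # grams
    T_melt = 273.15                  # K

    Lf = molar_heat_of_fusion / molar_mass_water   # ~333.6 J/g
    delta_Q = mass_ice * Lf                        # ~6671 J
    delta_S = delta_Q / T_melt                     # ~24.4 J/K
    print(Lf, delta_Q, delta_S)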

  2. Excellent, Sal, you are making progress.
    I’ve been trying to get you to apply Kirchhoff’s Law to the ΔH-fusion of ice for a while.
    As I noted:

    I was trying to walk Sal through the implications (on how ΔH will vary with temperature) of Kirchhoff’s Law (viz: ∂ΔH/∂T = ΔCp). Sal seemed to be having difficulty with the meaning of the symbol Δ. He finally looked it up, and discovered that it’s called “Kirchhoff’s Law of Thermochemistry“, decided that it didn’t apply to a phase change, and crowed

    The ΔH in question is dispersal of energy from the surroundings to melt the ice, and that is different from the ΔH of a non-existent chemical reaction involving reactants and products and that are used as inputs to Kirchoff’s laws.

    I found Sal’s claim that there were two different types of ΔH and that therefore Kirchhoff’s Law did not apply to phase changes a particularly delightful demonstration that he didn’t understand Hess’s Law / the First Law of Thermodynamics (1LoT), so I posted an image from Szedlak where they apply Kirchhoff to the freezing of ice, and I made the comment:

    Oh dear, Sal doesn’t understand the First Law of Thermodynamics.

    So what does Sal do in response? He demonstrates that he doesn’t understand the 1LoT by attempting to draw a distinction between the ΔH of melting and the ΔH of freezing!!
    Sal, the ΔH of freezing is identical to the ΔH of melting, just with the opposite sign. If you are studying one, you are studying the other. This is really, really basic 1LoT stuff.

    Since then (a red letter day!), Sal has learned that Kirchhoff’s Law does apply (although not without a lot of huffing and puffing):

    So liquid water could supercool to 250K and then turn to a snowflake (or whatever piece of ice), but the snow flake, if it melts, will have to go to 273.15K at atmospheric pressure in order to melt, and at that temperature the enthalpy of fusion when the ice melts is in general not the same as the latent heat of fusion of the supercooled liquid water as it freezes into ice (like a snowflake or whatever). It’s right there in the diagram! Freaking look at the diagram of Lf at about 273.15K where all the lines converge and that is about 6004 J/mol which is 333.55 J/g. It certainly isn’t 333.55 J/g (6004 J/mol) at other temperatures is it? Sheesh!

    That’s what I’ve been trying to tell you, Sal. ΔH varies with temperature. It does NOT vary depending on whether you are talking about melting or freezing. It cannot, thanks to the 1LoT.
    At T1, ΔHfus = – ΔHsolid.
    At T2, ΔHfus = – ΔHsolid.
    but they differ.
    Try drawing a little diagram, Sal:

    Ice at 273 <—-> Water at 273

    Ice at 253 <—-> Water at 253

    Going East – West in this diagram involves ΔHfus = – ΔHsolid
    Going North – South involves the integral of Cp dT
    (the amount of Heat absorbed / released depends on the heat capacity (Cp), which is a function of temperature)
    Now, this is the important bit: The change in H (and in S) must be independent of the path chosen. This is Hess’s Law, a.k.a. the First Law of Thermodynamics. It should be obvious that this also means that ΔHfus = – ΔHsolid, otherwise you’d have created a way of magically generating heat.
    Soooo, given ΔHfus at one temperature, how do we calculate ΔHfus at a different temperature?
    Here’s a counterfactual hypothetical, try not to quote-mine it:

    If the heat capacities of ice and water were identical over the range of temperatures in question, then ΔHfus would not vary with temperature.

    But that’s not true, so we have to get a little more sophisticated.
    We could assume (it’s not a bad assumption) that the heat capacities (Cp-ice and Cp-water) will differ by a fixed amount. In which case, ΔH will vary linearly with temperature. This phrase might be familiar.
    Or we could get full-on rigorous, and recognize that ΔCp will vary with temperature, and go all calculussy, thus:
    ∂ΔH/∂T = ΔCp (Kirchhoff’s Law)
    As I asked you originally, you (Sal, the fan of Clausius) could determine the ΔH three different ways:

    Option A: Assume ΔH will vary linearly with T
    Option B: Use ∂ΔH/∂T = ΔCp
    Option C: Measure it with one of those fancy DSC machines

    How many different answers will you get and, most importantly, WHY?
    I am trying to get you to see that when you treat Heat Capacities as if they were universal constants and blithely plug them into your Clausian equations, you are indulging in pure phenomenology, and completely missing the underlying physics.
    Why do heat capacities vary with temperature?
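
    Here, for what it’s worth, is a rough Python sketch of the constant-ΔCp case (Option A, in effect); the heat capacities are approximate handbook values near 273 K, so treat the output as illustrative only:

    # Kirchhoff's Law with a constant delta-Cp:  d(DeltaH)/dT = DeltaCp
    # =>  DeltaH_fus(T) ~ DeltaH_fus(T_ref) + DeltaCp * (T - T_ref)
    Cp_water = 75.3      # J/(mol K), liquid water near 273 K (approximate)
    Cp_ice = 37.8        # J/(mol K), ice Ih near 273 K (approximate)
    delta_Cp = Cp_water - Cp_ice

    dH_fus_ref = 6011.0  # J/mol at the reference temperature
    T_ref = 273.15       # K

    def dH_fus(T):
        # linear-in-T approximation; a rigorous treatment integrates Cp(T) data or uses DSC
        return dH_fus_ref + delta_Cp * (T - T_ref)

    print(dH_fus(250.0))  # roughly 5100 J/mol, noticeably less than at 273.15 K

    Either way, the point stands: ΔHfus depends on the temperature, not on whether you call the transition melting or freezing.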

  3. Excellent, Sal, you are making progress.

    You’re just getting called on your equivocations, I guess that’s progress.

    If you could, help Keiths solve the problem of melting a typical 20 gram ice cube in typical conditions at 273.15K with a specific enthalpy of fusion of 333.55 J/g.

    Other than that, you’re just stalling and derailing.

    Btw, have you figured out that freezing isn’t the same thing as melting? Tell the readers: do you think typical ice (like that in a snowflake in atmospheric conditions) will melt at 250K?

    Cheers. 🙂

  4. Dear Readers,

    “Typical” ice (i.e. ice Ih) will not melt at 250K; it will, however, melt at 252K.
    If you press on it hard enough…
    Ever been ice skating, Sal?
    😮
    and ΔHfus = – ΔHsolid.

    Your turn, Sal. Tell the readers why heat capacities vary with temperature.

  5. DNA_Jock,

    I am trying to follow the discussion.

    I am trying to get you to see that when you treat Heat Capacities as if they were universal constants and blithely plug them into your Clausian equations, you are indulging in pure phenomenology, and completely missing the underlying physics.
    Why do heat capacities vary with temperature?

    Does calculating missing information help you understand this? If so can you show me how?

  6. colewd: Does calculating missing information help you understand this? If so can you show me how?

    If you are willing to accept that “more microstates correspond to the macrostate” is equivalent to ‘missing information’, then yes I can.
    Otherwise, no.

  7. stcordova: If you and Keiths and Mung can’t do the calculation or the counting, just say so instead of wasting everyone’s time with the equivocations you just got called on.

    Sorry, but I’m busy right now trying to calculate the entropy of this thread.

  8. DNA_Jock: Since then (a red letter day!), Sal has learned that Kirchhoff’s Law does apply (although not without a lot of huffing and puffing):

    Yes, Salvador is by no means stupid. Just the opposite in fact.

  9. In the other thread, Patrick writes:

    Sorry for the delay in replying. Life is still too interesting by half.

    Your reply didn’t actually answer Mike Elzinga’s challenge from the perspective of your view on entropy.

    That’s right. Here’s what I said:

    My position is not that entropy is subjective, but rather that it is observer-dependent. It’s a measure of the gap between the macrostate — what an observer knows about the exact state of the system — and what would be required to pin down the exact microstate.

    Can you think of any scenario, including Mike’s, in which entropy is not a measure of the missing information as described above? If so, please present it along with an explanation of why you think the missing information interpretation cannot successfully be applied to it.

    Patrick:

    While I still don’t have the time to properly engage in the discussion, you are mistaken about entropy being observer dependent. Whether or not one knows the specific microstate, entropy is related to the number of microstates consistent with the macrostate defined by variables such as P, V, and T. It is related to the amount of energy in the system that is no longer available to do work. That doesn’t change based on the observer.

    You evidently haven’t been following the discussion. We’ve addressed that issue many times.

    See the Xavier/Yolanda thought experiment and subsequent exchanges.

    Macrostates are observer-dependent, and therefore so is entropy.

  10. DNA_Jock,

    If you are willing to accept that “more micro states correspond to the macro state” is equivalent to ‘missing information’, then yes I can.

    So more micro states would include:
    -molecular density
    -molecular energy
    -total molecular volume
    -size of the molecule

    Do you consider “more micro states corresponding to the macro state” observer dependent? If so, how?

  11. Mung,

    On the previous page, I posed some problems for you:

    The probability distribution you use for entropy calculations should accurately reflect whatever relevant information you possess about the microstate. It should correspond to the macrostate, in other words.

    See if you can figure out the correct epistemic probability distribution for each of the following macrostates, assuming fair dice and a random, blindfolded throw:

    I. For two dice, one red and one green, where the microstates are in (r, g) form:

    a) Your friend, an accurate and honest observer, tells you that r + g, the sum of the numbers on the two dice, is greater than eight.

    b) She tells you that the number on the green die, g, is one.

    c) She tells you that the sum is twelve.

    d) She tells you that r and g are the last two digits, in order, of the year in which the Japanese surrendered to the US at the end of WWII.

    e) She tells you that g is prime.

    f) She tells you that r raised to the g is exactly 32.

    g) She tells you that g minus r is exactly one.

    After you’ve answered those, we can move on to scenarios in which the microstates are defined as sums in r + g form.

    Do you need a hint?

  12. keiths: Do you need a hint?

    Yes. But take it slow and easy. Let’s work through one case and make sure I understand that one. It sounds to me more like you’re talking about conditional probabilities.

    Are we agreed that after the fact information doesn’t change the underlying probability distribution?

    Envision a set of nodes, with one node on the far left and eight nodes on the far right. The single node on the left has two paths bifurcating to two nodes to its right. Each of those bifurcates to two nodes to its right (a total of four nodes). Those four then bifurcate into two paths each, connecting to the final eight nodes.

    Let’s use a coin toss to decide which path to take, starting from the single node on the far left and arriving at one of the eight nodes on the far right. If it is heads we take the upper path and if it is tails we take the lower path. How many coin tosses do we need to get from the node on the far left to a node on the far right?

    We agree it is three, I hope.

    Now you tell me that you ended up at the top right node. I can infer that the three tosses were HHH. It still took three tosses, regardless of what I now know.

    Do you agree?

    And log2 8 = 3

    The “entropy” doesn’t change after the fact.
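
    In code, the same counting looks like this (a purely illustrative Python sketch):

    import itertools, math

    paths = list(itertools.product("HT", repeat=3))   # every sequence of three tosses
    print(len(paths))              # 8 terminal nodes, one per sequence
    print(math.log2(len(paths)))   # 3.0, the number of tosses needed to single one out
    print(paths[0])                # ('H', 'H', 'H'), the path to the top right node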

  13. Boltzmann’s famous equation relating entropy to missing information is engraved on his tombstone.

    – John Scales Avery, Information Theory and Evolution. p. 83.

    Salvador.

  14. Mung,

    It sounds to me more like you’re talking about conditional probabilities.

    Yes, that’s one way to think about it. We’re asking, “What is the epistemic probability distribution given the macrostate?“. We can then determine the entropy from that probability distribution.

    Are we agreed that after the fact information doesn’t change the underlying probability distribution?

    No, and this is crucial. Entropy is a measure of missing information, so anything that changes the amount of missing information changes the entropy — even after the dice have been thrown.

    I’m drawing an important distinction between metaphysical probability and epistemic probability here. See if the following, taken from an earlier comment of mine, makes sense to you:

    1. You’re blindfolded. You vigorously shake a standard pair of fair dice, one red and one green, and toss them onto a craps table. They come to rest in one particular configuration: there’s a six on the red die and a three on the green die. That is the exact microstate [although you don’t know that].

    2. The dice have been thrown, and they landed with a six on the red die and a three on the green die. It’s over and done. The metaphysical probability of (red=6, green=3) is now one. That’s how things turned out; the other microstates are no longer live possibilities. Their metaphysical probabilities are now zero.

    3. You don’t know that. No one has told you anything about the result of the throw, and you are still blindfolded. As far as you know, the dice could be in any of the 36 possible microstates, with equal epistemic probability. W for you is 36, and the entropy is

    S = log2(36) ≈ 5.17 bits

    Even though there is one exact microstate, with a metaphysical probability of one, you don’t know which microstate that is. The entropy you calculate is not a function of the metaphysical probability distribution, but of the epistemic probability distribution, which for you is currently a distribution that assigns a probability of 1/36 to each of the 36 possible microstates.

    Are you with me so far? Do you understand the distinction I’m drawing between metaphysical and epistemic probability?

  15. keiths: Do you understand the distinction I’m drawing between metaphysical and epistemic probability?

    I do not. I think all probabilities are epistemic. Perhaps begin with what you mean by a metaphysical probability.

    Yes, that’s one way to think about it. We’re asking, “What is the epistemic probability distribution given the macrostate?“. We can then determine the entropy from that probability distribution.

    Take the “macrostate” Ma = [the sum of a pair of dice]. Do you agree that is a “macrostate” as you are using the term?

    What is the probability distribution?

    What is the entropy?

  16. Mung,

    Perhaps begin with what you mean by a metaphysical probability.

    I explained it already.

    Once the dice have been thrown, there is only one metaphysically possible microstate: however they landed. In the example above, it was (red=6, green=3):

    2. The dice have been thrown, and they landed with a six on the red die and a three on the green die. It’s over and done. The metaphysical probability of (red=6, green=3) is now one. That’s how things turned out; the other microstates are no longer live possibilities. Their metaphysical probabilities are now zero.

    You know that the dice have landed. You know that there is only one metaphysical possibility for the microstate. However, you don’t know which microstate is the actual one. As far as you know, the actual microstate could be any of the 36 possibilities, with equal probability.

    The actual microstate has a metaphysical probability of one, and the other microstates have metaphysical probabilities of zero. Yet all of them have epistemic probabilities of 1/36 from your perspective.

    The entropy is a function of the epistemic probability, not the metaphysical probability. Assuming you have no other information about the outcome, the entropy is therefore

    S = log2(36) ≈ 5.17 bits

  17. keiths: Once the dice have been thrown, there is only one metaphysically possible microstate: however they landed. In the example above, it was (red=6, green=3):

    This is of absolutely no relevance to calculating the entropy. Entropy is an average. One specific outcome of a roll of the dice does not change the average. Neither does the single toss of a coin.

  18. Try to focus, Mung.

    I wrote:

    The entropy is a function of the epistemic probability, not the metaphysical probability.

  19. keiths: The entropy is a function of the epistemic probability, not the metaphysical probability.

    Metaphysical probabilities are irrelevant. I get it. Why did you bring them up? I already told you I thought all probabilities are epistemic. So what the hell are we arguing about, lol?

    After an event occurs that is the event that occurred and not some other event. A coin that lands face up did not land tails up. I get it. This is basic logic.

    The only possibilities that matter are epistemic.

    Entropy is an average. Any single event/outcome doesn’t change the entropy.

    ETA: Cross-Posted. LoL. But still so on point.

  20. Communication theory, however, is not directly concerned with the amount of information associated with the occurrence of a specific event or signal. It deals with sources rather than with particular messages. “If different messages contain different amounts of information, then it is reasonable to talk about the average amount of information per message we can expect to get from the source – the average for all the different messages the source may select. (George A. Miller)”

    Knowledge and the Flow of Information

    It is this average amount of information which is called entropy. The outcome of a specific event has nothing to do with it.

  21. Mung,

    Metaphysical probabilities are irrelevant. I get it. Why did you bring them up?

    Because I’m trying to teach you something. It’s important, but it’s going right over your head.

    The fact that entropy depends on the epistemic probability distribution, and not on the metaphysical probability distribution, tells you that entropy is observer-dependent.

    Different observers can possess different amounts of relevant information about the microstate. If they do, the epistemic probability distribution is different for each of them, and so is the entropy.

    Likewise, the same observer can possess different amounts of information about the same microstate at different points in time. If so, the epistemic probability distribution (and hence the entropy) changes for that observer, even though the microstate remains the same.

    If entropy is a measure of the missing information, as you acknowledge, then how could it not change when the amount of missing information changes?

    Slow down and think about it, Mung.

  22. keiths: If entropy is a measure of the missing information, as you acknowledge, then how could it not change when the amount of missing information changes?

    Slow down and think about it, keiths.

    More specifically, address this post.

    The specific outcome doesn’t affect the number of choices needed to arrive at that specific outcome. If we’re tossing a coin TTT would get us to the bottom right node just as HHH would get us to the top right node.

    In both cases the calculation of the entropy is log2 8 = 3.

    The fact that someone observes that the outcome was the bottom right node does not change that.

  23. keiths: Because I’m trying to teach you something. It’s important, but it’s going right over your head.

    The fact that entropy depends on the epistemic probability distribution, and not on the metaphysical probability distribution, tells you that entropy is observer-dependent.

    I don’t believe in metaphysical probability distributions. I don’t even know what the phrase means. Presumably, you mean by “metaphysical probability distribution” a probability distribution that exists independently of any observer. Such as?

    Yes, whatever you’re trying to teach me is going right over my head.

    If you toss a coin, and it lands heads up, does that mean that every time you toss a coin it will land heads up?

    In the case of tossing a coin, what is the metaphysical probability distribution and what is the epistemic probability distribution, and how do you determine the difference between the two?

    The more times you toss a coin, assuming it is a fair coin, the closer the number of heads observed will approach the number of tails observed. Do you disagree?

    Calculate the entropy.

    Calculate the difference between the metaphysical entropy and the epistemic entropy.

  24. Mung,

    Yes, whatever you’re trying to teach me is going right over my head.

    Sorry, Mung. I tried, but it appears that you simply aren’t bright enough to get it.

  25. keiths: Sorry, Mung. I tried, but it appears that you simply aren’t bright enough to get it.

    That’s fine. But I get to weigh your evaluation of how “bright” I am against your failures to respond to questions that I have posed.

    Have you figured out yet what a thermodynamic process is and why “there is no process” does not follow from a lack of change in entropy?

  26. keiths: Sorry, Mung. I tried, but it appears that you simply aren’t bright enough to get it.

    Give me a reason to believe in “metaphysical” probability distributions.

  27. Take a die with 36 sides. Tossing the die, it will rest on one of the 36 faces with equal probability. Toss the die. The actual face that the die lands on is irrelevant to the probability distribution. The “entropy” of the die is a function of the probability distribution prior to any particular toss of the die. The actual outcome of a particular throw of the die doesn’t change the probability distribution. The actual outcome of a particular throw of the die doesn’t change the entropy.

    I won’t go so far as to say that keiths denies this, but he does seem to suffer from an inability to say that it is so.

  28. Mung,

    That’s fine. But I get to weigh your evaluation of how “bright” I am against your failures to respond to questions that I have posed.

    If your goal is simply to make yourself feel better, there are lots of unconvincing rationalizations available to you.

    I hope instead that you’ll try to understand entropy. There’s plenty of material in this thread to help you, if you’ll go back and review it.

    Good luck.

  29. Mung: So if I decide to define a ‘microstate’ in terms of the sum of the two faces I can do that?

    keiths: Sure. Just make sure the probability distribution you choose is appropriate for that microstate specification.

    Mung: Q1: How many probability distributions are appropriate for that microstate specification?

    keiths: …

    Mung: Q2: What are the probability distributions that are appropriate for that microstate specification?

    keiths: …

    You never answered. Is it because you don’t know? Is it because you don’t want to say? If you don’t know, just say so.

  30. keiths,

    You claim that entropy is defined as missing information. What type of information is included in the second law of thermodynamics?

  31. colewd,

    Do you agree with this: From Wiki

    Statistical mechanics gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a microstate of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/√N where N is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations.

  32. colewd,

    I haven’t confirmed the 1/√N figure, but yes, I agree with that excerpt.
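
    A quick numerical sanity check of that scaling, for what it’s worth (a rough sketch; the two-state toy model and the snapshot count are arbitrary choices of mine): simulate N two-state particles and watch the relative fluctuation of the occupancy track 1/√N.

    import numpy as np

    rng = np.random.default_rng(0)
    for N in (100, 10_000, 1_000_000):
        # fraction of N independent two-state particles found in state "1", over 2000 snapshots
        fractions = rng.binomial(N, 0.5, size=2000) / N
        print(N, fractions.std() / fractions.mean(), 1 / np.sqrt(N))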

    You claim that entropy is defined as missing information. What type of information is included in the second law of thermodynamics?

    We’ve been over this again and again in this thread. The Second Law concerns entropy, and entropy is a measure of missing information — the gap between what is actually known about the microstate and what it would take to specify the microstate exactly. Entropy is the information gap between the macrostate and the exact microstate, in other words.

  33. I’ve given up on Mung, but for the sake of anyone else following along, here are the answers to the questions I posed to him above:

    The probability distribution you use for entropy calculations should accurately reflect whatever relevant information you possess about the microstate. It should correspond to the macrostate, in other words.

    See if you can figure out the correct epistemic probability distribution for each of the following macrostates, assuming fair dice and a random, blindfolded throw:

    I. For two dice, one red and one green, where the microstates are in (r,g) form:

    With no feedback about the result of the throw, all 36 microstates are equally probable. The entropy is therefore

    S = log2(36) ≈ 5.17 bits

    a) Your friend, an accurate and honest observer, tells you that r + g, the sum of the numbers on the two dice, is greater than eight.

    This rules out a lot of microstates, leaving only those in which r + g is greater than eight:
    (3,6)
    (4,5), (4,6)
    (5,4), (5,5), (5,6)
    (6,3), (6,4), (6,5), (6,6)

    Note that although some microstates have been ruled out, the remaining ten microstates are equally probable, just as they were after the throw but before we learned anything about the result. None has gained an advantage over the others. The probability of each is therefore 1/10, because the total probability must add up to one.

    The entropy is

    S = log2(10) ≈ 3.32 bits

    b) She tells you that the number on the green die, g, is one.

    This information eliminates all but six of the microstates:
    (1,1), (2,1), (3,1), (4,1), (5,1), (6,1)

    They’re equally likely, so the probability of each is 1/6. The entropy is

    S = log2(6) ≈ 2.58 bits

    c) She tells you that the sum is twelve.

    All of the microstates are ruled out but one:
    (6,6)

    Its probability is now one, and the entropy is

    S = log2(1) = 0 bits

    d) She tells you that r and g are the last two digits, in order, of the year in which the Japanese surrendered to the US at the end of WWII.

    The Japanese surrendered in 1945, so the only possible microstate is
    (4,5)

    As above, the probability of that single remaining microstate is one, and the entropy is

    S = log2(1) = 0 bits

    e) She tells you that g is prime.

    The value of g is restricted to the prime numbers 2, 3, and 5, but r is free to take on all values from 1 to 6. The possible microstates are

    (1,2), (1,3), (1,5)
    (2,2), (2,3), (2,5)
    (3,2), (3,3), (3,5)
    (4,2), (4,3), (4,5)
    (5,2), (5,3), (5,5)
    (6,2), (6,3), (6,5)

    There are 18 remaining microstates, so each has a probability of 1/18. The entropy is

    S = log2(18) ≈ 4.17 bits

    Note that the entropy is one bit less than when all 36 microstates were possible. That makes sense — we’ve reduced the number of possibilities by a factor of two, so there is one bit less of missing information.

    f) She tells you that r raised to the g is exactly 32.

    This eliminates all microstates except for (2,5). Its probability is one and the entropy is

    S = log2(1) = 0 bits

    g) She tells you that g minus r is exactly one.

    The possible microstates are
    (1,2)
    (2,3)
    (3,4)
    (4,5)
    (5,6)

    The probability of each is 1/5, and the entropy is

    S = log2(5) ≈ 2.32 bits
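
    All of the above can be checked mechanically: enumerate the 36 (r, g) microstates, keep the ones consistent with what the observer was told, and take log2 of the count. A minimal Python sketch (equiprobable microstates assumed throughout, as above):

    from itertools import product
    from math import log2

    microstates = list(product(range(1, 7), repeat=2))   # all (r, g) pairs

    macrostates = {
        "no information":      lambda r, g: True,
        "a) r + g > 8":        lambda r, g: r + g > 8,
        "b) g == 1":           lambda r, g: g == 1,
        "c) r + g == 12":      lambda r, g: r + g == 12,
        "d) (r, g) == (4, 5)": lambda r, g: (r, g) == (4, 5),
        "e) g is prime":       lambda r, g: g in (2, 3, 5),
        "f) r ** g == 32":     lambda r, g: r ** g == 32,
        "g) g - r == 1":       lambda r, g: g - r == 1,
    }

    for label, consistent in macrostates.items():
        W = sum(1 for r, g in microstates if consistent(r, g))
        print(f"{label}: W = {W}, S = log2({W}) = {log2(W):.2f} bits")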

  34. Now suppose that you roll the dice and your honest, accurate friend whispers to you that the sum r + g is greater than eight. To me she whispers that the sum is greater than ten.

    For you, there are ten epistemically possible microstates — all those in which the sum is greater than eight:
    (3,6)
    (4,5), (4,6)
    (5,4), (5,5), (5,6)
    (6,3), (6,4), (6,5), (6,6)

    For me, there are only three possible microstates — those in which the sum is greater than ten:
    (5,6)
    (6,5), (6,6)

    The entropy for you is

    S = log2(10) ≈ 3.32 bits

    From my perspective, the entropy is

    S = log2(3) ≈ 1.58 bits

    Entropy is observer-dependent. It depends on the amount of missing information, which can vary from observer to observer.

  35. Finally, suppose that you roll the dice. Your friend hasn’t told you anything yet, so each of the 36 microstates is possible. You calculate the entropy and get

    S = log2(36) ≈ 5.17 bits

    Now she tells you that the sum is greater than eight. Your missing information has been reduced, so you recalculate the entropy and see that it is now

    S = log2(10) ≈ 3.32 bits

    Now she tells you that the sum is greater than ten. Your missing information has been reduced even further, so you recalculate the entropy and see that it is now

    S = log2(3) ≈ 1.58 bits

    Finally, she tells you that the sum is twelve. All the uncertainty — all the missing information — has been eliminated. You know that the microstate must be (6,6). Your missing information has been reduced yet again, to zero this time, so you recalculate the entropy and see that it is now

    S = log2(1) = 0 bits

    Note that the microstate didn’t change during this time. It was always (6,6).

    What changed was the information you possessed about the microstate. As you learned more about the microstate, the missing information was reduced, and therefore so was the entropy. When all of the missing information had been supplied, so that the exact microstate was known, the entropy was reduced to zero.
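
    The same progression in code (a minimal sketch; the successive filters stand in for the successive whispers):

    from itertools import product
    from math import log2

    states = list(product(range(1, 7), repeat=2))          # 36 possibilities to start
    for note, test in [("sum > 8", lambda r, g: r + g > 8),
                       ("sum > 10", lambda r, g: r + g > 10),
                       ("sum == 12", lambda r, g: r + g == 12)]:
        states = [(r, g) for r, g in states if test(r, g)]  # new information rules microstates out
        print(note, len(states), log2(len(states)))         # 10 -> 3.32, 3 -> 1.58, 1 -> 0.0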

  36. “Typical” ice (i.e. ice Ih) will not melt at 250K; it will, however, melt at 252K.

    DNA_Jock

    LOL, so you admit the idiocy of computing ice melting at 250K, not to mention that the figures you cited for ΔH were for FREEZING of supercooled liquid water, not melting of ice.

    I was wondering how you managed to invoke Kirchhoff for melting of ice at 250K and got ΔH for various transitions. You didn’t bother telling the reader that all the ΔH values were for the phase transition from supercooled water to ice, at every temperature from 250K (or whatever low temperature) up to the normal melting point. Not very straight up on your part: blatant equivocations and misrepresentations of the arguments. No wonder it sounded idiotic, because it was idiotic.

    I defined “typical” and also referenced the college chemistry exam the ice-cube question came from. You’d have gotten an automatic fail for that question, since the exam answer assumed the typical melting point of 273.15 K. And by the way, the typical value is listed when you simply google “melting point”:

    https://en.wikipedia.org/wiki/Melting_point

    The melting point (or, rarely, liquefaction point) of a solid is the temperature at which it changes state from solid to liquid at atmospheric pressure. At the melting point the solid and liquid phase exist in equilibrium. The melting point of a substance depends on pressure and is usually specified at standard pressure. When considered as the temperature of the reverse change from liquid to solid, it is referred to as the freezing point or crystallization point. Because of the ability of some substances to supercool, the freezing point is not considered as a characteristic property of a substance. When the “characteristic freezing point” of a substance is determined, in fact the actual methodology is almost always “the principle of observing the disappearance rather than the formation of ice”, that is, the melting point.[1]

    The melting point of ice at 1 atmosphere of pressure is very close [3] to 0 °C (32 °F, 273.15 K); this is also known as the ice point

    contrast with

    DNA_Jock:

    If you press on it hard enough…
    Ever been ice skating, Sal?

    More attempted equivocations.

    The referenced conditions in my example were 273.15K and a 333.55 J/g specific enthalpy of fusion. You don’t use figures for freezing supercooled water at 250K to answer the question of melting ice at 273.15K with a 333.55 J/gram specific enthalpy, and you don’t need Kirchhoff’s equation for that, as shown by a college chemistry exam solution.

    It should have been clear enough to you what the example was because I provided the two essential numbers for the readers. Now you got caught equivocating, and you try to weasel out with more equivocations.

    Now, using the macroscopic thermodynamic Clausius definition of change of entropy:

    dS = dQ/T

    for a 20 gram ice cube melting at 273.15K with a 333.55 J/gram specific enthalpy of fusion (melting), since this is an isothermal process,

    ΔS = Integral ( dQ/T) = ΔQ/T

    where

    ΔQ = 333.55 J/gram x 20 grams = 6671 J
    T = 273.15 K

    ΔS = ΔQ/T = 6671 J / 273.15 K = 24.42 J/K

    which is (allowing for rounding) essentially the same answer as on the Frazier University chemistry exam (24.4 J/K) provided by the instructor.

    The Clausius version of entropy is, roughly speaking, a macroscopic thermodynamic description of entropy. The macro version can be tied to the micro version of Boltzmann/Gibbs.

    ΔS = Integral (dQ/T) (Clausius, macroscopic thermodynamics)

    ΔS = kB (ln W_final – ln W_initial) (Boltzmann/Gibbs, microscopic/statistical thermodynamics)

    ΔS_Shannon = (ln W_final – ln W_initial) / ln(2) (Shannon information, in bits)

    The first two can then be related:

    Integral ( dQ/T) = kB (ln W_final – ln W_initial) = kB ln (W_final/W_initial)

    where kB = 1.38 x 10^-23 J/K

    which implies

    24.42 J/K = 1.38 x 10^-23 J/K ln (W_final/W_initial)

    24.42 / (1.38 x 10^-23) = ln (W_final/W_initial) = 1.769 x 10^24

    The final figure is in dimensionless nats; it can be converted to Shannon bits by dividing by ln(2):

    ΔS_Shannon = 1.769 x 10^24 / ln (2) = 2.55 x 10^24 Shannon bits

    So the additional “missing information” added by melting the ice is 2.55 x 10^24 Shannon bits. But this is a convoluted way of looking at thermodynamic entropy!
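
    The whole chain from the Clausius number to Shannon bits fits in a few lines of Python (a sketch of the conversion above, using the same constants):

    import math

    k_B = 1.38e-23     # J/K, Boltzmann's constant
    delta_S = 24.42    # J/K, from the Clausius calculation above

    ln_W_ratio = delta_S / k_B                 # ln(W_final / W_initial), about 1.77 x 10^24 nats
    delta_S_bits = ln_W_ratio / math.log(2)    # about 2.55 x 10^24 Shannon bits
    print(ln_W_ratio, delta_S_bits)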

    The problem for Keiths, Mung, and DNA_Jock is that they haven’t worked an example from the “missing information” definition of entropy, or from the “counting microstates” definition, back to the Clausius figure without first doing something like the macroscopic thermodynamic calculation I just provided.

    Even double Nobel Prize winner Linus Pauling could only get an approximation using the microstate-counting approach, and that was done with an idealized, hypothetical model of ice, not liquid water.

    So I’m afraid we’ll continue to get non-answers for actual worked out examples of computing entropy from “missing information” or “counting microstates all the way” from Keiths, Mung, and DNA_Jock unless they start off with the definition of entropy of

    dS = dQ/T

    But then, unless they retract from swearing by this statement:

    dQ/T is rarely informative

    They’ll be stuck.

  37. stcordova: The referenced conditions in my example were 273.15K, and 333.55 J/g specific enthalpy of fusion.

    Yes, and my response was “Cool calculation, bro. But what if the temperature is, say, 250 K ?” which started our conversation; I don’t think my rounding down from 251.165 is what confused you, Sal.
    😉
    You still don’t seem to have taken this on board

    At T1, ΔHfus = – ΔHsolid.
    At T2, ΔHfus = – ΔHsolid.
    but they differ.

    And you still avoid the question:
    WHY does heat capacity vary with temperature?
    Oh well.

  38. Awesome news, Sal!

    It turns out that my sister didn’t throw away our grandfather’s copy of the 6th edition of the “Rubber Bible”, which carried an appendix with “Ratios von Wahrscheinlichkeit” Tables in it.
    The water/ice ratio at STP is none other than = 14.11 !!
    ΔS = kB ln (W1/W2)
    Thus:
    ΔS = kB x ln(14.11) = 3.654 x 10^-23 J/K

    There’s about 6.686 x 10^23 molecules in 20g of ice, so we reckon that gives
    ΔS = 24.43 J/K for 20g.

    She has no idea how they arrived at the “Ratios von Wahrscheinlichkeit” numbers, but she doesn’t mind just plugging them blindly into an equation — she’s never had a very good grasp of scientific concepts — unlike my father and brother, she didn’t go to a good engineering school.

    </ Rev. Dodgson>

  39. keiths, colewd

    keiths writes: Entropy is the information gap between the macrostate and the exact microstate

    He should have written:

    Entropy is the information gap between what the macrostate is taken to be and the exact microstate.

    That entropy is observer dependent doesn’t make macrostates observer dependent. But what someone takes some macrostate to be is, of course, observer-dependent.

    In other words, the excerpt colewd put up is quite right, and keiths’ paraphrase is almost right.

  40. keiths, a couple of weeks ago:

    And now that you’ve reversed yourself on energy dispersal and the missing information view, why not take the final step and acknowledge your error regarding observer dependence?

    walto, today:

    That entropy is observer dependent doesn’t make macrostates observer dependent.

    Excellent! You’ve acknowledged that entropy is observer-dependent. That wasn’t so bad, was it?

    You’re still wrong about macrostates, however.

    Entropy is a function of the macrostate. If macrostates weren’t observer-dependent, then entropy wouldn’t be observer-dependent.

    See this comment.

  41. keiths: Entropy is a function of the macrostate. If macrostates weren’t observer-dependent, then entropy wouldn’t be observer-dependent.

    No, that’s completely confused. My views on, or definitions of, this or that macrostate are observer-dependent; macrostates are not. That confusion has dogged you throughout this thread, making many, if not most, of your posts wrong.

    And that’s a lot of wrong posts!

  42. walto,

    The macrostates themselves are observer-dependent. Reread this comment:

    Now suppose that you roll the dice and your honest, accurate friend whispers to you that the sum r + g is greater than eight. To me she whispers that the sum is greater than ten.

    For you, there are ten epistemically possible microstates — all those in which the sum is greater than eight:
    (3,6)
    (4,5), (4,6)
    (5,4), (5,5), (5,6)
    (6,3), (6,4), (6,5), (6,6)

    For me, there are only three possible microstates — those in which the sum is greater than ten:
    (5,6)
    (6,5), (6,6)

    The entropy for you is

    S = log2(10) ≈ 3.32 bits

    From my perspective, the entropy is

    S = log2(3) ≈ 1.58 bits

    Entropy is observer-dependent. It depends on the amount of missing information, which can vary from observer to observer.

    There are two macrostates in that example. Each macrostate is observer-dependent, because it depends on what the observer knows. You know that the sum is greater than eight; I know that the sum is greater than ten. Different knowledge, different macrostates, different entropy.

  43. keiths: Entropy is observer-dependent. It depends on the amount of missing information, which can vary from observer to observer.

    Yes.

    keiths: There are two macrostates in that example. Each macrostate is observer-dependent, because it depends on what the observer knows

    No.

  44. Do you have an argument to offer along with those assertions?

    The entropy is different for each of us because the macrostates are different. The macrostates are different because what you know — that the sum is greater than eight — is different from what I know: the sum is greater than ten.

  45. keiths:
    Do you have an argument to offer along with those assertions?

    The entropy is different for each of us because the macrostates are different.The macrostates are different because what you know — that the sum is greater than eight — is different from what I know: the sum is greater than ten.

    It’s obvious. Macrostates supervene on microstates; if you can’t see that that’s inconsistent with your position, I can’t help you.

  46. In the example above, your macrostate is associated with the following microstate ensemble…

    (3,6)
    (4,5), (4,6)
    (5,4), (5,5), (5,6)
    (6,3), (6,4), (6,5), (6,6)

    …and my macrostate is associated with this ensemble:

    (5,6)
    (6,5), (6,6)

    Different knowledge -> different macrostates.
    Different macrostates -> different entropy values.

    Entropy is observer-dependent because the macrostates are observer-dependent, and the macrostates are observer-dependent because knowledge is observer-dependent.

    Think about it. It’s actually quite obvious.
