Granville Sewell vs Bob Lloyd

Bob Lloyd, professor emeritus of chemistry at Trinity College Dublin, wrote an opinion article in Mathematical Intelligencer (MI) commenting on Sewell’s not-quite-published AML article. This was mentioned in a previous thread, where Bob briefly commented. Granville was invited to participate but never showed up.

In response to Lloyd, Sewell submitted a letter to the editor. On the advice of a referee, his letter was rejected. (Rightly so, in my view. More on that later.) Sewell has now written a post on Discovery Institute’s blog describing his latest misfortune. The post contains Sewell’s unpublished letter and some of the referee’s comments. I invite you to continue the technical discussion of Sewell’s points started earlier.

Sewell’s reply to Lloyd deals mostly with “X-entropies:”

Lloyd cites my example, given in my letter to the editor in a 2001 Mathematical Intelligencer issue, of carbon and heat diffusing independently of each other in a solid… He proceeds to show that these “entropies” are not independent of each other in certain experiments in liquids. This seems to be his primary criticism of my writings on this topic. I may have left the impression in my 2001 letter that I believed these different “X-entropies” were always independent of each other, but in the more recent AML paper, I wrote:

He then quotes from his AML paper and rambles on for another eleventeen paragraphs. Read at your own risk.

Here is my brief take on this. I will expand on it in the comments.

“X-entropies” are not new quantities. Sewell does not define them in any of his papers and blog posts, but my reading is that they are either other thermodynamic variables (e.g., chemical potential or pressure) or they are regular thermal entropies of different parts of a large system (configurational entropy of carbon and entropy of lattice vibrations in Sewell’s example). Either way, the 2nd law is not threatened. In the latter case, if the two subsystems can interact and exchange energy then the entropy in one can decrease; the decrease is amply compensated, and then some, by an increase of entropy in the other subsystem. We saw this in the previous thread with an ice cube in a glass of water and with spins in a ferromagnet. Compensation works. Sewell has no leg to stand on.
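
To make the compensation bookkeeping concrete, here is a quick back-of-the-envelope sketch (my own illustrative numbers: the textbook latent heat of fusion of ice, roughly 334 J/g, and warm water that stays near 293 K while a small cube melts):

    # Entropy bookkeeping for an ice cube melting in warm water (illustrative numbers).
    m = 10.0          # grams of ice melted
    L = 334.0         # latent heat of fusion of ice, J/g (standard textbook value)
    Q = m * L         # heat absorbed by the ice, supplied by the surrounding water

    dS_ice   = +Q / 273.0   # entropy of the melting ice goes up (melts at 273 K)
    dS_water = -Q / 293.0   # entropy of the warm water goes down (stays near 293 K)

    print(dS_ice, dS_water, dS_ice + dS_water)
    # ~ +12.2 J/K, -11.4 J/K, +0.8 J/K: one part of the system loses entropy,
    # the other gains more, and the total increases. That is all the 2nd law asks.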

151 thoughts on “Granville Sewell vs Bob Lloyd”

  1. @all:

    Anybody that wishes to follow up personally on any of my posts is welcome to contact me by email:

    kdevotalk at cox dot net

  2. I’m sure there are people standing in line to help you with your problem.

    Looking through the fossil record, say at the evolution of the inner ear bones, can you point to a macroevolutionary event, as distinguishable from a microevolutionary event?

  3. Kent_D: The evidence comes from the pro-evolution camp. If you’re a proponent of gradualistic evolution (as opposed to some kind of radical saltationism), my statement is a simple necessary inference from the available evidence, as interpreted through the lens of the new synthesis. Macroevolution (one kind of evolution) is many orders of magnitude less probable than microevolution (another kind of evolution). That something will be produced in, say, a few hundred generations is virtually certain (barring some sort of extinction event); that that new something will be massively more complex than the baseline organism from which it descended is far less certain.
    Am I missing something?

    We really need to get this “tautology business” off the table here. Sewell’s “tautology” is nothing more than brazen circular reasoning.

    He has serious misconceptions in thinking the second law of thermodynamics says that the universe is coming all apart and that entropy is disorder. He inherited those misconceptions from Henry Morris.

    He then uses his misconception to plug in “X-entropy” in the form of “carbon entropy” to “prove” that order in one place is not “compensated” by disorder in another place. This is also a gross misconception, as well as a misuse of the equation into which he plugs his “X-entropies.”

    Then he spins a story about things appearing on planet Earth that are “highly improbable” because of the second law of thermodynamics. He has now established the growth of living organisms and abiogenesis as being “highly improbable” because of the second law of thermodynamics.

    The circle is closed by his followers, who assert that abiogenesis is highly improbable, therefore physicists don’t understand the second law of thermodynamics. Life is “highly improbable,” therefore the second law says everything is falling apart, as proven by the fact that abiogenesis is “highly improbable.”

    The big picture that is missing in all of ID/creationism is that, for centuries now, chemists and physicists have been taking condensed matter apart to find out the rules by which it is constructed. The laws of physics and chemistry were not proclaimed so that ID/creationists can assert that they are inadequate to explain what we see in the universe. The rules ARE the chemistry, physics, and biology. They explain what matter and energy do. They don’t need “information” and “intelligence” to replace them.

    The universe is what it is, and we know many of the rules; especially those that involve atoms and molecules, the building blocks of living organisms.

    ID/creationists get it exactly backwards. They misunderstand and misrepresent the physics, chemistry, and biology; and then they go on to assert that physics, chemistry, and biology are inadequate. They have even taken the extraordinary step of declaring that the “indisputable” second law of thermodynamics proclaims that the universe is coming all apart into a state of disorder, and that therefore the laws of chemistry and physics are inadequate to explain living organisms.

    If an ID/creationist avers that he sincerely wants to understand entropy and the second law, then he should not insist on pushing the discussion onto a story that blatantly abuses those concepts. He should direct his focus directly onto the concepts themselves.

  4. Kent_D,

    “Macroevolution” … oh boy. Okay, let’s start with the basics, like what do you mean by macro-evolution?

    For practical purposes, let’s use the modern dog/wolf as our example. At what point in its proposed evolutionary history does the micro- switch to macro-? Did the Fox genus (Vulpes) descend from a common ancestor or is it a de novo creation? How would you make that determination?

  5. @Elizabeth L:

    Would you please consider creating a new thread? Some recent comments are begging for a response, but I have no wish to disrupt the current thread more than I already have. Thanks.

    @all:

    I’m going to shut up now. Please don’t interpret my silence as intellectual cowardice. If Elizabeth chooses to create a new thread, I’d be happy to respond to any comments, complaints, or criticisms. Otherwise, if you wish to correspond offline from TSZ, you’re welcome to reach me by email (posted above). I will respond as time permits.

  6. Part II. Monkey Do,

    in which Sewell “generalizes” thermal entropy. (Continuing from Part I, in case you forgot with all the latest distractions.)

    To keep things simple, we stick with our discrete version. Sewell looks at the thermodynamic definition of entropy, ΔS = Q/T, and tries to come up with other quantities that tend to increase out of equilibrium. Can one generalize this from temperature to, say, the number of particles N (or their concentration in the continuum case)? We can replace T with N, but what do we replace Q with? At this point, Sewell is stuck. He does not seem to remember that particle number is married to chemical potential. This would lead to a correct, but entirely different story (more on that later).

    He decides to try a shortcut and expresses heat in terms of temperature increment ΔT and heat capacity C: Q = CΔT, getting the following equation: ΔS = CΔT/T. This result is correct, but it is no longer a definition of entropy because it now mixes in a constitutive equation. Heat capacity is temperature-dependent and the form of this dependence C(T) varies from one system to another. Sewell is entirely oblivious to these red-flashing warning signs.

    He next says, literally: let’s replace T with N and set C to a constant number, e.g., 1. (Doug, this is why his “carbon entropy” has the units of volume. It’s not the biggest problem.) He thus arrives at a “definition” of a new quantity X, whose increment is given by this equation: ΔX = ΔN/N. Voilà! Ladies and gentlemen, I give you X-entropy!

    Those who know thermodynamics should compare the above with the introduction of entropy by Clausius through his ingenious analysis of reversible thermodynamic processes (see Wikipedia for a quick reminder). Sewell’s “derivation” is mindless doodling in comparison. It is not based on any fundamental principles, just on plugging stuff in.

    To his credit, Sewell checks whether his X-entropy tends to increase if the system is out of equilibrium. Indeed it does. Particle number on site 3 increases thanks to the flow of particles from sites 2 and 4: ΔN_3 = J_{23} + J_{43}. After rearranging the terms, ΔX is expressed as a sum of products like J_{23}(1/N_3−1/N_2). If site 2 has more particles than site 3 then particles flow from 2 to 3, so both factors are positive and so is the product. X-entropy thus increases in a closed system. The reasoning is identical to the thermal case.

    But since the “derivation” involved some arbitrary steps, such as setting C to a constant, shouldn’t Sewell worry that his X-entropy is not a fundamental quantity? What if C is a function of N? Then ΔX = C(N)ΔN/N. As long as C(N)/N is a decreasing function of N, X-entropy will increase. Does this mean that there are infinitely many X-entropies? No, as I mentioned in Part I, there is only one thermal entropy. Thus we expect that only one of the infinitely many X-entropies will be correct. All the others will be either impostors or redundant copies. But then how do we know that Sewell’s choice, ΔX = ΔN/N, is the right one? What if we choose instead ΔX = −ΔN ln(N)? Under this definition, X-entropy of the system will also tend to increase! As it will under lots of other definitions.
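
    (If you want to see this with your own eyes, here is a little numerical sketch, my own toy code, not anything from Sewell’s paper. Particles diffuse between sites on a ring, and several different candidate “X-entropies” all dutifully increase.)

        # Toy discrete diffusion on a ring of 5 sites; total particle number is conserved.
        # We track three candidate "X-entropies":
        #   X1 = sum ln(N_i)        (Sewell's choice: increments dN/N)
        #   X2 = -sum N_i ln(N_i)   (increments -dN ln N; the real configuration entropy)
        #   X3 = sum sqrt(N_i)      (a made-up alternative: increments dN/(2 sqrt N))
        import numpy as np

        N = np.array([400.0, 100.0, 50.0, 250.0, 200.0])   # initial occupations
        D = 0.1                                             # hopping rate per step

        def candidates(N):
            return (np.sum(np.log(N)), -np.sum(N * np.log(N)), np.sum(np.sqrt(N)))

        history = [candidates(N)]
        for _ in range(50):
            N = N + D * (np.roll(N, 1) + np.roll(N, -1) - 2 * N)   # diffusion step
            history.append(candidates(N))

        X1, X2, X3 = np.array(history).T
        print(np.diff(X1).min() > 0, np.diff(X2).min() > 0, np.diff(X3).min() > 0)
        # True True True: monotone increase alone cannot single out "the" entropy.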

    I won’t keep you guessing. We are fortunate to have statistical physics at our disposal, which can provide the correct answer. It turns out that there already is a well-known physical quantity whose increment is ΔX = −ΔN ln(N), just like I proposed arbitrarily in the previous paragraph. It’s called (drum roll) entropy! It is the configuration entropy of particles distributed among the sites in our problem. It is a physically significant quantity that is related to the number of microstates in the system and hence to the thermodynamics of the system. You can create a temperature gradient and particles will flow under its action. I will provide technical details in a subsequent comment.

    So if my choice for “X-entropy”, ΔX = −ΔN ln(N), gives the familiar entropy, what shall we make of Sewell’s ΔX = ΔN/N? Just like other candidates for entropy in our thermal example are failures, so is his. It tends to increase, but so do a bazillion other wrong “X-entropies” defined incrementally as ΔX = f(N)ΔN, with f(N) a decreasing function. None of them has any physical significance, Sewell’s included.

    So, to wrap it up, Sewell’s X-entropy is a hastily introduced quantity with no physical significance. On close inspection it turns out to be a failed version of ordinary entropy, in this case the configuration entropy of particles. This kind of entropy, of course, allows for compensation, and that runs against Sewell’s logic.

  7. olegt: Heat capacity is temperature-dependent and the form of this dependence C(T) varies from one system to another. Sewell is entirely oblivious to these red-flashing warning signs.

    There is the interesting case of negative heat capacities that would completely stump Sewell.

    Stars have negative heat capacity; their temperature increases as they lose energy. As one can learn from the virial theorem, the kinetic energy of the constituents of a star increases as the constituents fall deeper into their mutual gravitational potential well. In fact, the average kinetic energy of the constituents is always minus one-half of their potential energy for a 1/r potential well.

    And as the kinetic energy of the constituents increases and they continue to have inelastic collisions with each other, they ionize, radiating more energy while falling more deeply into their gravitational well and heating up even more. Finally, they reach such high kinetic energies that they slam into each other in nuclear fusions, thereby releasing intense bursts of electromagnetic radiation and neutrinos. The electromagnetic radiation interacts with the charged particles higher up in the well, partially compensating for the energy they lose through radiation, and slowing their fall into the gravitational well.

    Yet, throughout all this, the second law of thermodynamics still holds even as matter condenses into more complex nuclei within stars. In order for matter to condense at any level of complexity, the second law is needed to release energy and spread it around as the matter begins to “clump.” This remains true for atoms and molecules condensing into more complex systems.

    Since the Big Bang, it has been condensation “all the way down.”

  8. Here is a quick derivation of configuration entropy of particles distributed among k sites. With N_1 particles on site 1, N_2 on site 2 and so on, the number of possible configurations is Ω = N!/(N_1! N_2!…N_k!), where N is the total number of particles. The entropy is S = lnΩ, in natural units. We will work in the thermodynamic limit, where the number of particles on any site is large.

    If we change the number of particles on site 1 from N_1 to N_1 + ΔN_1, the entropy changes by ΔS = −ln[(N_1+ΔN_1)!/N_1!]. For ΔN_1 small compared to N_1, we can approximate (N_1+ΔN_1)!/N_1! = (N_1+ΔN_1)(N_1+ΔN_1−1)(N_1+ΔN_1−2)…(N_1+1) by N_1^{ΔN_1}. Thus ΔS = −ΔN_1 ln(N_1).

    If we change the numbers of particles on all sites then ΔS is given by the sum −ΔN_1 ln(N_1) − ΔN_2 ln(N_2) − … − ΔN_k ln(N_k).

    (Strictly speaking, the above manipulations are valid only if we keep the net number of particles N fixed, so that the particles are transferred between sites but not taken out of the system or brought in.)

    This establishes that configuration entropy of N particles distributed between k sites has the increment ΔS = −ΔN_1 ln(N_1) − ΔN_2 ln(N_2) − … − ΔN_k ln(N_k), as I mentioned in the previous comment.
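
    (A quick numerical sanity check of the factorial approximation above, via the log-gamma function — my own two-liner:)

        import math

        N, dN = 10**6, 100
        exact  = math.lgamma(N + dN + 1) - math.lgamma(N + 1)   # ln[(N+dN)!/N_1!]... i.e. ln[(N+dN)!/N!]
        approx = dN * math.log(N)                                # dN ln(N)
        print(exact, approx)   # about 1381.556 vs 1381.551: good to ~6 significant figures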

    This expression for the entropy, derived from statistical physics, gives sensible results for the distribution of particles. Suppose a particle on site 1 costs energy E_1, a particle on site 2 costs E_2 and so on. If we did not have to worry about energy conservation, maximization of entropy would produce a uniform distribution of particles among all sites. Instead, we must maximize entropy subject to energy conservation, E = N_1 E_1 + N_2 E_2 + … + N_k E_k = const. We also have to conserve the net number of particles, N = N_1 + N_2 + … + N_k = const. Optimization with constraints can be done with Lagrange multipliers, so we maximize S − μN − βE with respect to each variable N_1, N_2,…, N_k. In this way we get [−ln(N_i) − μ − βE_i]ΔN_i = 0 for all i = 1, 2,…, k. We thus obtain the Boltzmann distribution, N_i = C exp(−βE_i). The Lagrange multiplier β is merely the inverse temperature, β = 1/(k_B T).

    Sewell’s X-entropy, defined by increments ΔX_i = ΔN_i/N_i, gives entirely different results. Maximizing X, subject to constraints E = const and N = const, gives [1/N_i − βE_i − μ]ΔN_i = 0. This yields what we can call the Sewell distribution, N_i = 1/(μ + βE_i), which makes absolutely no sense. This confirms, once again, that Sewell’s X-entropy is sheer nonsense.
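
    (For the skeptical, here is a small numerical sketch — entirely my own, with made-up energies — that maximizes the configuration entropy under the two constraints and recovers the Boltzmann form. Swapping the objective for Sewell’s X is left as an exercise; the resulting occupations no longer follow any recognizable physics.)

        # Maximize S = -sum N_i ln(N_i) over three sites with energies E_i,
        # holding total particle number and total energy fixed, then check that
        # the result has the Boltzmann form N_i ~ exp(-beta*E_i).
        import numpy as np
        from scipy.optimize import minimize

        E = np.array([0.0, 1.0, 2.0])   # site energies (arbitrary units, my choice)
        N_tot, E_tot = 1000.0, 600.0    # conserved totals (also my choice)

        neg_entropy = lambda N: np.sum(N * np.log(N))      # minimize -S
        cons = [{'type': 'eq', 'fun': lambda N: np.sum(N) - N_tot},
                {'type': 'eq', 'fun': lambda N: np.dot(N, E) - E_tot}]

        res = minimize(neg_entropy, x0=np.full(3, N_tot / 3), method='SLSQP',
                       bounds=[(1e-6, None)] * 3, constraints=cons)
        N_opt = res.x   # roughly [554, 292, 154]

        # Boltzmann predicts ln(N_i/N_j) = -beta*(E_i - E_j) with a single beta:
        beta01 = -np.log(N_opt[1] / N_opt[0]) / (E[1] - E[0])
        beta12 = -np.log(N_opt[2] / N_opt[1]) / (E[2] - E[1])
        print(N_opt, beta01, beta12)   # the two betas agree (~0.64): Boltzmann it is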

  9. Kent_D: @Elizabeth L:

    Would you please consider creating a new thread? Some recent comments are begging for a response, but I have no wish to disrupt the current thread more than I already have. Thanks.

    I’ve given you posting permissions, and I would be delighted if you’d start a new thread. If you’d rather not, I’m more than happy to do so. If I don’t see one from you in the next few hours or so, I’ll do one anyway.

    Thanks!

    Lizzie

  10. Am I missing something?

    I can’t resist. Yes, you are missing something. If the starting point is the simplest, there’s only one way to move.

    You also haven’t attempted to connect the appearance of maize (here) with increases in “complexity”. What makes you think maize is more “complex” than its ancestors (after some “complexity” develops from the first life)? How are you measuring complexity?

    So far all you’ve said looks like the ol’ “Golly gee whillickers, life is complicated!!one11! Musta been a Designer!”

  11. Part III. Importer-exporter,

    in which Sewell smuggles order across the boundary.

    Here is another howler. In his AML paper, Sewell claims that order in a system can only be imported through the boundary. The basis for this claim is Equation 5 retrofitted for X-entropy. As we discussed earlier, the right-hand side of that equation contains two terms: one gives a contribution from the bulk of the system, where fluxes of energy (or particle number) make thermal or X-entropy increase; the other describes action at the edge, where the system exchanges energy or particles with the outside world.

    If you read the text below Eq. 5 you will see that Sewell employs a rather unsubtle sleight of hand. When he discusses thermal entropy, he says, correctly, that the boundary term represents export or import of energy. (See also my discussion of energy fluxes in Part I.) But when he turns to diffusion of carbon, all of a sudden he is talking of importing and exporting X-entropy. That’s incorrect. In this case, J in Eq. 5 is the flux of particles. In the simpler discrete setting (Part II), J_{23} is the number of particles that went from site 2 to site 3. It is not the amount of entropy!

    So what gets imported into or exported out of the system at the edges is energy (thermal example) and particle number (carbon). Not entropy. Not X-entropy. A flux is defined for conserved quantities. Energy and the number of particles are conserved, so if they vanished in one place they must have gone to another place nearby. The local character of these conservation laws is what allows one to introduce fluxes. In contrast, entropy is not conserved (it tends to increase), so it makes no sense to speak of entropy flow.

    This is an important point, so I will add more on it.

  12. Elizabeth: I dunno, I think I share that opinion. Eyes and flippers seem pretty evolvable.

    Sorry, still can’t resist after some thought.

    I should have written “organisms” rather than “things”. Eyes and flippers aren’t particular things, they are groups of particular parts of organisms. Eyes and flippers are indeed likely to evolve, tiger sharks (down to every last detail) not so much.

  13. Stars have that negative heat capacity because they were designed that way.

    In a universe governed by blind and undirected processes would we even have stars?

  14. Is entropy imported and exported?

    A crucial sleight of hand in Sewell’s AML paper is the claim that entropy of a system tends to increase unless it is exported through the boundary. And since order is the opposite of entropy, order tends to decrease unless it is imported through the boundary. This is entirely wrong because it isn’t entropy that is imported, but rather energy or particles.

    Here is an excerpt from Section 2 of the AML paper:

    Furthermore, Eq. (5) does not simply say that the X-entropy cannot decrease in a closed system; it also says that, in an open system, the X-entropy cannot decrease faster than it is exported through the boundary, because the boundary integral there represents the rate at which X-entropy is exported across the boundary. To see this, notice that, without the denominator U, the integral in (3) represents the rate of change of total X (energy, if X = heat) in the system; with the denominator it represents the rate of change of X-entropy. Without the denominator U, the boundary integral in (5) represents the rate at which X (energy, if X = heat) is exported through the boundary; with the denominator therefore it must represent the rate at which X-entropy is exported.

    That last “therefore” is not justified by anything. It is just declared, with no supporting arguments.

    Sewell is mistaken at a very basic level. Two systems in thermal contact exchange energy. They do not exchange entropy. Let’s unpack this using basic statistical physics.

    An isolated system has a fixed amount of energy E. The volume of phase space available to the system Ω depends on its energy. Usually there is more phase space at higher energies, so Ω(E) is an increasing function. Entropy is simply the logarithm of the phase space volume: S = S(E) = lnΩ(E). Thus, if energy were not fixed then the most likely state of the system (highest entropy) would be one with the highest (infinite) energy. But energy is conserved in an isolated system, so its entropy is whatever energy dictates it to be, S(E).

    Suppose we have two systems that interact with each other. Their energies E_1 and E_2 can now change, although their sum, E_1 + E_2, is still subject to the conservation law: it must remain constant. If the energy of system 1 has changed by ΔE_1, it must have been compensated by an equal and opposite change in the energy of system 2: ΔE_2 = −ΔE_1. This is why we say that energy flows from 1 to 2.

    If the energy of system 1 increases, so does its entropy: ΔE_1>0 means ΔS_1>0. This sounds like a good idea: the system is more likely to be found in a state with higher entropy, so, statistically speaking, it should prefer to go to the new state. But wait! The second system’s energy goes down (energy conservation!), and so does its entropy: ΔE_2<0 means ΔS_2<0. The net entropy change, ΔS = ΔS_1 + ΔS_2 could be either positive or negative. If it is positive, the system will move in that direction. Otherwise, it won’t; instead, it will move in the opposite direction: system 1 will reduce its energy and system 2 will increase its own, the resulting entropy change being positive.

    Which way it moves depends on how fast entropy rises with energy. Let us call the rate dS/dE some arbitrary name, e.g., inverse temperature. (Yes, that’s how temperature appears in statistical mechanics.) ΔS_1/ΔE_1 = 1/T_1 and ΔS_2/ΔE_2 = 1/T_2. The net change in entropy then is ΔS = ΔS_1 + ΔS_2 = ΔE_1/T_1 + ΔE_2/T_2. By virtue of energy conservation, ΔE_2 = −ΔE_1, we obtain this important result: ΔS = ΔE_1(1/T_1 − 1/T_2).

    If system 1 is hotter, then 1/T_1 < 1/T_2, so, in order to increase the overall entropy, energy must flow from 1 to 2: ΔE_1<0 and ΔE_2>0. That’s the 2nd law of thermodynamics in one of its formulations: heat flows from hot to cold.

    Note that in the above analysis (really basic textbook stuff) we have been mentioning energy flux but not entropy flux. Can we say that entropy flows from 1 to 2 in this case? No, we cannot. Although the entropies change in opposite directions (ΔS_1<0 and ΔS_2>0), we cannot say that entropy went from 2 to 1 because there is a net change in entropy: ΔS = ΔS_1 + ΔS_2 >0. More entropy “entered” system 1 than “left” system 2. Where did this overall increase of entropy come from? Nowhere. The question makes no sense because entropy is not a conserved quantity. There is no flux of entropy.
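
    (In numbers, for anyone who wants them — a trivially small sketch with temperatures I just made up:)

        # Move a little energy dE from system 1 (hot) to system 2 (cold) and do the
        # bookkeeping from the paragraphs above.
        T1, T2, dE = 350.0, 300.0, 1.0   # kelvin, kelvin, joule (illustrative values)
        dS1 = -dE / T1                   # energy leaves system 1
        dS2 = +dE / T2                   # the same energy enters system 2
        print(dS1 + dS2 > 0)             # True: total entropy goes up
        print(abs(dS1) == dS2)           # False: what "left" 1 is not what "entered" 2,
                                         # so there is no conserved entropy to have a flux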

    Sewell’s machinations in the AML article can be summarized as follows: what’s true is not new, what’s new is not true. His analysis of entropy change in heat transfer is entirely correct. Entropy tends to increase unless energy is exchanged through the boundary. His claim about “export of X-entropy” is meaningless because again, what is exchanged through the boundary is energy or particles (or some other conserved quantity). Entropy is not conserved, so there is no sense in talking about entropy flux.

    It bears repeating, again and again. In physics, the concept of flux (a.k.a. current) is reserved for quantities obeying a local conservation law. If a particle disappears from here, it must have gone somewhere nearby (and not to a galaxy far, far away). This is why flux is defined for only a limited number of physical variables such as energy, momentum, particle number, charge. See Wikipedia entry on flux for further examples. Entropy is not among them. (It can be under special circumstances: in adiabatic processes, when entropy is conserved. But generally it isn’t.)

  15. olegt: It bears repeating, again and again. In physics, the concept of flux (a.k.a. current) is reserved for quantities obeying a local conservation law. If a particle disappears from here, it must have gone somewhere nearby (and not to a galaxy far, far away). This is why flux is defined for only a limited number of physical variables such as energy, momentum, particle number, charge. See Wikipedia entry on flux for further examples. Entropy is not among them. (It can be under special circumstances: in adiabatic processes, when entropy is conserved. But generally it isn’t.)

    The shorthand way of illustrating this fact is to look at a system at temperature T_higher in which an amount of heat (energy), ΔQ, is transported into the surrounding environment which is at lower temperature T_lower.

    The amount of heat entering the environment is equal to the amount of heat leaving the system; energy is conserved.

    But ΔQ/T_higher is a smaller quantity than ΔQ/T_lower. Entropy is NOT conserved. DUH!

    The transport of energy from higher temperatures to lower temperatures was an observed fact that went into the early development of thermodynamics; so it became one of the “axioms” in the teaching of thermodynamics in many courses that teach thermodynamics from an axiomatic perspective.

    However, we now understand from statistical mechanics that temperature is the average kinetic energy per degree of freedom within a thermodynamic system comprised of particles, for example. Higher kinetic energies are associated with higher momentum transfers, so it is not surprising that energy flows in the direction in which the momentum transfers are taking place.

    Within solids, this transfer takes place in the form of phonons (quantized lattice vibrations of the matrix of atoms) and also, in the case of conductors (both electron or hole conductors), by way of the charge carriers that receive their momentum from interactions with the lattice (metals are usually better heat conductors than insulators unless lattice vibrations are exceptionally efficient, as in diamond for example).

    The conclusions regarding the efficiencies of heat engines (Carnot efficiency) can be arrived at either by casting the problem in terms of heat always flowing from higher temperature to lower temperatures or by casting the problem in terms of the fact that entropy must always increase. These two ways are equivalent; if heat (energy) flows in the direction of momentum transfers – i.e., from high temperature to low temperature – then it is automatically the case that, by definition, entropy increases. Sometimes it is easier to work a problem in one form than in the other.

    Just as a little aside to tie things together and link to other areas of physics; if we divide energy by temperature (the definition of entropy), a dimensional analysis shows that we are dividing energy by energy per degree of freedom. The result has units of number of degrees of freedom. In other words, the number of degrees of freedom increases in spontaneous thermodynamic processes; i.e., energy spreads around. For example, one of the most common sources of these additional degrees of freedom is the creation of photons flying off into the surrounding space. Photons are bosons, and bosons are not conserved.

  16. Non-technical summary

    I am afraid that my comments have been a bit heavy on the technical side. It was necessary because Sewell’s article purports to be technical. One has to get through all the calculus and thermodynamics to see what he has done. (I stripped away the former, but the latter is important.) And here is what the AML paper boils down to:

    A. Entropy in a closed system tends to increase.
    B. In an open system, it may decrease if there is flux of energy or particles through the boundary.
    C. Rename energy or particle flux into entropy flux.
    D. Order/information is entropy with a minus sign.
    E. Therefore order/information cannot increase in a system unless it is imported through the boundary.
    F. Mere energy exchange does not increase order/information.

    Points A and B are non-controversial. They are standard textbook material. Point E is what creationists want to claim. Let’s give him point D, as information is (more or less) entropy with a minus sign. Still, to get to his coveted conclusion (E), he has to rely on a purely semantic trick (C), renaming energy flux into entropy flux. Having done that, and having thus reached point E, he has the gall to claim that energy flux is not sufficient, even though this point (F) plainly contradicts point B used in his “proof.”

    The charitable interpretation of such “scholarship” is that Sewell is mistaken and does not see these glaring problems.

  17. Bottom line:

    Don’t plug your weight and your daily calorie intake into the Pythagorean Theorem in order to calculate your intelligence quotient. You’ll just end up looking stupid.

  18. olegt:
    Part II. Monkey Do,
    To his credit, Sewell checks whether his X-entropy tends to increase if the system is out of equilibrium. Indeed it does. Particle number on site 3 increases thanks to the flow of particles from sites 2 and 4: ΔN_3 = J_{23} + J_{43}. After rearranging the terms, ΔX is expressed as a sum of products like J_{23}(1/N_3−1/N_2). If site 2 has more particles than site 3 then particles flow from 2 to 3, so both factors are positive and so is the product. X-entropy thus increases in a closed system. The reasoning is identical to the thermal case.

    Oleg — I might be missing something, but I don’t see why Sewell’s X-entropy must increase. The conclusion S_t > 0 (equation 5) is a consequence of equation 2, which is a restatement of the second law (heat flows from hot to cold). But there is no analog of the second law for particle number or concentration — there is no known law of physics that requires that particles must flow from more concentrated to less concentrated. That behavior (spontaneous dilution) is often a consequence of the second law, but it is not a postulate itself. And in fact the opposite behavior happens spontaneously, e.g. during precipitation or crystallization.

    So, when Sewell substitutes N for T, there is no longer any justification for the premise in equation 2.

  19. Mike Elzinga: Don’t plug your weight and your daily calorie intake into the Pythagorean Theorem in order to calculate your intelligence quotient. You’ll just end up looking stupid.

    That’s not just a look…

  20. dtheobald: But there is no analog of the second law for particle number or concentration — there is no known law of physics that requires that particles must flow from more concentrated to less concentrated. That behavior (spontaneous dilution) is often a consequence of the second law, but it is not a postulate itself

    Think of it as an empirical postulate (just like the 2nd law was before the invention of statistical physics). It is backed up by simple theoretical models of diffusion.

    For example, in my discrete world introduced in Part I, particles hopping randomly from site to site will exhibit precisely this correlation. As the number of particles hopping from site 1 to site 2 is proportional to N_1, and the number hopping back is proportional to N_2, the current J_{12} is positive if N_1>N_2. Empirical grounds + simple models = reason strong enough to take it seriously. As long as we remember what the grounds are, it should be fine.

    And on further reflection, counting configurations shows that the model is viable: particles do tend to spread uniformly. So the model itself is not entirely silly. To be sure, Sewell incorrectly guessed the form of X-entropy, but it’s nothing we can’t fix. 🙂
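
    (Here is a quick Monte Carlo sketch of that hopping picture — my own toy code. Each particle hops to the other site at random, the net current tracks the difference in occupations, and the particles end up spread out evenly.)

        # Two sites; each step, every particle hops to the other site with probability p.
        # The expected net current 1 -> 2 per step is p*(N1 - N2).
        import random

        random.seed(1)
        p, N1, N2 = 0.05, 800, 200
        total_current = 0

        for _ in range(2000):
            to2 = sum(random.random() < p for _ in range(N1))   # hops 1 -> 2
            to1 = sum(random.random() < p for _ in range(N2))   # hops 2 -> 1
            total_current += to2 - to1
            N1, N2 = N1 - to2 + to1, N2 - to1 + to2

        print(total_current)   # clearly positive: net flow from the crowded site
        print(N1, N2)          # near 500/500: the particles have spread out evenly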

    And in fact the opposite behavior happens spontaneously, e.g. during precipitation or crystallization.

    Yes, that’s a counter example. However, we should not rush to kill a fledgling theory. It may be simply missing some ingredients, in this case the energy of interactions. Attractive interactions are responsible for the gas-liquid phase transition. A theory that combines configuration entropy with attractive interactions can, and does, describe this phase transition. See lattice gas.

  21. dtheobald: So, when Sewell substitutes N for T, there is no longer any justification for the premise in equation 2.

    The t subscripts on the S, U and the Q in Sewell’s paper are partials with respect to the time, t.

    So his equation (1) could be written

    ∂Q/∂t = −∇⋅J.

    If one takes ∇⋅(J/U) as the divergence of an “entropy flux,” we get

    ∇⋅(J/U) = (∇⋅J)/U – (J⋅∇U)/U^2.

    Using the divergence theorem on this divergence of the “entropy flux” gets us the surface integral in Sewell’s equation (4).
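
    (A quick sympy check of that product-rule identity, in one dimension, just to keep ourselves honest — my own snippet, not part of Sewell’s paper:)

        # 1-D version of the identity above: d/dx (J/U) = (dJ/dx)/U - (J dU/dx)/U^2
        import sympy as sp

        x = sp.symbols('x')
        J = sp.Function('J')(x)
        U = sp.Function('U')(x)

        lhs = sp.diff(J / U, x)
        rhs = sp.diff(J, x) / U - J * sp.diff(U, x) / U**2
        print(sp.simplify(lhs - rhs))   # prints 0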

    If we use Sewell’s heat capacity equation just below his equation (1), we can rewrite equation (3) as

    ∂S/∂t = ∫∫∫ ∂(ln U)/∂t dV.

    This is the rate of change of the logarithm of the temperature field within the volume.

    So, Sewell’s equation (5) would read,

    ∫∫ (J⋅n)/U dA = −∫∫∫ ∂(ln U)/∂t dV − ∫∫∫ (J⋅∇U)/U^2 dV.

    The second volume integral on the right-hand side is positive because of equation (2) in Sewell’s paper. So the two volume integrals on the right-hand side could possibly cancel, or one could change more than the other depending on what the heat capacity is (there are systems in which heat capacity is negative).

    Note also that, by pairing one of the U’s with the heat flux and the other with the gradient of U, we could rewrite that second integral as

    ∫∫∫(J/U)⋅∇lnU dV.

    In other words, the volume integral of the entropy flux dotted with the gradient of the logarithm of the temperature field.

    Now, if Sewell wants to substitute “X-entropy” in the form of carbon particles, presumably an outward particle flow on the left-hand side could cause temperature to drop provided that the particles interacted with stuff already inside the volume (matter interacting with matter). This is where the chemical potential comes in; one has to multiply particles by energy per particle (and it is not constant as concentrations change). But Sewell doesn’t appear to know about chemical potentials.

    Without the chemical potential, there is also the problem of the meaning of particle flux divided by temperature on the left hand side under the surface integral. It has no meaning in physics, and it is dimensionally incorrect for entropy.

    But if Sewell wants U to be a carbon concentration when he lets particles flow out, then he can still make the equation work, but it is now irrelevant because the equation no longer has anything to do with energy and entropy. Particle flow divided by particle concentration has units of particles per second divided by particles per volume, or just volume per second. Just calling it “X-entropy” doesn’t fix the problem.

    And, indeed, Equation (2) is irrelevant because, as you point out, depending on particle interactions, they could condense or avoid each other. But Sewell doesn’t appear to know anything about matter interacting with matter.

    If the equations were simply used for what they were set up for, one could play around with them and learn a few things. But one doesn’t even have to look at Sewell’s equations to know where he goes way off the rails. Entropy is not disorder, and there is no such thing as “X-entropy.” Sewell’s abstract and his story line in Section 1 are all one has to read to know he is way out of the ballpark. He just makes it worse after Equation (5).

    Silly paper.

  22. olegt,

    But Sewell doesn’t think that it’s a new empirical postulate — he thinks that his equation 2 is valid whether heat is flowing or carbon is flowing or computers are flowing, and he thinks that equation 2 is justified by the second law in all of these cases.

    And anyway, the 2LoT doesn’t have any missing ingredients, as far as we know. It holds absolutely, without the need to account for little annoying things like particle interaction terms. The 2LoT rules all.

  23. dtheobald,

    No disagreement here. His X-entropy is a bastardized version of configuration entropy. It is nothing but familiar statistical physics, of which the 2nd law is part.

    Sewell’s line of argument, summarized without the technical complications, is a stark reminder that ID scholars are all hat and no cattle. Their papers might contain impressive equations, but they boil down to the same bullshit their creationist predecessors peddled.

  24. Mike Elzinga: Particle flow divided by particle concentration has units of particles per second divided by particles per volume, or just volume per second. Just calling it “X-entropy” doesn’t fix the problem.

    Correction:

    Particle flow is in particles per second per area.

    So particle flow divided by particle concentration would have units of velocity.
    Still not entropy.
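
    (If anyone wants to automate this kind of units check, here is a tiny sketch — my own, not from anyone’s paper — that just tracks dimension exponents:)

        # Dimensions as exponent dictionaries: m = metres, s = seconds, n = particles.
        def divide(a, b):
            keys = set(a) | set(b)
            return {k: a.get(k, 0) - b.get(k, 0) for k in keys if a.get(k, 0) != b.get(k, 0)}

        particle_flux = {'n': 1, 'm': -2, 's': -1}   # particles per square metre per second
        concentration = {'n': 1, 'm': -3}            # particles per cubic metre

        print(divide(particle_flux, concentration))  # {'m': 1, 's': -1}: metres per second,
                                                     # a velocity -- nowhere near J/K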

  25. It is of course flattering to have my name up in lights at the top of this discussion, but I am also humbled by the level of expertise which is available here. The only direct experience of thermodynamics in my academic career came from teaching engineers and materials scientists, and my other published work comes nowhere near the second law. I have a distinct feeling of having rushed in where……
    (though I did leave it for several years!)

    Although this thread is very useful, I feel it would be helpful for a wider audience if the main criticisms of the Sewell papers could be collected together in print; the points here go far beyond those which I made, but do take some work to get through, and by the nature of the Internet, will fade rapidly. A short note, with a few references to this thread for more detail, could be submitted to the Mathematical Intelligencer.

    I can hardly submit a comment on my own piece; is there a volunteer out there? Olegt’s summary of the AML paper, the non-conservation of entropy, reiterating that entropy ≠ disorder, and the Felsenstein point about plants (and us) not being able to grow, are possible candidates for inclusion.

  26. That’s a good idea. Mike Elzinga also made an excellent post in one of the earlier threads.

    Not sure I have time at the moment – any volunteers?

  27. If Oleg wants to submit a note, perhaps with some of you other types as coauthors, I would be happy to lend my plant growth example for free in return for a nice acknowledgement in the Acknowledgements section.

    However, it might be hard for all of you to agree on a common approach. Also, you should add some simple explanation as well as all the equations. Which is where the plant growth example comes in, but there should be a comparably simple summary of the X-entropy issue in addition to the derivations.

  28. We could write it up. (Not immediately: I have a month of heavy travel ahead of me.) The main question for me is where do we submit it? MI has just published Bob’s piece. I can’t see which research physics journal would be interested in seeing yet another creationist paper taken apart. American Journal of Physics is a pedagogical publication that covers the subject once in a while, so I can ask Jan Tobochnik if they would be interested in running another article.

    On the far side, we could submit it to BIO-Complexity, just for the sheer fun of it. They should like that: it will increase the number of papers they print annually by a third. Or more if Sewell decides to reply. 🙂

  29. bobl: Although this thread is very useful, I feel it would be helpful for a wider audience if the main criticisms of the Sewell papers could be collected together in print; the points here go far beyond those which I made, but do take some work to get through, and by the nature of the Internet, will fade rapidly. A short note, with a few references to this thread for more detail, could be submitted to the Mathematical Intelligencer.

    I have been retired for some time now, so it is really up to the younger folks coming along to bring themselves up to speed on the misconceptions and misrepresentations that have been propagated primarily by the ID/creationists and by a few popularizers of physics.

    If someone wants to do it, consider the fact that the audience may not be familiar with thermodynamics, let alone the subtle details of the misconceptions we in the physics community have been dealing with for decades.

    Thus, one needs to stick to the most fundamental ideas and avoid insider jargon. The first and most obvious critiques of Sewell’s paper can deal with Sewell’s misconceptions about the concepts of entropy and the second law. These are so obvious in his abstract and in Section 1 of his paper that I didn’t even bother to look at his math; and I deliberately avoided the math when attempting to explain the misconceptions to lay persons. I learned that lesson of not using math back in the 1970s after some wise input from friends.

    If someone wants to deal with the math, I would suggest they do what I did in my last post above. Sewell’s paper was written in a way that obscures his misuse of his “X-entropies” in his equation. Rewrite the equation as I did and then do a dimensional analysis of J/U when Sewell plugs in particles per second per unit area for J and concentration in particles per unit volume for U.

    The reason for doing this dimensional analysis is because it is one of the most fundamental checks we ask beginning students to do when checking their work. Even high school students are taught this. The fact that Sewell didn’t do this should be telling to any readers. As I said above, one doesn’t plug weight and calorie intake into the Pythagorean Theorem to calculate one’s IQ. Yet Sewell made just such a stupid error.

    But the misconception of equating entropy with disorder must be dealt with firmly – almost dogmatically – because it is one of the most common misconceptions that the ID/creationists have propagated and have kept insisting on injecting into every argument they make. It is dead wrong, and that point must be hammered home.

    The other issue is the second law itself. ID/creationists have propagated the misconception that the second law says the universe is decaying and everything is coming all apart. This is simply wrong; matter has been condensing since the Big Bang. In order for matter to clump together, energy must be released and spread around.

    Energy flows in the direction of net momentum transfers, i.e., in the direction of decreasing temperature (temperature being a measure of the average kinetic energy per degree of freedom in systems made up of energetic particles). I don’t think one has to get into relativity or quantum mechanics to get these notions across to lay audiences.

    There are also the issues around the misconceptions about everything coming all apart and decaying. ID/creationists point to death, rust, tornadoes in junkyards, and a host of other familiar phenomena as “proof” of their misconceptions about the second law.

    This means that the binding energies at the various levels of condensed matter must be mentioned also. Nuclear binding energies are on the order of millions of electron volts (eV). Chemistry involves binding energies on the order of 1 or 2 eV (think of a dry cell battery). Solids such as iron have binding energies on the order of 0.1 eV. Life as we know it exists within the energy window of liquid water, 0.01 – 0.02 eV.

    Living organisms such as ourselves think of the freezing and boiling points of our most important compound, water, as extreme. Living organisms live in a very narrow temperature window in which the matter of which they are made up is soft. This means that the average kinetic energies of the atoms and molecules that make up living systems are comparable to the binding energies that hold these systems together. That is why ID/creationists think everything decays; and this feeds their misconceptions about the second law.

    One of the important points I have made to students and to lay audiences is the notion of things falling into wells and staying there. This is really basic stuff, but it is almost totally overlooked in the discussions of the second law. Matter interacts with matter. Stickiness in matter requires that energy be released as atoms and molecules bind; and this gets at one of the most fundamental properties of our universe as well as the meaning of the second law. My last talk on this was given at a Science Café a couple of years ago. The PowerPoint is here, but the audio recorder died about 30 minutes before the talk ended.

    As I said, I am retired. I have been dealing with this issue since the 1970s, and many of my fellow colleagues stayed out of the battle, leaving it for the biologists to fight. I would like to see the younger generation of physicists making themselves thoroughly familiar with the misconceptions spread by the ID/creationists, and learning the lessons I have learned along the way about presenting physics concepts to lay audiences. Warning: it is not easy. The temptation to show off one’s math is strong and is really pandering to one’s colleagues. Don’t do it.

    I have also developed the tactic of not allowing myself to become a name that ID/creationists recognize or to whom they want to attach themselves. I have already had enough hassles with crackpot hangers-on during my career, and I would rather be a nobody coming out of nowhere when it comes to taking down ID/creationists. ID/creationists and other crackpots will eat up all the time that you have if you let them. Most of us in the research community would rather stick to our research; yet we do have some civic responsibility to educate the public. But given the nature of crackpots, one has to be careful how one goes about it.

    The younger generation also needs to become thoroughly familiar with the history and the tactics of the ID/creationist movement as well as the famous court cases. Read the transcripts and the various books that have been written on this.

  30. olegt: We could write it up. (Not immediately: I have a month of heavy travel ahead of me.) The main question for me is where do we submit it? MI has just published Bob’s piece.

    I was actually suggesting a second MI submission. Springer’s introductory text for a ‘Viewpoint’ says:

    The Viewpoint column offers readers of The Mathematical Intelligencer the opportunity to write about any issue of interest to the international mathematical community.

    Disagreement and controversy are welcome… Viewpoint should be submitted to one of the editors-in-chief, Chandler Davis and Marjorie Senechal.

    I take it from this that further contributions, be they anti-, in support, or “well he’s partly right, but the real story is…” are more or less invited. Any volunteer could make an informal enquiry before committing her/his time; Chandler Davis has been the editor looking after this so far.

  31. (I wrote this on Cornelius Hunter’s blog, which might be made into something useful, unless of course I’ve made an error!)


    Here is what I hope is a clear commentary on his argument as presented in his abstract:


    It is commonly argued that the spectacular increase in order which has occurred on Earth does not violate the second law of thermodynamics because the Earth is an open system, and anything can happen in an open system as long as the entropy increases outside the system compensate the entropy decreases inside the system.


    Specifically, the argument is that we see local decreases in entropy on earth as a result of solar radiation, and this gain is “bought” at the cost of increased entropy in the sun itself (it is running out of fuel).

    However, if we define “X-entropy” to be the entropy associated with any diffusing component X (for example, X might be heat), and, since entropy measures disorder, “X-order” to be the negative of X-entropy, a closer look at the equations for entropy change shows that they not only say that the X-order cannot increase in a closed system, but that they also say that in an open system the X-order cannot increase faster than it is imported through the boundary.

    Sounds reasonable.

    Thus the equations for entropy change do not support the illogical “compensation” idea; instead, they illustrate the tautology that “if an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable”.


    Again, sounds reasonable. But we are of course talking about an open system in which solar radiation enters through the “boundary”.

    Thus, unless we are willing to argue that the influx of solar energy into the Earth makes the appearance of spaceships, computers and the Internet not extremely improbable, we have to conclude that the second law has in fact been violated here.


    Now, let us accept, entirely for the sake of argument that “the appearance of spaceships, computers and the Internet” are, in fact, extremely improbable, unless an Intelligent Designer first made living things that were capable of, in turn, designing them, and that life on earth, including intelligent beings capable of designing spaceships etc, was, in fact brought about by an Intelligent Designer.

    And we humans figured out that there is a 2nd Law of Thermodynamics that holds throughout the universe in which we find ourselves.

    This is accepted by Granville Sewell, and it is accepted by biophysicist Cornelius Hunter: nothing in this universe violates the 2LoT.

    Now, let’s consider not spaceships and the internet, but an oak tree. A tree is a fantastic example of “order” under Sewell’s definition, far more wonderful than a spaceship, because it actually grows spontaneously from an acorn, rather than being laboriously assembled piece by piece.

    That acorn contains all the instructions necessary to convert water and carbon dioxide from the environment into tree-stuff, including the assembly of complex molecules tough enough to withstand the enormous lateral wind forces, an ingenious system for raising water from the roots to the crown, where leaves are beautifully spread to catch the solar radiation that enables it to convert carbon dioxide and water molecules into complex energy-storing sugar molecules. Way smarter even than a solar-panelled satellite, it is even self-adjusting, so that if it is stressed on one side, it toughens the trunk on that side, and if it is shadowed by some new object, it can extend its growth towards the sun, etc.

    So let’s re-write Granville’s conclusion:

    “Thus, unless we are willing to argue that the influx of solar energy into the Earth makes the appearance of oak trees not extremely improbable, we have to conclude that the second law has in fact been violated here.”

    Notice that I am not arguing here that the acorn wasn’t designed. I am simply demonstrating to you that the influx of solar energy is precisely what makes the construction of a highly complex oak tree highly probable.

    Put a solar radiation-proof cover on that acorn, and no matter how beautifully designed it is, it won’t grow into an oak tree. What allows it to do so, with all that beautiful X-order (specifically, all that beautiful sugar, full of stored solar energy in usable form, the vast bulk of which did not exist before the acorn germinated), is that “influx of solar energy”.

    I’m certainly willing to argue that, and I’m sure you are. The alternative is to insist that oak trees growing from acorns would violate the 2nd Law, and therefore are no more likely to grow from acorns than spaceships are likely to grow from primordial goo.

    Except that they do.

    Therefore Sewell’s argument must be wrong.

    (Remember – I am not here arguing that evolution is probable – I’m merely demonstrating that Sewell’s argument that it cannot have happened because it would violate the 2nd Law is clearly false, because exactly the same argument tells us that “oaks grow from acorns” is also false.)

  32. Hey, I just found Granville’s book “In The Beginning- and Other Essays on Intelligent Design”- found it in my nested hierarchy of a mess. Maybe I can make something out of it and post that here- page 63 starts with the 2lot…

    I’ll be back—– 

  33. Elizabeth:

    Notice that I am not arguing here that the acorn wasn’t designed. I am simply demonstrating to you that the influx of solar energy is precisely what makes the construction of a highly complex oak tree highly probable.

    Put a solar radiation-proof cover on that acorn, and no matter how beautifully designed it is, it won’t grow into an oak tree. What allows it to do so, with all that beautiful X-order (specifically, all that beautiful sugar, full of stored solar energy in usable form, the vast bulk of which did not exist before the acorn germinated, is that “influx of solar energy”.

    The basic understandings of this phenomenon are rooted in condensed matter physics and organic chemistry.

    There is a key point about the “bathing in sunlight” that is obscured by referring to it as an “influx of solar energy.”  The “influx” notion lends itself to misuse by the ID/creationists who imply that we are saying that solar energy flows through matter organizing it.

    That is completely wrong.  Solar energy maintains the molecules of life within an extremely narrow window of energy (on the order of 0.01 to 0.02 electron volts) in which the kinetic energies are very near the binding energies of many of the organic structures.  In fact, other energy sources, like thermal vents, can do the same thing: provide a narrow energy window and thermal gradients in which processes can take place.

    This permits all sorts of things to happen.  Slightly higher energy photons, for example, can now nudge chemical reactions to take place.  The patterns are constrained and permitted by the quantum mechanics in these organic structures as well as by other constraints placed on these structures by the environment in which they are immersed. Thermal gradients, electrical gradients, photoelectric effects, chemical reactions; these all lead to the flow of matter and energy through these systems that are held within a narrow energy window and “tickled” by larger perturbations that keep processes going.

    The phenomena of hypothermia and hyperthermia are clues to what is going on in living systems.  The temperatures at which these occur tell us about the window of energies in which things like nervous systems can work.  If they are too cool, they are bound too tightly to work.  If they are too warm, they start going chaotic and coming apart.  Nervous systems are but one of hundreds of examples of narrow-energy-window processes that take place in the organic structures of living organisms.

    Sunlight is not flowing through things and organizing them; solar radiation provides the stabilizing heat bath and the tickling photons that allow things to happen.

    There is a reason that DNA structures have been referred to as quasi-crystals.  Crystals self organize in limited ways; and they are bound relatively tightly. But quasi-crystals have virtually unlimited ways in which they can self-organize and allow evolving structures to build on underlying patterns.  While the binding energies of DNA are large relative to the structures built on top of them, maintaining them in the narrow energy window close to the limits of the binding energies of evolving structures is what permits so many things to happen.

    The second law of thermodynamics underlies all this.  In order for atoms and molecules to bind together, energy must be released and energy must flow in the direction of thermal gradients, high to low.

    Entropy is not disorder; and claiming that “more evolved” structures have lower entropy is another glaring misconception.  A young animal has less entropy than an adult simply because it is smaller: total entropy is proportional to volume.  That does not make the young animal “more organized” than the adult.  Nor is it “more advanced,” nor does it contain “more information.”

    These misconceptions and misrepresentations belong to the ID/creationists, going back to Henry Morris and perhaps even earlier to A. E. Wilder-Smith.  They are wrong, and no one should pander to or adopt these misconceptions.

  34. OK, my take on this: yes, Granville equates entropy with disorder. Once that is done, so is his argument.

    Yes, Granville, sometimes, or even most of the time, there is a correlation. Yes, the dispersion of heat/energy, if not constrained, is relatively random, i.e., disorderly.  But that does not mean entropy = disorder.

    Entropy is just a measure of energy flow/dispersal, given the variables: states, arrangements, distribution.
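
    For reference, the two standard definitions are the Clausius form and the Boltzmann form (textbook statements, nothing peculiar to this thread):
    [latex]dS = \frac{\delta Q_{\mathrm{rev}}}{T}, \qquad S = k_B \ln{\Omega},[/latex]
    where Ω is the number of microstates. Neither definition mentions “disorder.”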

    In Granville’s essay “Can ANYTHING Happen in an Open System?”, he says:

    The first formulations of the second law were all about heat: a quantity  called thermal “entropy” was defined to measure the randomness, or disorder, associated with a temperature distribution, and it was shown that in an isolated system this entropy always increases, or at least never decreases, as heat diffuses and the temperature becomes more and more randomly (more uniformly) distributed.    

  35. I’m planning on submitting something to MI along these lines.  My point all along has been that Sewell’s most fundamental error is, naively and bone-headedly, the substitution of a concentration for heat and temperature in the classical definition of entropy.  Another way of saying this is to simply note that after Sewell’s calculations are done, he ends up with an “entropy” change in units of something other than energy per temperature.  I think another paper to MI is indicated, because so far the criticisms, though correct, are only part of the story AND, more importantly, they are not the sort of thing that a nonspecialist can follow.  We need a criticism that is clear, concise, and basic enough that a good high-school science student can follow it.  The idea of “checking your units” is so fundamental and easy, and it quite clearly shows where Sewell went into La-La land.  This is a point that my biochem undergrads can appreciate, and they don’t need to know multivariate calculus to see it.

  36. dtheobald wrote

    I’m planning on submitting something to MI along these lines.  My point all along has been that Sewell’s most fundamental error is, naively and bone-headedly, the substitution of a concentration for heat and temperature in the classical definition of entropy.  Another way of saying this is to simply note that after Sewell’s calculations are done, he ends up with an “entropy” change in units of something other than energy per temperature.  I think another paper to MI is indicated, because so far the criticisms, though correct, are only part of the story AND, more importantly, they are not the sort of thing that a nonspecialist can follow.  We need a criticism that is clear, concise, and basic enough that a good high-school science student can follow it.  The idea of “checking your units” is so fundamental and easy, and it quite clearly shows where Sewell went into La-La land.  This is a point that my biochem undergrads can appreciate, and they don’t need to know multivariate calculus to see it.

    My 18 year old pretty well pointed that out this afternoon, when he described his response to an online creationist who had come up with the 2LoT argument.  I’ll see if I can get him to post it here.  It was short and sweet.

  37. The t subscripts on the S, the U, and the Q in Sewell’s paper are partials with respect to the time, t.

    Actually, they’re not. Sewell never defines his notation in his AML or MI pieces (dadgummit), but I just figured this out by looking at Ch 2 of his book (where his entropy “derivation” originated). U(x,y,z,t) is a temperature distribution over x,y,z points in a 3D solid R at time t. So, S_t is not a partial with respect to time, it is the entropy change at time t (entropy change as a function of time, del S(t)).

    This makes everything easier to talk about.  And it means that, when you sub in concentration for Q and U and integrate over the volume, you end up with a dimensionless delS.  
     

  38. Actually, they’re not. Sewell never defines his notation in his AML or MI pieces (dadgummit), but I just figured this out by looking at Ch 2 of his book (where his entropy “derivation” originated). U(x,y,z,t) is a temperature distribution over x,y,z points in a 3D solid R at time t. So, S_t is not a partial with respect to time, it is the entropy change at time t (entropy change as a function of time, del S(t)).

    Well, if true, that would suggest that Sewell is even more out of touch than I gave him credit for.

     

    In physics and most engineering applications, a flux vector is a flow of something in units of that something per unit area per unit time (e.g., mass per unit area per unit time, or charge per unit area per unit time). The important part is the “per unit area per unit time” when we are talking about flow rates.

     

    When you take the divergence of such a flux vector (derivative with respect to spatial variables), you get units of something per unit volume per unit time.

     

    When you then integrate over a volume of such a divergence, you get units of whatever the something is per unit time.

     

    The divergence theorem converts this volume integral to a surface integral of the flux vector dotted with the outward unit normal to the surface enclosing the volume.  So you have this something per unit area per unit time integrated over an area, which again gives you the something per unit time.
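
    In symbols, the theorem being used here is just (standard vector calculus, stated only to fix the bookkeeping)
    [latex]\int_V \nabla\cdot\mathbf{J}\; dV = \oint_{\partial V} \mathbf{J}\cdot\mathbf{n}\; dA,[/latex]
    and since the flux J carries units of something per unit area per unit time, both sides carry units of that something per unit time.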

     

    I don’t have access to the book from which Sewell borrowed his equations, but if what you are saying is true, I would suggest that Sewell didn’t understand what he was reading.

     

    An entropy flux is sometimes used in problems if the problem lends itself to such a calculation, and if this way of doing the problem makes it easier.  So, in such a case, the flux vector would have units of entropy per unit area per unit time.  That could also be expressed in something like watts per Kelvin per unit area.  An energy flux, for example, is usually expressed in watts per unit area (e.g., W/m^2) because watts is already energy per unit time (J/s).

     

    The divergence theorem is pretty basic math, and it is usually taught in a third semester of the calculus sequence.  It is not really very hard to derive.

  39. Hmmm.

    I have been busy with other things and haven’t been keeping up with the changes to this site. 

  40. Never mind, you’re right, it’s the partial wrt time.  That’s what I get for reading this Sewell stuff while drinking champagne — but it does make it more fun.  

  41. So Mike — I’m trying to get this units thing sorted out, so please bear with me. Taking Sewell’s equations as they apply to diffusion of heat:

    J is heat flux, in energy per area per time.

    Q_t is div J, in energy per volume per time.

    Q is heat density, in energy per volume.

    U is an absolute temperature distribution at a given time, in (say) degrees.

    U_t will then be degrees per time.

    C is a heat capacity, in energy per degree per mass.

    Rho is a density, in mass per volume.

    So when Sewell says Q_t = C rho U_t, that makes sense, as both sides of that equality are in energy per volume per time.

    Q_t/U should then be units of energy per volume per degree per time.

    Integrating Q_t/U over volume will finally give an entropy rate, in energy per degree per time. 

    Now, where I’m a bit confused is when Sewell subs in a concentration distribution for U.

    If U is now in units of particles per volume, then U_t is particles per volume per time.

    Sewell states that C rho = 1, and that Q_t = U_t, so Q_t should be in units of particles per volume per time.

    That implies, then, that J is now a particle flux, in units of particles per area per time.

    Q_t/U should now be particles per volume per time divided by particles per volume, or simply per time.

    But we still have to do the integration to get Sewell’s “X-entropy” (his equation 3). Integrating this Q_t/U over volume will then result in something with units of volume per time.

    Right?

    I believe this is what Oleg stated at one point.
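
    Just to nail down the bookkeeping, here is a minimal dimensional-analysis sketch using the pint unit library (the numbers are placeholders; this is only a units check under the assignments listed above, not Sewell’s actual calculation):

        # Minimal units check with the pint library (pip install pint).
        # Numbers are placeholders; only the dimensions matter.
        import pint

        u = pint.UnitRegistry()

        # Heat case: J is a heat flux, U an absolute temperature.
        J_heat = 1.0 * u.watt / u.meter**2            # energy / area / time
        U_temp = 300.0 * u.kelvin
        # divergence contributes 1/length; the volume integral contributes volume
        S_rate = (J_heat / U_temp) / u.meter * u.meter**3
        print(S_rate.units)                           # watt / kelvin, i.e. (J/K) per second

        # Sewell's substitution: J is a particle flux, U a particle concentration
        # (particle number treated as dimensionless).
        J_part = 1.0 / (u.meter**2 * u.second)        # particles / area / time
        U_conc = 1.0 / u.meter**3                     # particles / volume
        X_rate = (J_part / U_conc) / u.meter * u.meter**3
        print(X_rate.units)                           # meter ** 3 / second

    The first result is a proper entropy rate; the second is a volume per unit time, which is exactly the mismatch under discussion.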

  42. Sewell’s notation is inconsistent.  I suspect he didn’t derive any of the equations but simply “borrowed” them without comprehension.

    What I did when I took the divergence of J/U was correct.  Now entropy COULD have units of joules per Kelvin (it usually does in SI units).  But the empirical temperature scale itself was developed independently of the units for energy.  From statistical mechanics we find that, for a system comprised of energetic particles, temperature comes out in units of kinetic energy per degree of freedom.

    So now entropy COULD be measured in a pure number, the number of degrees of freedom.  Then an entropy flux would be in number of degrees of freedom per unit area per unit time, or simply per unit area per unit time.

    Then the divergence of that would be in units of per unit volume per unit time.  Integrating that over a volume would give units of per unit time.  Applying the divergence theorem converts the volume integral to a surface integral of the flux vector, which is in units of per area per unit time, and that again gives units of per unit time.

    If Sewell plugs in the number of carbon particles per unit area per unit time as the flux vector, and he then divides that by a carbon concentration in units of particles per unit volume, then his “X-entropy” has units of length over time, or velocity.

    Taking the divergence of this gives units of per unit time.  Then integrating over volume gives volume per unit time.  Similarly, integrating length per unit time over an area also gives volume per unit time.

    So, as I said before, simply calling it “X-entropy” doesn’t fix it.

  43. Mike — so I’m still a bit confused.  You seem to be saying that J/U is Sewell’s X-entropy (which would have units of velocity), but Sewell states that his “X-entropy” rate is defined by his Equation 3, which is the volume integral.  So am I correct that his X-entropy is in units of volume per time?  

  44. Doug,

    Have a look at Monkey Do. I explained there that Sewell’s X-entropy in a discrete form has the increment
    [latex]dX = \sum_k dN_k/N_k,[/latex]
    or
    [latex]X = \sum_k \ln{N_k}[/latex]
    after integration. This choice gives rise to a quantity that tends to increase in a closed system but has nothing to do with the number of microstates of the system. We will see that this incorrect choice also leads to the problem with dimensionality that you noted.

    As you can see from my note, configurational entropy of carbon has increment
    [latex]dS = -\sum_{k} dN_k \, \ln{N_k}[/latex].
    After integration,
    [latex]S = -\sum_k N_k \ln{N_k} + \mathrm{const}[/latex].
    This discrete expression can be converted to an integral over a continuous variable if we express the number of particles in terms of density and volume element,
    [latex]N_k = n_k \Delta V[/latex].
    Then we obtain the following result for entropy:
    [latex]S = -\sum_k \Delta V \, n_k \ln{(n_k \Delta V)} = -\int dV \, n(\mathbf r) \ln{n(\mathbf r)} + \mathrm{const}[/latex].
    This expression is dimensionless, as entropy should be if we treat it as the logarithm of the number of microstates.

    Let’s do the same for Sewell’s X-entropy.
    [latex]X = \sum_k \ln{N_k} = \sum_k \ln{(n_k \Delta V)} = (1/\Delta V)\int dV \, \ln{(n(\mathbf r) \Delta V)}.[/latex]
    We thus obtain, after dropping a term that does not depend on the density,
    [latex]X = (1/\Delta V)\int dV \, \ln{n(\mathbf r)}.[/latex]
    As you can see, this is the same expression as in Sewell’s paper, but now with a prefactor that makes it dimensionless. The prefactor is the inverse volume of a discretized cell that we used to derive Sewell’s X-entropy. Sewell’s expression must be multiplied by that factor to get the correct dimensionless expression for X-entropy.

    While we see how the units can be fixed, the result makes no sense. It should not depend on how we discretize the continuous system. But in this case it does: if we choose smaller cells then X-entropy increases. This is another way to see that X-entropy is unphysical. It was a wrong guess on Sewell’s part. The correct guess would be configurational entropy, which is physical and behaves sensibly even if we use the continuum description.

  45. Sewell’s formula gives the rate of X-entropy change in units of volume per time, so his X-entropy has the units of volume. As I explain below, this is wrong and a more proper derivation of his X-entropy would yield the correct units at the expense of multiplying his expression by 1/ΔV, where ΔV is the volume of a discretized cell.

    Although the units are now fixed and X-entropy is dimensionless, this quantity becomes dependent on the volume of the cells that we use. That indicates that the quantity is unphysical. Someone who does a calculation with a cell of 1 micron will get a different result from someone who uses a finer discretization of 0.5 micron. Physical quantities do not behave in this way. Sure enough, there is no such problem with configurational entropy.

    These considerations demonstrate that the problems with Sewell’s X-entropy that you and I uncovered are related.

  46. That is one of the inconsistencies in Sewell’s notation and use of the integrals; and it is one of the things – among many – that reveal that Sewell doesn’t know what entropy means nor does he know what the integrals mean. 

    Look at my taking of the divergence of J/U in my math above. When one integrates both sides over the volume, the second term on the right-hand side of my equation is Sewell’s integral in his Equation (3) after substituting his Equation (1).  That should have units of entropy per unit time, which it would if Sewell hadn’t mislabeled it in Equation (1).

    But Sewell calls the divergence of J in his Equation (1) the rate of change of energy, by his use of the subscript t.  That’s wrong.  And he also refers to it as “energy heat density,” which is also wrong.  It’s an energy flux density in units of energy per unit volume per unit time.

    So Equation (3) has incorrect units, as can be readily spotted when one knows Clausius’s definition of entropy and takes the derivative of it with respect to time, keeping the temperature constant.  In Sewell’s notation, Equation (3) becomes entropy times volume per unit time.

     

    Sewell’s “X-entropy” flux = J/U, and it has units of velocity when he plugs in particles and particle concentration.  If he dots that with the outward normal to an enclosing surface and integrates over the surface, he gets volume per time.
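
    Written out, the unit bookkeeping from the preceding paragraph is
    [latex]\left[\frac{\mathbf J}{U}\right] = \frac{\mathrm{particles}\ \mathrm{m^{-2}\,s^{-1}}}{\mathrm{particles}\ \mathrm{m^{-3}}} = \mathrm{m\,s^{-1}}, \qquad \left[\oint_{\partial V}\frac{\mathbf J}{U}\cdot\mathbf{n}\; dA\right] = \mathrm{m^{3}\,s^{-1}},[/latex]
    which is a volume per unit time, not an energy per temperature per unit time.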
