Granville Sewell vs Bob Lloyd

Bob Lloyd, professor emeritus of chemistry at Trinity College Dublin, wrote an opinion article in The Mathematical Intelligencer (MI) commenting on Sewell’s not-quite-published AML article. This was mentioned in a previous thread, where Bob briefly commented. Granville was invited to participate but never showed up.

In response to Lloyd, Sewell submitted a letter to the editor. On the advice of a referee, his letter was rejected. (Rightly so, in my view. More on that later.) Sewell has now written a post on the Discovery Institute’s blog describing his latest misfortune. The post contains Sewell’s unpublished letter and some of the referee’s comments. I invite you to continue the technical discussion of Sewell’s points started earlier.

Sewell’s reply to Lloyd deals mostly with “X-entropies:”

Lloyd cites my example, given in my letter to the editor in a 2001 Mathematical Intelligencer issue, of carbon and heat diffusing independently of each other in a solid… He proceeds to show that these “entropies” are not independent of each other in certain experiments in liquids. This seems to be his primary criticism of my writings on this topic. I may have left the impression in my 2001 letter that I believed these different “X-entropies” were always independent of each other, but in the more recent AML paper, I wrote:

He then quotes from his AML paper and rambles on for another eleventeen paragraphs. Read at your own risk.

Here is my brief take on this. I will expand on it in the comments.

“X-entropies” are not new quantities. Sewell does not define them in any of his papers and blog posts, but my reading is that they are either other thermodynamic variables (e.g., chemical potential or pressure) or they are regular thermal entropies of different parts of a large system (configurational entropy of carbon and entropy of lattice vibrations in Sewell’s example). Either way, the 2nd law is not threatened. In the latter case, if the two subsystems can interact and exchange energy then the entropy in one can decrease; the decrease is amply compensated, and then some, by an increase of entropy in the other subsystem. We saw this in the previous thread with an ice cube in a glass of water and with spins in a ferromagnet. Compensation works. Sewell has no leg to stand on.

151 thoughts on “Granville Sewell vs Bob Lloyd”

  1. In posts I have made on Panda’s Thumb in response to Granville Sewell’s assertions, I have noted that a critical part of his argument is that there is insufficient connection between the processes that increase entropy and those that decrease entropy (by evolving complex life forms). Speaking of the biosphere (or maybe of the whole Earth) he argues that processes there that decrease entropy cannot be explained because

    if all we see entering is radiation and meteorite fragments, it seems clear that what is entering through the boundary cannot explain the increase in order observed here.

    In doing this, Sewell blithely ignores that the “radiation” is the flow of energy from the sun which powers (almost all) life on earth. I have pointed out that if his argument is correct, plants can’t grow. Others have made comparable objections to Sewell’s work earlier, but I think this may be a bit clearer.

    Even if Sewell’s equations are all perfectly good, he has ignored facts known to good middle-school science students.

  2. On one of the discussions of Sewell’s paper (on Panda’s Thumb, I think), I had noted that Sewell is doing the equivalent of taking something like the Pythagorean Theorem, declaring it to be universally true, and then plugging in weight and calories to calculate IQ.

    As near as I can tell about Sewell’s misconceptions from his use of “X-entropies,” he is simply using anything he can think of to calculate “disorder” because that is what he thinks entropy is all about. However, he is clearly not aware of the fact that the flow of particles into or out of a thermodynamic system involves the chemical potential. Matter interacts with matter; thus the flow of particles into or out of a system involves particle interactions that affect the energy within the system and contribute to the change in the number of energy microstates from which entropy is calculated.

    Sewell’s example of carbon for one of his “X-entropies” is a dead giveaway. He apparently thinks that simply pushing particles into his equation is increasing disorder somewhere, and that is what he takes to be an increase in entropy. But he cannot do this without accounting for the energy per particle, which is the chemical potential. And, as is obvious to anyone who actually thinks about what is going on inside a system, the chemical potential is going to be temperature dependent because temperature tells us something about the average kinetic energies per degree of freedom within the system.

    “Configurational entropy” is a concept that is fraught with danger. The arrangement of particles has nothing to do with entropy if it does not involve the transfer of energy into or out of the system. For example, here is someone who is puzzling over counting issues and ends up double counting; once for energy and once again for rearrangements.

    If a rearrangement involves energy in the movement of particles, the counting is already done by counting how that energy gets distributed among energy microstates. If no energy is required to rearrange something, no energy gets distributed among energy microstates.

    Again, all this is handled in physics and chemistry by using thermodynamic potentials like the Helmholtz function or the Gibbs function, which take into account the work done against ambient pressure and/or the changes in internal energy of the system as the constituents of the system undergo phase changes and condense or come apart.

    This is undergraduate and graduate thermodynamics and statistical mechanics we are discussing here. Sewell made an egregious error in thinking he had trumped the physics community without ever picking up a classic thermodynamics text like Zemansky, or a thermodynamics and statistical mechanics text like Reif, or Kittel, or Fermi, or Sommerfeld, or Tolman. The very least Sewell should have done would have been to check with some physicists, who would have set him straight. But he was just too arrogant to do that. He had this big “gotcha” that he just knew would overturn the world of physics. But notice that he did NOT submit his paper to Physical Review Letters.

    Notice also that the kvetching over on UD, about the “censorship” of Sewell’s answer to Lloyd, never mentions the fact that Sewell sued Elsevier and got something like $10,000. That is not how peer review works in the real world of science. Sewell has nothing to complain about. Elsevier got burned by Sewell once; but we hope not again.

  3. Joe Felsenstein: In posts I have made on Panda’s Thumb in response to Granville Sewell’s assertions, I have noted that a critical part of his argument is that there is insufficient connection between the processes that increase entropy and those that decrease entropy (by evolving complex life forms).

    That’s what his arguments boil down to. In the latest reply he computes a decrease in “poker entropy” resulting from a change from a three-of-a-kind hand to a full house and remarks:

    Of course, a decrease in probability by a factor of only 15 leads to a very small decrease in entropy, which is very easily compensated by the entropy increase in the cosmic microwave background, so there is certainly no conflict with the second law here. 


    I can see Sewell typing that with a smug expression on his face. “Of course! — the reader would exclaim — it is ridiculous to think that the cosmic microwave background has any role in card games. The microwave radiation and the cards are not coupled.”

    But Sewell’s example falls apart if one examines it carefully. What he does amounts to cherry picking. There is no entropy decrease to begin with.

    If he were to repeat the card experiment many times, he would not receive a full house every time. Sometimes he would get nothing, sometimes he would get back three of a kind, occasionally a full house, and so on. Averaged over the ensemble, the entropy change would be 0. There is no need to explain where the entropy went.

    Sewell’s error is in picking one particular outcome and declaring it representative of the ensemble. This is no different from pointing to a lottery winner and saying that the odds of winning it are close to 1. It’s a novice error. Journal editors are doing Granville a favor by not publishing his awful arguments. He makes an ass of himself by publishing them on the internet.
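
    Here is a minimal Monte Carlo sketch of this point in Python (the hand classifier and sample size are my own illustrative choices, not anything from Sewell): the Shannon entropy of the distribution of hand types is a property of the ensemble of deals, and it comes out the same no matter which particular hand happens to turn up.

        # Entropy belongs to the ensemble of deals, not to any single hand.
        import random
        from collections import Counter
        from math import log2

        def hand_type(hand):
            # classify a 5-card hand by its rank multiplicities,
            # e.g. full house -> (2, 3), three of a kind -> (1, 1, 3)
            return tuple(sorted(Counter(rank for rank, suit in hand).values()))

        deck = [(rank, suit) for rank in range(13) for suit in range(4)]
        random.seed(1)
        tallies = Counter(hand_type(random.sample(deck, 5)) for _ in range(100_000))
        total = sum(tallies.values())
        H = -sum(c / total * log2(c / total) for c in tallies.values())
        print(f"Shannon entropy of hand types: {H:.3f} bits")
        # The estimate comes out the same on every batch of deals: drawing one
        # rare full house changes nothing, so there is no entropy decrease for
        # the microwave background to compensate.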

    In the cards-and-microwave-background example, the two systems are of course entirely decoupled. Each keeps its entropy unchanged. This is different from situations where the entropy of a system does go down. Then it must be compensated by an increase in entropy in another system coupled to it. We saw in the previous thread how it works for a glass of water with an ice cube or for dipoles in a ferromagnet. Below I will discuss another well-known example of entropy decrease: a computer register whose bits are all set to 0 starting from an arbitrary initial state. The computer produces heat, and the related entropy increase (measured in bits) is at least as large as the number of bits erased.
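
    For concreteness, here is the arithmetic behind the register example as a short script (a sketch only; the register size and temperature are illustrative numbers of my choosing, and entropy is quoted in units of k_B):

        # Landauer's bound for resetting an N-bit register to all zeros.
        from math import log

        k_B = 1.380649e-23   # Boltzmann constant, J/K
        T = 300.0            # temperature of the environment, K
        N = 64               # bits in the register

        dS_register = -N * log(2)            # entropy change of the register, units of k_B
        Q_min = N * k_B * T * log(2)         # minimum heat dumped into the environment, J
        dS_environment = Q_min / (k_B * T)   # at least N*ln(2): compensation works
        print(f"register: {dS_register:.1f} k_B; environment: >= +{dS_environment:.1f} k_B")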

    So it is with biological systems. They can reduce entropy by rearranging things around themselves. The resulting entropy decrease is amply compensated (many times over) by the thermal entropy they generate.

  4. olegt:
    So it is with biological systems. They can reduce entropy by rearranging things around themselves. The resulting entropy decrease is amply compensated (many times over) by the thermal entropy they generate.

    In my plant example the plant can grow, but that does not violate the Second Law of Thermodynamics because for the plant to grow, energy must arrive from the sun, and the increase of entropy as the sun gives off radiation is greater than any local decrease of entropy when the plant grows. (I think this is true even if the plant manages to grow without ever generating any heat. Thus I am focused on where the energy in the plant comes from, not where it goes to).

    As far as I can see Sewell in effect says plants can’t grow.

  5. Here are two, contrasting examples that are often used as illustrations of the concepts of temperature and entropy from the microscopic perspective.

    One is the general two-state system in which the constituents are atoms or spins, each of which can be in only one of two states. The other is the Einstein oscillator system consisting of a bunch of quantized harmonic oscillators, each of which can contain essentially an unlimited number of units of energy.

    The reason these toy models (they are not really toys; they actually come very close to describing real systems) are often portrayed together is that they are simple and they focus on the actual counting of energy microstates. The two-state system dispels a common misconception that entropy always increases when energy is added to a system. The Einstein oscillator system, on the other hand, matches the common perception that entropy increases as energy is added, as we see in solids, for example.

    The main difference is in the upper limit imposed on how much energy a given degree of freedom can hold.

    For the two-state system, we are given N identical atoms or spins that can be in only one of two states, excited or ground (or “up” or “down”). The number of microstates of the system when p atoms are in the excited state – each excited state containing one unit of energy – is simply the number of combinations of N things taken p at a time,

    Ω = C(N, p) = N!/((N – p)! p!)

    The entropy, after the system comes into equilibrium, is

    S = k ln Ω

    where k is Boltzmann’s constant.

    For purposes of illustration, we can set Boltzmann’s constant equal to 1, and we can take each unit of energy change equal to 1.

    The reciprocal temperature is obtained by

    1/T = ΔS/ΔE

    where we have taken the units of energy change to be 1 (ΔE = Δp = 1).

    We also take N to be large (on the order of 10^23) so that, in the discrete derivative, a unit change in energy is effectively infinitesimal compared with the total N.

    Then, for the two-state system we have

    1/T = ln((N – p)/(p + 1)),

    where p = 0, 1, …, N-1.

    In contrast, when we have an Einstein oscillator system made up of N quantized harmonic oscillators sharing p units of energy – where each oscillator can hold any number of units, so p can now go from zero to infinity – we have

    Ω = (N – 1 + p)!/[(N – 1)! p!].

    One arrives at this by thinking of the N – 1 partitions between oscillators plus the p units of energy as an entire set of (N – 1 + p) things taken p at a time.

    Then, repeating the same calculations as above, we get for the Einstein oscillator system,

    1/T = ln[(N + p)/(p + 1)],

    with p = 0, 1, …, infinity.

    Note that the only difference between these is the plus sign in the expression for the Einstein oscillator system. It makes a dramatic difference, however, as plotting the temperature T versus p shows.

    The temperature behaves as naively expected in the case of the Einstein oscillator system. However, in the case of the two-state system, the temperature starts out at a small positive value, increases to infinity as half the atoms go into the excited state, and then, just past the half-way point, the temperature jumps to negative infinity and then approaches zero from the negative portion of the graph.

    Notice also that the entropy of the two-state system decreases with added energy after half of the atoms go into the excited state.

    This is a pedagogical exercise that is often used in statistical mechanics courses to encourage students to explore the actual meanings of entropy and temperature. If one has never done this little exercise, there is a high probability that some serious misconceptions about temperature and entropy will remain.
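
    For anyone who wants to run the exercise, a short script reproduces both temperature curves from the formulas above (a sketch; N is kept small only so the plot is easy to read):

        # Temperature vs. energy for the two-state and Einstein oscillator
        # systems, with k = 1 and one unit of energy per excitation, as above.
        import numpy as np
        import matplotlib.pyplot as plt

        N = 100
        p = np.arange(0, N - 1)                       # units of energy in the system

        invT_two_state = np.log((N - p) / (p + 1))    # 1/T = ln((N - p)/(p + 1))
        invT_einstein = np.log((N + p) / (p + 1))     # 1/T = ln((N + p)/(p + 1))

        plt.plot(p, 1 / invT_two_state, label="two-state system")
        plt.plot(p, 1 / invT_einstein, label="Einstein oscillators")
        plt.xlabel("energy units p")
        plt.ylabel("temperature T")
        plt.legend()
        plt.show()
        # The Einstein curve rises monotonically; the two-state curve diverges
        # as p approaches N/2 and comes back negative, exactly as described above.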

  6. Erratum:

    The factorial is missing for the number of states in the Einstein oscillator system. It should read:

    Ω = (N – 1 + p)!/((N – 1)! p!).

    Fixed. --OT

  7. On the slightly less technical side of things, Sewell’s post begins with a complaint about his letter being rejected (yet again!). He quotes from the referee report:

    He [Sewell] quotes [Bob] Lloyd criticizing him for being too casual in equating entropy and disorder. He replies that he should not be faulted, since physics textbooks often discuss the second law in scenarios in which precise quantification is difficult. Surely, though, there is a big difference between the level of rigor called for when communicating the flavor of the second law to students, and the level to which one should be held when arguing that a major branch of science must be discarded as conflicting with the second law. Hand-waving arguments about films running backward are not adequate for drawing the momentous conclusions Sewell wants to draw. That was Lloyd’s point.

    Then Sewell counters with this amazing defense:

    In other words, it’s OK for physics texts to apply the second law to applications beyond thermodynamics, such as books burning or wine glasses breaking, but not to computers arising on a rocky planet, because that might threaten “a major branch of science.” Even publishing one letter to the editor that draws such a “momentous” conclusion must not be permitted!

    🙂

    I teach statistical physics at the graduate level. None of the textbooks that I have used (Landau and Lifshitz; Pathria; Kardar) treats entropy in a hand-waving way, by equating it with disorder. Sewell likely means undergraduate texts aimed at non-physics majors, who have to get through all of physics in two semesters and often don’t even have the requisite background knowledge such as probability theory.

    I have one such text on my shelf: Knight. He spends less than a page on the entire subject of entropy. All he can do in that space is wave hands and rely on analogy with disorder.

    But I am not sure that we should use such low standards for professional scientific and mathematical journals.

  8. olegt: I have one such text on my shelf: Knight. He spends less than a page on the entire subject of entropy. All he can do in that space is wave hands and rely on analogy with disorder.
    But I am not sure that we should use such low standards for professional scientific and mathematical journals.

    I have a hypothesis that I can’t prove generally, but I have found very few exceptions, if any.

    One of the textbooks on my shelves is Robert Bruce Lindsay’s Basic Concepts in Physics, which Lindsay wrote for non-majors back in 1971 (note that the Institute for Creation Research was started by Henry Morris in 1970, as I recall). This was during the time that Isaac Asimov and a few other popularizers of physics ideas were using disorder as a metaphor for entropy (thinking of ideal gases in their illustrations).

    Lindsay was a very good writer who wrote some philosophically laden works for the general public as well.

    My hypothesis (which I admit I cannot prove) is that many physics popularizers come from the ranks of physicists who have become over-enamored with philosophy, and in the process, end up bastardizing both the philosophy and the physics.

    Looking over many of these kinds of works suggests to me that writing for the public is hazardous to one’s professional health if one gets caught up in the temptations of celebrity and the ego trip of “sophisticated erudition.”

    I am not accusing Lindsay, or Paul Davies, or any of the other writers of consciously engaging in this kind of self-promotion; but I suspect that there is something about writing popularizations that seduces writers to engage in these kinds of flowery presentations that end up spreading misconceptions.

    The period throughout the 1970s to the 1990s was overwhelmed with the notions of entropy being synonymous with disorder. The “Scientific” Creationists were pushing this notion very hard, but they had unwitting help from some popularizers who were also good writers.

    Whether the attraction of writing popularizations is rooted in the attraction to philosophy or whether the tendency to use gee-whiz philosophical hooks to make a popularization interesting is the cause, I can’t say. But I suspect there is a strong correlation.

  9. Oleg — I think you’re mistaken on Sewell’s X-entropies. They are indeed novel quantities, and Sewell does define them in his AML paper (or at least, he gives an equation for them and an example). He defines “X-entropy” in equation 3, and describes it in the paragraph beginning with “However, there is really nothing special about ‘thermal’ entropy”. Sewell’s X-entropy is simply an abuse of classical entropy where heat and temperature are replaced with other quantities, like the concentration of some species.

    In his AML piece, equation 3 is a restatement of the classical definition of entropy S_t. When Q is heat and U is a temperature distribution, the units of S_t in his equation 3 are the usual classical entropy units, Joules per Kelvin (as they should be). But if you try to substitute “carbon concentration” for U, as Sewell erroneously claims you can, then the resulting “entropy” S_t has different units! His argument is therefore internally inconsistent, and an easy, obvious way to see it is that his equation has a units problem. In classical thermodynamics you cannot just arbitrarily replace heat with whatever you want. His claim that “there is really nothing special about ‘thermal’ entropy” is absurd — in classical thermo, entropy is defined in terms of heat.

    Now, you and I know that in principle one could validly parse up the different components of the entropy of a system — e.g., the configurational entropy of the carbon vs configurational entropy of the steel. But that’s not what Sewell’s X-entropies do.

    Sewell has defined a novel quantity, mislabeling it an “X-entropy” (which is not an entropy, wrong units), and claims that every possible X-entropy has its own “second law” associated with it. In effect, he has proposed new laws of physics out of whole cloth — laws that in fact violate the 2nd law of thermo in addition to trivial experimental observations.

  10. dtheobald,

    Doug,

    I agree, Sewell’s recipe for “carbon entropy,” in which he integrates (dn/dt)/n over the volume* to get its rate of change, gives a quantity measured in units of volume. This certainly does not look like entropy of any kind, which should be dimensionless. (Alternatively, one can multiply it by the Boltzmann constant to get the ridiculous SI units of J/K.)

    This dimensional inconsistency already indicates that his “carbon entropy” is a half-baked notion. In the next couple of comments I will dissect Sewell’s mathematical manipulations to show that what he does with thermal entropy is sound. His “carbon entropy,” however, is a failed attempt to reinvent the wheel. That quantity is a bastardized version of the configurational entropy of carbon impurities. It’s not surprising that he failed: he is an applied mathematician whose task is to compute things. Partial differential equations are his area of expertise. Statistical physics, not so much.

    *Here n(x,y,z,t) is carbon concentration.
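
    Doug’s units argument can even be made mechanical. Here is a sketch using the pint units library (assuming it is installed; the numerical values are placeholders of my choosing, since only the dimensions matter):

        # Dimensional check: thermal entropy vs. Sewell-style "carbon entropy".
        import pint

        ureg = pint.UnitRegistry()

        # Thermal entropy increment dS = Q/T comes out in J/K, as it should.
        Q = 1.0 * ureg.joule
        T = 300.0 * ureg.kelvin
        print((Q / T).units)                 # joule / kelvin

        # Sewell's recipe for the rate of change of "carbon entropy":
        # integrate (dn/dt)/n over the volume.
        n = 1e22 / ureg.meter**3             # carbon concentration
        dn_dt = 1e20 / (ureg.meter**3 * ureg.second)
        V = 1e-6 * ureg.meter**3             # sample volume (stand-in for the integral)
        print(((dn_dt / n) * V).units)       # meter ** 3 / second
        # Integrated over time, his "carbon entropy" carries units of volume,
        # not of entropy: the quantity is dimensionally inconsistent with S.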

  11. I have often suspected that the misconception ID/creationists have about the second law of thermodynamics is THE Fundamental Misconception of the ID/creationists. They have all inherited this “bad gene” from Henry Morris and Duane Gish, who may have picked it up from A.E. Wilder-Smith, although Morris often referred to Isaac Asimov.

    The earliest reference I have been able to find for Morris is from 1973, and it is still up on the ICR website. This argument was used in the book What is Creation Science? that Morris wrote with Gary Parker back in 1982, as I recall. Duane Gish was already using it to harass biology teachers in Kalamazoo, Michigan when he worked for the Upjohn Company in the 1960s. He quit Upjohn to join Morris in starting the ICR, but he would return to Kalamazoo on occasion. I knew one of the biology teachers he harassed.

    The nearly full-blown version of what Morris taught at the ICR is found in this video by Thomas Kindell who was a protégé of Morris. The graph in that video – the one that shows evolution making things better and better as compared with the second law saying everything is decaying – was also in Morris and Parker’s book.

    So this fundamental misconception was well-ingrained at the ICR before “Scientific” Creationism morphed into Intelligent Design after the US Supreme Court case, Edwards v. Aguillard, in 1987. I battled this misconception from the latter part of the 1970s into the 1990s, when I was giving talks to lay audiences about it.

    Nearly every ID/creationist I have encountered believes that the second law of thermodynamics is all about everything in the universe decaying and coming all apart. If one believes this is what all matter does, then one has to come up with some kind of intelligence that puts it all together. This is the tornado-in-a-junkyard thinking, and this is what is behind Sewell’s little gimmick about running a movie backwards of a tornado ripping up houses.

    Hence, Dembski, Sewell, et al. apparently believe – or are simply asserting – that physicists, chemists, and biologists are saying that atoms and molecules are just scattered around and are randomly selected with a uniform sampling distribution and put into highly improbable arrangements; and, of course, that is impossible. Hence intelligence and information are required.

    This misconception is so deeply ingrained in the thinking of ID/creationists, that even detailed explanations about what the second law and entropy really are all about simply roll right off their backs and they immediately revert right back to using the same misconceptions in continuing to argue. They are impervious to the real concepts.

    I suspect that the leaders of the ID/creationist movement drag this trope out from time to time to keep the misconceptions going; although I don’t see any evidence that Sewell or any of the others knows any better. Sewell seems genuinely miffed that his paper was rejected and that he isn’t being allowed to respond in Elsevier’s journals.

    Does he really think that Elsevier is going to let him jerk them around and sue them again?

  12. Monkey See, Monkey Do.

    I will dissect what Sewell is doing in his AML paper. His activities are nicely summarized by the title of this comment. He has observed how physicists derive thermal entropy from heat through some mathematical procedures. By analogy, he attempts to introduce his own quantity, “X-entropy,” exemplified by “carbon entropy.”

    In doing so, Sewell fails badly. Doug Theobald noted that “carbon entropy” has weirdly wrong units (volume). That is just the tip of the iceberg. Standard entropy is, technically speaking, a function of state, which means that it depends only on the current physical state of the system and not on its previous history (i.e., on how the current state came about).

    Sewell’s carbon entropy is not a function of state. If you start with a particular distribution of carbon, mess with the system and then put it back into the original state, the carbon entropy will be different. Like phlogiston, this variable does not even exist.

    It’s not difficult to fix Sewell’s error and introduce a real variable that does all he wanted it to do. That variable is a familiar quantity: configurational entropy. Contrary to his claims, this entropy is not necessarily separate from thermal entropy. The two can be added, compared, and can compensate for each other.
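
    To make that concrete, here is a minimal sketch (my notation, not Sewell’s) of the configurational entropy of carbon impurities on a lattice, the quantity his “carbon entropy” tries and fails to be:

        # Configurational (mixing) entropy of N_carbon carbon atoms distributed
        # over N_sites lattice sites: S = ln C(N_sites, N_carbon), with k = 1.
        from math import lgamma

        def ln_binomial(n, k):
            # ln C(n, k) computed via log-gamma to avoid huge factorials
            return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

        N_sites = 10**6                          # lattice sites in the sample
        for N_carbon in (10, 10**3, 10**5):
            S = ln_binomial(N_sites, N_carbon)   # dimensionless entropy
            print(f"{N_carbon:>8} impurities: S_config = {S:.4g}")
        # Unlike Sewell's recipe, this S is a function of state: it depends
        # only on the current arrangement statistics, and it can be added to
        # (and compensated by) thermal entropy in the usual way.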

    This mishap is not unexpected. Sewell is an applied mathematician whose specialty is partial differential equations. Statistical physics and thermodynamics are outside his area of expertise. He does not have a feel for these subjects and made some glaring errors.

  13. Again, all you have to do to refute what we say and the second law is demonstrate that blind and undirected chemical processes can produce a living organism from non-living matter.

    Your rhetoric isn’t going to cut it. But then again, your rhetoric is all you have.

  14. Part I. Monkey See,

    in which we will follow Sewell’s manipulations of heat, heat current, and temperature in Equations 1 through 5 of his AML paper. Everything is consistent with standard thermodynamics, but we will see what could go wrong if one tried to use a copy-and-paste approach in order to generalize this to other situations.

    Sewell relies on multivariate calculus to perform his mathematical manipulations. We will simplify the setting by switching from a continuous medium to a discrete system. The physics will remain intact, but the math will be much simpler: plain algebra.

    Let’s take discrete bodies set in a row and numbered from 1 to k. They can exchange heat with their neighbors. For example, body 3 only interacts with bodies 2 and 4. The bodies at the ends, 1 and k, can also exchange heat with the outside world.

    The bodies have temperatures T_1, T_2, …, T_k. When body 3 receives a small amount of heat Q_3, its entropy increases by ΔS_3 = Q_3/T_3. The heat comes from the neighbors: Q_3 = J_{23} + J_{43}, where J_{23} is the amount of heat body 2 gave to body 3. Our quantities J are discrete analogs of the heat current J Sewell uses in his equations. They have a sense of direction: for example, if heat flows from 2 to 3 then J_{23} > 0. The quantity J_{32} is equal in magnitude and opposite in sign to J_{23}: J_{32} = −J_{23}.

    We are now fully prepared to finish the derivation of Eq. 5 in Sewell’s paper. The change in entropy of the entire system ΔS is the sum of entropy changes of its components: ΔS = ΔS_1 + ΔS_2 + … + ΔS_k. Each of those is determined by the temperature of the body and the amount of heat flowing into it from its neighbors. For example, ΔS_2 = Q_2/T_2 = (J_{12}+J_{32})/T_2 and ΔS_3 = Q_3/T_3 = (J_{23}+J_{43})/T_3.

    Note that the same heat current, J_{32} = −J_{23}, enters the expressions for the entropy changes of bodies 2 and 3. We can group these contributions together, yielding J_{23}(1/T_3 − 1/T_2). This expression is never negative. If body 2 is warmer than body 3 then 1/T_3 − 1/T_2 > 0; by the second law, heat flows from 2 to 3, so J_{23} > 0; the product of two positive numbers is positive.

    The same works for all other pairs of neighbors, from (1,2) to (k−1,k). For all of them, the heat current times the difference of inverse temperatures is positive if the temperatures are different and zero if they are the same. That is the first term in Sewell’s Equation 4, coming from the bulk of the system. It is non-negative.

    The situation is slightly different at the edges of our system. Body 1 receives heat J_{21} from body 2 and also from the outside, J_{o1}. The former quantity was accounted for when we considered heat exchange between bodies 1 and 2, pairing J_{21}/T_1 with J_{12}/T_2. The heat flowing from the outside is not paired with anything, so we must keep that term, J_{o1}/T_1. The last body on the other side has a similar unpaired contribution J_{ok}/T_k.

    Putting everything together, we see that internal heat currents increase the total entropy. This shows that a system out of thermal equilibrium (here exemplified by nonuniform temperature) tends to increase its entropy. On top of that, entropy changes as a result of exchanging heat with the outside, in the amount J_{o1}/T_1 + J_{ok}/T_k. That is a discrete analog of the second term in Sewell’s Equation 4. He has a minus sign in front of it because he computes the heat flow out of the system, whereas we count the influx of heat. We can rewrite our result as −(J_{1o}/T_1 + J_{ko}/T_k) to conform with his. The net entropy change is greater than that because of the heat exchanges inside. That is the statement of Equation 5.
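
    Readers who prefer to see the bookkeeping numerically can check it in a few lines (a sketch of the discrete setup above; the temperatures and the conduction constant are arbitrary choices of mine):

        # Entropy bookkeeping for k bodies in a row exchanging heat.
        import numpy as np

        rng = np.random.default_rng(0)
        k = 6
        T = rng.uniform(1.0, 10.0, size=k)    # temperatures T_1 ... T_k

        # Heat currents between neighbors, directed as the 2nd law demands:
        # J[i] plays the role of J_{i+1,i+2}, flowing from warm to cold.
        J = 0.01 * (T[:-1] - T[1:])

        # Bulk term of Eq. 4: sum over neighbor pairs of J_{23}(1/T_3 - 1/T_2), etc.
        bulk = float(np.sum(J * (1.0 / T[1:] - 1.0 / T[:-1])))
        print(bulk >= 0.0)    # True for any temperature profile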

    So far so good. This is standard thermodynamics. What could go wrong?

    Looking at the result, Eq. 5, we note that it could be obtained for other quantities, not just entropy. For example, we could introduce quantity X whose increment is given by Q f(T), where f(T) is any decreasing function of temperature. Indeed, the same manipulations would yield the net change in X as the sum of bulk and edge contributions. The bulk contribution from heat exchange between bodies 2 and 3 would be J_{23}[f(T_3)−f(T_2)]. If body 2 is warmer than body 3 then J_{23}>0 by the 2nd law and f(T_3)−f(T_2)>0 by construction. Therefore quantity X tends to increase in an isolated system, thanks to the irreversible heat exchange when it is out of equilibrium. Entropy is but one example of such a quantity, for which we took a decreasing function f(T) = 1/T.

    Did we just discover a bazillion new 2nd laws of thermodynamics? Nope. Almost all of these new X variables are dismal failures. Those that do not fail are merely copies of the standard thermal entropy, obtained by setting f(T) = const/T. (For example, we could choose the multiplicative constant to be 1/k_B, the inverse Boltzmann constant. Then we would obtain entropy in dimensionless units, rather than in joules per kelvin.)

    Why can’t we make a new physical variable X with an arbitrary decreasing function f(T)? The mathematical reason is a bit arcane: its increment is not an exact differential. Physically speaking, most such variables X are not functions of state. If you take the system on a round trip, coming back to the exact same state, the value of X upon return will not be the same as it was when you started: the sum of increments Q f(T) does not, in general, add to zero. It does when f(T) = 1/T, but generally it does not. This makes a general X unphysical: it cannot describe the state of the system because it does not even take the same values in the same physical situations. This is why we have only one thermal entropy, not many different thermal entropies corresponding to different choices of f(T).
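
    A round trip makes this explicit. The sketch below, built on an ideal-gas cycle of my own choosing (it is not in Sewell’s paper), sums the increments Q f(T) around a closed loop; the sum vanishes for f(T) = 1/T and not for another decreasing function:

        # Round trip for an ideal gas: heat at constant V, expand at constant T,
        # cool at constant V, compress at constant T, back to the starting state.
        import numpy as np

        n_R, C_V = 1.0, 1.5                      # moles*gas constant, heat capacity
        T1, T2, V1, V2 = 300.0, 400.0, 1.0, 2.0  # corners of the cycle

        def cycle_sum(f, steps=100_000):
            total = 0.0
            # constant-volume legs: Q = C_V dT, heating then cooling
            for Ta, Tb in ((T1, T2), (T2, T1)):
                Ts = np.linspace(Ta, Tb, steps)
                total += np.sum(C_V * np.diff(Ts) * f(Ts[:-1]))
            # isothermal legs: Q = n_R * T * dV / V, expansion then compression
            for T, Va, Vb in ((T2, V1, V2), (T1, V2, V1)):
                Vs = np.linspace(Va, Vb, steps)
                total += np.sum(n_R * T / Vs[:-1] * np.diff(Vs) * f(T))
            return total

        print(cycle_sum(lambda T: 1.0 / T))     # ~0: entropy is a state function
        print(cycle_sum(lambda T: 1.0 / T**2))  # clearly nonzero: this "X" is not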

    This is not to say that Sewell botches thermal entropy. He has the correct choice f(T) = 1/T in that case. However, his “carbon entropy” is precisely this kind of unphysical variable. I will show that in a later comment.

  15. Setting mathematical questions aside for the moment, what do you all think of Sewell’s tautology?

    If an increase in order is extremely improbable when a system is closed, it is still extremely improbable when the system is open, unless something is entering which makes it not extremely improbable.

    Doesn’t Sewell’s tautology seem reasonable, from a scientific standpoint? Take, for example, an imaginary planet, with an atmosphere similar to Earth’s, and a continuous stream of radiation entering the planet from the sun. Assume further that we have a few centuries of close observational experience with the planet; we have a relatively good idea what to expect in terms of its observable behaviors. Based on our observations, we are as certain as we can be that no life exists on the planet, and that our planet is closed to all incoming forms of matter and energy except for radiation from the sun.

    Now suppose we take an isolated snapshot of some part of the planet on a given day and we observe a large rock hurtling through the air. This is certainly an unusual phenomenon. In fact, this is the first observation of a flying rock on our planet since we started careful observation a few centuries ago. Is the law of gravity being violated? Was the rock put into its extraordinary motion by magic? At least two possibilities come to mind that might reasonably explain the rock’s flight.

    Maybe a volcanic eruption–an eruption not captured in our isolated snapshot–forced the rock onto its current trajectory. We’ve never observed an actual volcanic eruption on our planet, but we have observed what appears to be evidence of past volcanic activity. Or maybe a tornado did it. We’ve observed violent windstorms on our planet, though never a tornado, and there’s no particular reason why a tornado might not have formed in a violent windstorm.

    Both possibilities seem reasonable, and can be explained as the results of complex processes in thermodynamic systems–volcanic, or atmospheric, as the case may be. So while the flying rock event is relatively improbable, the event is not vanishingly unlikely, given other behaviors we’ve observed.

    Now suppose a few days later that we take another isolated snapshot, and find, to our astonishment, that a plant-like thing, something that looks very much like a maize plant, appears. Upon closer inspection, we conclude that, not only does it look like maize, it is maize. It’s genetically consistent with Earth’s maize. It’s cellular, and the cells are metabolizing, reproducing, photosynthesizing, and doing all the things that maize cells do. The plant is growing! How would we explain its presence on our planet?

    Our first event, the flying rock, was relatively improbable, but reasonably explainable by extrapolating from observed behaviors and artifacts of past events. Our second event, the existence of a growing maize plant, is vanishingly unlikely—so unlikely, in fact, that from a purely scientific standpoint, we would feel compelled to resort to measures more extreme than simply extrapolating from past behaviors observed on our planet. We might, for example, feel forced to deny the validity of our “certain” knowledge that the planet had previously been lifeless. Or perhaps we’d question our conclusion that nothing was entering our planet but radiation from the sun; we’d insist instead that the maize had been very recently introduced from outside the planet.

    If maize plant growth is extremely improbable when our planet is closed, it is still extremely improbable when our planet is open, unless something is entering our planet which makes it not extremely improbable.

    To me, on the surface at least, this seems plausible. As does Sewell’s generalization.

    What do you all think?

  16. Kent_D: Setting mathematical questions aside for the moment, what do you all think of Sewell’s tautology?
    <snip>
    What do you all think?

    Frankly, not much. It’s the standard argument from incredulity.

    Besides, you are in the wrong thread, Kent. If you read the OP of this thread, or of the previous one, you will see that we are dealing here with the technical merits of Sewell’s papers. Your question is off topic here.

  17. Hi Oleg,

    I assume OP stands for “original post”?

    My thought experiment was not so much an argument from incredulity as an argument from plausibility. In my opinion, considering the plausibility of a proposition or some set of propositions has some technical value, if it has the potential of advancing our scientific understanding, and if plausibility does not somehow achieve the status of certainty apart from methodological rigor.

    Admittedly, plausible propositions are far from infallible. A plausible proposition might be roughly analogous to the results of a non-rigorous, back-of-the-envelope calculation, or to a hypothesis. All three–plausible propositions, back-of-the-envelope calculations, and hypotheses–have their place in the scientific endeavor, even though additional consideration might render any particular instance of them invalid.

    I’d already read through the “A Second Look at the Second Law” thread from Mar. 25, including all comments. Likewise for the present thread. Points raised in those threads prompted my comments. If you don’t think my comments are appropriate to this thread, would you kindly direct me to a relevant thread, or start a new one?

    Thanks, and regards.

  18. Kent,

    Plausibility is a slippery notion, sometimes useful if it can help form testable hypotheses. However, in general an argument from plausibility and an argument from incredulity are the same thing – if you don’t find it plausible, you don’t find it credible.

    We’d do well to remember the old aphorism that there is nothing so reasonable as a shared prejudice.

  19. Oleg,

    We’d do well to remember the old aphorism that there is nothing so reasonable as a shared prejudice.

    Agreed. But it’s one thing to say, “I find your proposition incredible at first blush, but I’m open to correction. What am I missing?” It’s entirely another thing to say, “I find your proposition incredible, therefore I reject it outright. End of discussion.” I hope, in my quest for knowledge, that I’m an example of the former, and not the latter.

    Now, in my thought experiment, wouldn’t you consider the appearance of maize on my imaginary planet a highly improbable event? If so, why?

    (If you think I’m headed toward some bizarre caricature evolutionary theory, you’re wrong. I’m asking this question in all seriousness, I am headed somewhere with it–in a direction that I hope might elucidate the present discussion, or dispel some ignorance on my part, or both. Thanks in advance for your patience.)

  20. Sorry–I meant to type “If you think I’m headed toward some bizarre caricature of evolutionary theory, you’re wrong.”

  21. Kent_D: Now, in my thought experiment, wouldn’t you consider the appearance of maize on my imaginary planet a highly improbable event? If so, why?

    This seems ripe for a new thread.

  22. Kent_D: What am I missing?

    Pretty much ALL of physics, chemistry, and biology.

    Did you know that matter condenses? Do you know why? Check it out.

  23. @Mike,

    Did you know that matter condenses?

    Are you referring to the condensation of vapor into a liquid state?

    @all:

    I’ll defer any additional comments or responses until a new thread (hopefully forthcoming) is created, so as not to pollute the current thread with potentially irrelevant comments.

  24. Kent_D: Are you referring to the condensation of vapor into a liquid state?

    That is but one small example of a general rule in the universe.

    Do you know why matter condenses? Do you understand the process? Do you have any idea of what it has to do with the second law?

    Your scenario – and that of Sewell – is meaningless if you don’t understand basic physics, chemistry, and biology along with the kinds of systems they study.

  25. olegt: This is not to say that Sewell botches thermal entropy. He has the correct choice f(T) = 1/T in that case. However, his “carbon entropy” is precisely this kind of unphysical variable. I will show that in a later comment.

    I like your discrete version of Sewell’s equation. It aids in the transition from continuous, differentiable fields to scattered, discrete constituents that can be connected only by, say, a radiation field or a particle field.

    One of the problems of attempting to make arguments from continuum fields is that one has to get into math issues like conservative versus non-conservative fields, integrating factors, connectedness, poles, analyticity, and a whole series of other issues that divert from the basic physics.

    When I first got drawn into having to answer questions about the second law to lay audiences back in the 1970s – I was in the wrong place at the wrong time, I guess – I used too much math. It doesn’t work for lay audiences; and I finally got some good advice – which I resisted at first – to lay off the math and find other ways to explain the physics.

    The advice turned out to be good, because the problem laypersons have – as do the ID/creationists – is that they don’t even understand the basic physics of something falling into a potential well and staying there.

  26. Now, in my thought experiment, wouldn’t you consider the appearance of maize on my imaginary planet a highly improbable event? If so, why?

    I’ll bite. I find it not improbable, but lacking in history. It’s a forensic mystery.

  27. Kent_D:
    Now, in my thought experiment, wouldn’t you consider the appearance of maize on my imaginary planet a highly improbable event? If so, why?

    (If you think I’m headed toward some bizarre caricature evolutionary theory, you’re wrong. I’m asking this question in all seriousness, I am headed somewhere with it–in a direction that I hope might elucidate the present discussion, or dispel some ignorance on my part, or both. Thanks in advance for your patience.)

    I see from a later comment that you’re waiting for a new thread to further pursue this line of thought. I strongly encourage you not to do so in this pseudo-Socratic fashion. This is a cheap rhetorical technique designed to put one person in an authoritative position, eliciting answers from those less enlightened.

    If you’ve got an argument to make, make it. Put your assumptions, evidence, and reasoning out where anyone can see it. You’ll earn (and deserve) more respect by being straightforward and brutally honest.

  28. Kent_D:
    Take, for example, an imaginary planet, with an atmosphere similar to Earth’s, and a continuous stream of radiation entering the planet from the sun. … What do you all think?

    I think I’d like to see the underlying presumptions upon which you’ve constructed your Gedankenexperiment. That way, once you’ve arrived at whatever conclusion you’re gonna arrive at, it’ll be easier to tell if you managed to assume your conclusion up front.

  29. Now, in my thought experiment, wouldn’t you consider the appearance of maize on my imaginary planet a highly improbable event? If so, why?

    Well, the evolution of any particular thing is amazingly improbable. However, given self-replicating organisms with random variation, the evolution of something is assured.

    JonF,

    So the appearance of an exact duplicate of maize would be incredibly improbable, but it seems pointless to ask.

  30. @Patrick:

    You regard my current inquiry as “pseudo-Socratic” questioning and “cheap rhetorical technique designed to put one person [myself (Kent), in this case] in an authoritative position, eliciting answers from those less enlightened.” But you misunderstand my intent, and my own view of myself with respect to the pro-evolution scientists who participate on TSZ. If my tone seems condescending, I apologize–that is certainly not my intent. I have genuine regard for science, and scientific practitioners. But I have a greater regard for truth, so perhaps I may be pardoned if I ask seemingly impertinent questions. Except in those areas where I might have some subject matter expertise (software engineering, for example), I’m asking as a student of science, and not as a self-perceived teacher.

    As to laying out my argument(s) explicitly, I’m not certain that I have an argument to make…yet. Any attempt to formalize my thoughts at this point would run smack into ambiguities and our “talking past each other”. As a case in point, check out Mike Elzinga’s last posted comment in the Second Look thread of Mar. 25. Mike opines that variously held conceptions and misconceptions of the word “entropy” have muddied the waters of the origins debate. I want to get any confusions about entropy in my own mind cleared up before I open my mouth on that topic, lest I muddy the waters with straw-man or other fallacious arguments.

    I am, I hope, a reasonably intelligent “lay audience”, ready-made for potential persuasion that ID is wrong. I’m willing to do some grunt work on my own to master at least some of the relevant science, if you all are willing to point me in the right direction.

    All that being said, my question (as a sincere inquirer) still stands:

    Is the appearance of maize on my imaginary planet highly improbable, or not? If so, why?

    I’m still waiting for an answer.

    If you need at least a fragmentary assertion/argument to justify my asking the question: I suggest that my thought experiment represents a specific instance of Sewell’s general tautology in action, and that, for this particular instance, his tautology seems to hold in light of observed law in our real physical universe. I.e. his tautology seems to me to be scientifically sound in at least this one case, and in many other conceivable real-world cases.

    What motivated my current line of questioning? I do not understand how Joe Felsenstein and others conclude, on the basis of Sewell’s Mathematical Intelligencer article submission, that Sewell’s conception of entropy would preclude the growth of plants. Perhaps that’s because I’m not up to the math, or simply have not thought through the issues sufficiently. But it seems to me that either (1) Sewell has made statements that are ambiguous, which might be understood differently if clarified; (2) Sewell made his assertions too broadly (i.e. they might indeed apply to a limited extent, but not so generally); or (3) Joe and others are misunderstanding or misconstruing (but not maliciously) what Sewell said.

    To echo what some earlier commentators have opined, it would be nice to have Sewell’s participation in the discussion.

    Sorry to be so verbose.

  31. @Cubist:

    I think I’d like to see the underlying presumptions upon which you’ve constructed your gendankenexperiment.

    I don’t intentionally presume anything. I assume that my imaginary planet operates by the same general laws that govern physics, chemistry, biology, etc. in our own real universe. I assume (for the sake of argument) that there is no post-big-bang tinkering with natural processes or natural law by any agent external to, or prior to, the physical universe. I leave open the possibility of pre-big-bang front-loading by a creator, but do not consider that to be relevant one way or another to my thought experiment.

    Can you think of any other assumptions (hidden or not) that might be relevant?

  32. Kent_D,

    In your hypothetical case, the sudden appearance of maize would be extraordinary considering that the same plant already exists on Earth (and only after many thousands of years of human selection upon the original plant, Teosinte) and that there was no previous indication of life. However I don’t see how your analogy applies to Sewell’s argument and the first life form.

    A better hypothetical scenario would be the appearance of a rudimentary prokaryote. But because the prevailing argument is that the prokaryote evolved from a less complex form, the point you are trying to make is only valid if we assume the first life form must have been a single-celled organism. And that’s a mighty big IF.

    @JonF:

    Well, the evolution of any particular thing is amazingly improbable. However, given self-replicating organisms with random variation, the evolution of something is assured.

    Certain kinds of evolution are vastly more probable than others, in my opinion. We may safely take the current existence of self-replicating organisms as a given, and of course then microevolution is virtually assured. However, we may not take the existence of self-replicating organisms beyond some point in the distant past as a given. Which of course brings us to the question of abiogenesis.

    So the appearance of an exact duplicate of maize would be incredibly improbable, but it seems pointless to ask.

    On the contrary, my question, and your answer, are both highly relevant to the potential credibility of Sewell’s tautology. The reason that the maize is so improbable is precisely because Sewell’s tautology holds–at least in this case. And Sewell’s tautology holds in this case because my imaginary world is operating according to the physical laws of our real universe. Indeed, it seems to me that Sewell’s tautology can only hold where such law is operative.

  34. Your example is irrelevant in the same way and for the same reason that we had the big fight at UD over whether a moon can exist and not exist at the same time.

    We do make judgements about the probability of things happening. We seldom calculate the probability of something like your immaculate maize, but it could be done.

    But such speculative fancies are irrelevant when discussing ordinary chemistry and ordinary progressions and ordinary incremental change.

  35. Kent_D: On the contrary, my question, and your answer, are both highly relevant to the potential credibility of Sewell’s tautology.

    No, Sewell’s construction needs to hold on its own. So far it doesn’t, purely on technical grounds. It’s easy to come up with stories like a tornado in a junk yard. It’s not so easy to show that physics laws or their extensions preclude biological evolution. He tried the latter route and failed. If he wants to fall back on tornado-in-the-junk-yard bullshit, he is free to do so. But one is not a substitute for the other.

    And with that, I’d like the discussion to return to the technical points. Thanks to all for understanding.

  36. Kent_D: I want to get any confusions about entropy in my own mind cleared up before I open my mouth on that topic, lest I muddy the waters with straw-man or other fallacious arguments.

    If you really want to do this, then do it. Don’t waste your time on the red handkerchief.

    Maize is irrelevant and misleading. Sewell has overlain his misconceptions about chemistry, physics, and biology – as well as the history of life on this planet – with an irrelevant scenario drawn from the annals of ID/creationist literature. It is a bamboozle.

  37. @rhampton7:

    In your hypothetical case, the sudden appearance of maize would be extraordinary considering that the same plant already exists on Earth (and only after many thousands of years of human selection upon the original plant, Teosinte) and that there was no previous indication of life.

    My point exactly. It would be extraordinarily extraordinary.

    However I don’t see how your analogy applies to Sewell’s argument and the first life form.

    I have not extrapolated or applied Sewell’s tautology to the first life form, and for the moment, I leave open the question of how and why it might apply. All I have done so far is to suggest that there is at least one (hypothetical) instance where his tautology does plausibly hold. And I do not think we would have to look far in the real world to find examples where it actually holds.

    Note further that I have only advanced my argument at an abstract level. I’ve left aside concrete questions of entropy and thermodynamics, which I do not consider myself technically equipped to discuss, except on a superficial level. But Sewell’s tautology, if we restrict ourselves to the words of the tautology alone, and not to his peripheral remarks, does the same. The tautology may be regarded as an abstraction of observed physical behavior. (I’m not claiming his peripheral remarks are unimportant; I’m just saying that they’re unimportant for my present purposes.) Whether Sewell’s abstraction is a good one in some cases, whether it is at least partially credible, is the question at hand.

    A better hypothetical scenario would be the appearance of a rudimentary prokaryote.

    The appearance of a rudimentary prokaryote would be illustrative, but to a lesser extent. I was reaching for an event so improbable as to be virtually impossible.

  38. As far as I can tell, Sewell is typical in the respect that he starts with his conclusions (which originate as ramifications of a misguided religious faith), and tries to find some way to misrepresent reality into fitting them. When all of the smoke and mirrors are decoded and extracted, all that’s left are arguments no different from Joe G’s, namely that things must be the way I WANT them to be because, uh, because that’s what I WANT. So there!

  39. Mike Elzinga: I like your discrete version of Sewell’s equation. It aids in the transition from continuous, differentiable fields to scattered, discrete constituents that can be connected only by, say, a radiation field or a particle field.

    Thanks for the kind words, Mike. I follow this teaching philosophy: keep the math simple and focus on the physics. By stripping away the complications of the continuum formulation, one can more easily see the connection of Sewell’s math to the 2nd law and figure out how the hell he came up with his misguided idea of “X-entropy.” More on that later.

  40. Kent_D:
    . . .
    I want to get any confusions about entropy in my own mind cleared up before I open my mouth on that topic, lest I muddy the waters with straw-man or other fallacious arguments.
    . . .
    What motivated my current line of questioning? I do not understand how Joe Felsenstein and others conclude, on the basis of Sewell’s Mathematical Intelligencer article submission, that Sewell’s conception of entropy would preclude the growth of plants.

    Fortuitously, Dr. Felsenstein provided some details on this very blog: http://theskepticalzone.com/wp/?p=639&cpage=1#comment-9085

    With respect to learning more about entropy, this blog also boasts Mike Elzinga as a participant. He has provided several comments on the topic of entropy that I wish I’d read before learning it the hard way. These include:

    2LoT trouble

    Does intelligence violate the 2LoT?

    Granville Sewell vs Bob Lloyd


    The first thing you need to realize is that “entropy” and “disorder” are not synonyms.

  41. Kent_D: But it seems to me that either (1) Sewell has made statements that are ambiguous, which might be understood differently if clarified; (2) Sewell made his assertions too broadly (i.e. they might indeed apply to a limited extent, but not so generally); or (3) Joe and others are misunderstanding or misconstruing (but not maliciously) what Sewell said.

    Sewell’s statements are exactly what we are discussing here. They are pretty unambiguous, at least to a specialist. You, on the other hand, cut in with a hypothetical scenario that is not even remotely connected to the discussion. Once again, if you wish to start your own thread, by all means ask Elizabeth and she will help you with that. If you have questions about the technical side (entropy and 2nd law), don’t be afraid to ask. We’re here to help.

  42. olegt,

    Alternatively, one can multiply it by the Boltzmann constant to get the ridiculous SI units of J/K

    I realize you have a bee in your bonnet over the classical SI entropy units (Joule per Kelvin). I see these units arguments as mostly philosophical, as similar arguments can be made against almost all units in physics; perhaps Planck units are the most natural. The root reason why entropy has SI units of J/K is that temperature is measured in K. If temperature were measured as an energy (in Joules, being roughly a measure of the average kinetic energy per degree of freedom), then classical entropy would be unitless. But temperature is a special sort of energy, and so it does make some sense to give it units that distinguish it from other energies. It’s also not at all obvious, macroscopically, that temperature is an energy. So R (the gas constant) and k_B (Boltzmann’s constant) are the proportionalities that “convert” temperature into a corresponding energy. Furthermore, there are some good general reasons for using unit systems like SI rather than Planck. The main one is that by keeping track of units you can avoid basic errors of calculation. For instance, the one that Sewell made.

  43. Kent_D:
    @JonF:

    Well, the evolution of any particular thing is amazingly improbable. However, given self-replicating organisms with random variation, the evolution of something is assured.

    Certain kinds of evolution are vastly more probable than others, in my opinion.

    Get back to us when you have evidence rather than opinion.

  44. dtheobald: The root reason why entropy has SI units of J/K is because temperature is measured in K. If temperature were measured as an energy (in Joules, being roughly a measure of the average kinetic energy per degree of freedom), then classical entropy would be unitless.

    That’s what it looks like from the perspective of thermodynamics. Temperature is introduced, historically, before entropy, with its own units, so entropy has no choice but to be measured in joules per kelvin. It looks quite different from the perspective of modern statistical physics, where entropy is introduced before temperature as the logarithm of a phase-space volume. It is naturally dimensionless in this approach. Then inverse temperature is introduced as the rate at which entropy varies with energy, 1/T = dS/dE. Now temperature has no choice but to be measured in joules.

    I am not going to argue about which choice is proper. It’s a matter of taste. My experimental colleagues measure energy in kelvins, millielectronvolts, and inverse centimeters. I have no problem with that.

    As to Sewell’s errors, the unit mismatch is obvious in both systems of units.

  45. Welcome to TSZ, Kent!

    Yes, if you’d like to post an OP, let me know. Sorry I’ve been distracted by Cornelius Hunter’s blog.

    I’ll set your permissions tomorrow anyway.

    Cheers, Lizzie

  46. JonF: Get back to us when you have evidence rather than opinion.

    I dunno, I think I share that opinion. Eyes and flippers seem pretty evolvable.

  47. dtheobald: The root reason why entropy has SI units of J/K is because temperature is measured in K. If temperature were measured as an energy (in Joules, being roughly a measure of the average kinetic energy per degree of freedom), then classical entropy would be unitless.

    The histories of the temperature scale and the units for energy are interesting in themselves. They developed independently; and even Galileo, for example, used a column of expanding wine as a “thermoscope.”

    Isaac Newton came up with his law of cooling long before the Fahrenheit and Celsius scales were established. He had an interesting scale made up of various familiar phenomena, such as when various things melted and the colors of familiar hot things, to make a crude temperature scale on which he placed numbers.

    The entire enterprise of developing a “temperature” scale was to measure “degrees of heat;” in other words, to quantify how much “in your face” a hot object was. Generally one wanted the numbers to get bigger the more intense the heat.

    Most folks are now familiar with the mercury thermometer invented by Fahrenheit; and they think of temperature as an independent kind of measurement just as they think of distance and time.

    A good portion of my research was in low temperature physics and superconductivity. Establishing temperature scales is not trivial in this line of work. Nor is it at high temperatures. Many phenomena are employed as “converters” that take measurements of a particular phenomenon and change it to temperature. For example, one could make the Boltzmann constant have units of ohms per Kelvin, or volts per Kelvin or, as is often the case, electron volts per Kelvin. But one has to know in detail the physics of what is going on in these kinds of transducers.

    The issue comes down to establishing conventions that everyone can use; and in most cases these conventions have deep roots in historical precedents. A dimensionless Boltzmann constant is clearly what it boils down to; but we are stuck with history. In some lines of work, different conventions make the concepts easier to work with by not having to carry along a bunch of conversion factors. The conversions can be put in at the end of a process of calculations or on the graphs. Even the unit sizes are chosen for convenience; such as the electron volt instead of the joule.

    Dimensional analysis is useful in physics; and it is often the case that working with dimensionless constants is useful for getting to the heart of the matter without being bogged down with the clutter of conventional units.

  48. JonF: Get back to us when you have evidence rather than opinion.

    The evidence comes from the pro-evolution camp. If you’re a proponent of gradualistic evolution (as opposed to some kind of radical saltationism), my statement is a simple necessary inference from the available evidence, as interpreted through the lens of the new synthesis. Macroevolution (one kind of evolution) is many orders of magnitude less probable than microevolution (another kind of evolution). That something will be produced in, say, a few hundred generations is virtually certain (barring some sort of extinction event); that that new something will be massively more complex than the baseline organism from which it descended is far less certain.

    Am I missing something?
