Does quantum entanglement violate relativity?

Ever since the implications of quantum entanglement between particles became unavoidable for physicists and cosmologists, doubts about the accuracy or completeness of Einstein’s special and general theories of relativity have become real… Einstein himself called quantum entanglement “spooky action at a distance” because faster-than-light transfer of information between two entangled particles (no matter the distance between them) would violate relativity and the foundations of one of the most successful theories in science…

Recently, however, several experiments have confirmed that entanglement is not only real but that it also seems to violate relativity.

The first experiment provided a lower bound on the speed of entanglement, which was measured to be at least 10,000 times the speed of light.

In the second experiment, scientists were able to send data via quantum entanglement over a distance of 1,200 km. The next OP will be on this theme…

Quantum entanglement is a phenomenon in quantum physics in which two particles, such as photons or electrons, become entangled: their quantum states, or properties, become interdependent. Any change to a property of one entangled particle instantaneously (or at least faster than the speed of light) affects the other. Einstein believed that the exchange of information faster than the speed of light would create paradoxes, such as sending information into the past. That was one of the reasons Einstein and many other physicists rejected quantum mechanics as either incomplete or false. And yet, to this day, no experiment has ever contradicted any of the predictions of QM.

As the experiments clearly show, the speed of entanglement is at least 10,000 times faster than the speed of light, and if that is the case, then entanglement violates relativity, as quantum information about the quantum state of one entangled particle instantaneously affects the other entangled particle…

So, if that is true, as it clearly appears to be, why didn’t we hear about it on the news?

What I would like to do with this OP is to get everyone involved to state their opinion or provide facts about why this news has not been widely spread or accepted…

As most of you probably suspect, I have my own theory about it…Yes, just a theory…for now… 😉

BTW: I love quantum mechanics…
Just like Steven Weinberg once said: “Once you learn quantum mechanics you are really never the same again…”

501 thoughts on “Does quantum entanglement violate relativity?”

  1. Mung,

    So you’re still unable to justify your rejection of the observer-dependence of entropy?

    keiths:

    In earlier discussions, you objected to my characterization of entropy as observer-dependent. However, that objection never made sense to me, given that observers can differ in the amount of information they possess about the exact microstate of a given system. Different quantities of missing information lead to different assessments of entropy, even though the system is the same in both cases.

  2. Mung: Of course. An inventory of what current textbooks teach would probably be revealing. Not that I am offering to do that.

    Thanks. That’s my sense as well. There were a number of textbooks cited in the last couple of threads on this. I doubt there’s been some universal conversion to one interpretation since then, but I could be wrong.

  3. walto:
    Mung, is it your sense that there are a number of different views on the matter–that there’s no consensus at present?

    What I see as missing from this discussion is consideration of the context of the questions about the nature of entropy. For example, here is a paper that argues that the spreading metaphor is best, assuming that you want an explanation of entropy that best incorporates energy, space, and time. That would make sense in textbooks for explaining entropy to chemists, for example. But perhaps not for textbooks in other domains or in popularizations that wanted to stick with counts and macrostates for simplicity.

    On the other hand, if one is looking to explain entropy in a way that generalizes to many domains — counting, energy/thermodynamic, information, quantum information — then the formula
    S = −k Σᵢ p(i) log p(i)
    works for all of them, as long as one sets k and the probabilities p(i) appropriately (the formula does need to be generalized for the quantum case).
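
    To make that concrete, here is a minimal Python sketch (the entropy helper is mine, purely illustrative) showing how the one formula covers both the counting/information case and the thermodynamic case, just by the choice of k and the logarithm base:

    ```python
    import math

    def entropy(probs, k=1.0, log=math.log2):
        """Generic entropy: S = -k * sum(p_i * log(p_i)), skipping zero-probability terms."""
        return -k * sum(p * log(p) for p in probs if p > 0)

    # Counting/information case: a fair coin, with k = 1 and log base 2, gives bits.
    print(entropy([0.5, 0.5]))                        # 1.0 bit

    # A uniform distribution over W microstates reduces to S = k * log(W) (Boltzmann's form).
    W = 1024
    print(entropy([1 / W] * W))                       # 10.0 bits

    # Thermodynamic case: k = Boltzmann's constant and the natural log give joules per kelvin.
    k_B = 1.380649e-23
    print(entropy([1 / W] * W, k=k_B, log=math.log))  # ~9.6e-23 J/K
    ```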

    But there is still controversy. The nature of the probabilities in this formula is a controversy in statistical mechanics and in science in general. Are probabilities ontic (part of the world) or epistemic (only in us and our limited knowledge)? Also, are they objective or subjective? (This is a separate issue from ontic/epistemic.)

    I think the objective versus subjective issue is a part of what motivates the differences in view in some of the previous exchanges in TSZ on explaining entropy.

    For the ontic case, I believe the probabilities are always objective. But that is not true for the epistemic case. For here we can separate the probabilities of an ideal, rational, fully scientifically informed human observer from those of a person without those qualities. The first are objective, the second subjective.

    And, in a different way, we can also separate the entropy of a demon who could only perceive and conceive of the world in microstates from that of a human observer who perceives and conceives of the world first in macrostates.

    That’s enough for this post. More in next.

  4. BruceS,

    To continue my post on whether we should consider the probabilities in the entropy formula to be objective or subjective, assuming they are epistemic.

    Consider first the case of a non-ideal human observer who is nonetheless partly familiar with the calculation of entropy. My view is that such non-ideal observers can get the estimate of entropy objectively wrong because they would make incorrect predictions about the world based on their estimate of entropy. For example, the person who did not understand the entropy (and enthalpy) of protein folding would not predict what happens correctly. Or, for an example closer to the heart of posters at TSZ, the person who did not understand the role of the sun in entropy on the earth would make incorrect predictions about possible changes in the complexity of life over time.

    So I say epistemic probabilities can be objective if we accept a standard of correctness as the marker of objectivity. Since entropy is a scientific concept, that standard is accurate empirical prediction (ETA) and adherence to norms of science. The ideal rational observer makes such predictions. Or, if you are uncomfortable with how to define “ideal”, then take best current science, although you then have to accept that correctness has to allow for the fallibility of science. The point is that the probabilities in the entropy formula can be both epistemic and objective.

    What about the demon who only observed and conceived of the world as a series of microstates which were the exact position and momentum of each molecule? In this case, I do agree that the objective calculation of entropy for the ideal demon would differ from the ideal human scientist. But the demon’s predictions would also differ from the scientist, since the demon would base those predictions on a different set of conceptions.

  5. Bruce,

    What I see as missing from this discussion is consideration of the context of the questions about the nature of entropy. For example, here is a paper that argues that the spreading metaphor is best, assuming that you want an explanation of entropy that best incorporates energy, space, and time.

    That paper came up in the earlier thread, where I pointed out a flaw in its handling of the entropy of mixing. More on that particular flaw later.

    If, like Neff, you want “an explanation of entropy that best incorporates energy, space, and time”, you need to

    a) look at the available, correct explanations of entropy, and
    b) pick the one that “best incorporates energy, space, and time”.

    The problem is that the energy dispersal explanation fails the correctness test. It’s a false explanation.

    Now, you might argue that for certain purposes — when teaching beginners, for instance — the false explanation of entropy is better as a stepping stone than a true explanation would be, just as Newtonian physics is a better first stepping stone to a mastery of physics than quantum mechanics or general relativity would be.

    That’s fine, but in this thread we are concerned about what entropy is in reality. It’s definitely not energy dispersal, so the energy spreading explanation fails the correctness test.

    I have yet to see anyone come up with a scenario in which the missing information interpretation fails.

  6. Bruce,

    But there is still controversy. The nature of the probabilities in this formula is a controversy in statistical mechanics and in science in general. Are probabilities ontic (part of the world) or epistemic (only in us and our limited knowledge).

    Unless you reject the notion that a thermodynamic system is in a single microstate at any given moment, then the probabilities we are concerned with are purely epistemic. The ontic probability (which I referred to as the “metaphysical probability” in the earlier thread) is 1 for the actual microstate and 0 for all others, so using ontic probabilities would always give an entropy of zero, which isn’t useful.
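
    Spelled out in Bruce’s formula, with probability 1 assigned to the actual microstate and 0 to every other one:

    ```latex
    S_{\text{ontic}} \;=\; -k \sum_i p_i \log p_i \;=\; -k \,(1 \cdot \log 1) \;=\; 0 .
    ```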

    Since the probabilities are epistemic, not ontic, they can vary among observers depending on the information possessed by each. That means that entropy is observer-dependent (though not subjective!) as well.

  7. Bruce,

    The point is that the probabilities in the entropy formula can be both epistemic and objective.

    Right. And they can remain epistemic and objective even when they differ from observer to observer.

  8. Now let me explain Neff’s error regarding the entropy of mixing.

    Sal quoted Neff as follows:

    Mixing of Gases

    Consider two gases, each with N molecules, initially in separate but equal volumes V separated by a central partition, as shown in Fig. 4(a). The entire system has temperature T . If the partition is removed, the gases spontaneously mix together, there is no temperature change, and the entropy change is

    ΔS = 2NkB ln 2 for distinguishable gases,

    0 for identical gases.

    For distinguishable molecules, process I yields the standard “entropy of mixing”,
    ….
    Here is an energy spreading interpretation. For distinguishable gases, energy spreads from volume V to 2V by each species in process I, accounting for SI = 2NkB ln 2. Note that upon removal of the partition, the energy spectrum of each species becomes compressed because of the volume increase and, most important, spreads through the entire container. That is, energy states of “black” particles exist in the right side as well as the left, with a corresponding statement for “white” particles.

    I responded:

    Sal, quoting Harvey Leff:

    ΔS = 2NkB ln 2 for distinguishable gases,

    0 for identical gases.

    Let that sink in, Sal. Do you think the laws of physics “look” to see whether the gases are distinguishable to an observer before deciding whether and how the energy should disperse?

    The idea is ludicrous. The gas behavior, and any consequent energy dispersal (or lack thereof), is the same whether the molecules in Leff’s experiment are “black” or “white”. The only difference is that the observer can look to see where the “black” molecules are versus the “white” molecules, thus distinguishing between microstates that would be indistinguishable if the molecules were all the same “color.”

    Entropy is a measure of missing knowledge, not of energy dispersal.
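
    (For reference, a minimal sketch of where Leff’s numbers come from, assuming ideal gases and the standard free-expansion result that each species gains NkB ln(V_final/V_initial) of entropy when its accessible volume grows:)

    ```latex
    \Delta S_{\text{distinguishable}}
      = \underbrace{N k_B \ln\tfrac{2V}{V}}_{\text{``black'' gas}}
      + \underbrace{N k_B \ln\tfrac{2V}{V}}_{\text{``white'' gas}}
      = 2 N k_B \ln 2,
    \qquad
    \Delta S_{\text{identical}} = 0 .
    ```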

  9. keiths:
    Bruce,

    Right. And they can remain epistemic and objective even when they differ from observer to observer.

    I would say that, furthermore, one human observer can be correct and another human observer wrong, where the standard of correctness is conformance with the predictions and norms of the science appropriate for the type of prediction being made.

    I’m not sure if you agree with that.

  10. BruceS: that standard is accurate empirical prediction (ETA) and adherence to norms of science.

    Can you provide a comprehensive and noncontroversial list of those norms?

    Are you referring to the Mertonian ethos?

    Thanks

    peace

  11. keiths: So you’re still unable to justify your rejection of the observer-dependence of entropy?

    That is correct, as far as it goes. So I’m not going to argue with you about it. 🙂

  12. BruceS: I still think Barrow’s arguments apply in the sense that if no TOE is possible, it won’t be because of Godelian limits on formal systems…

    OK, but as far as Jaki is concerned, he does not use Gödel to argue that a ToE is impossible. Jaki says a ToE is possible.

    I found it a bit odd how Barrow divided people into optimists and pessimists and put Jaki on the pessimist side. Given Barrow’s criterion for what makes one an optimist, I would put Jaki on the optimist side.

    All of this is really neither here nor there. Not trying to argue with you. Welcome back, btw.

  13. walto: I doubt there’s been some universal conversion to one interpretation since then, but I could be wrong.

    I will speculate that more are adopting the information-theory approach. I’ll see if I can dig up some that I did find.

  14. keiths:
    Now let me explain Neff’s error regarding the entropy of mixing.

    Yes, I do agree that Neff was stretching (so to speak) “dispersal” to answer that scenario by making volume dependent on whether particles were distinguishable.

    But if I read your reply correctly, you seem to be saying he got the entropy formula wrong.

    My understanding is that he did not: Neff was alluding to Gibbs paradox and the entropy formula used to resolve it.

    That Wiki article does explain how Jaynes would interpret the formula subjectively; I’d prefer using the term “epistemic” rather than “subjective”.
    Whether Jaynes would accept that distinction is a vexed question, according to what I have read; some think his opinion changed and that in his later writing he thought there was an objective, epistemic view, whereas in his earlier writing he did not clearly make the distinction between subjective/objective and epistemic.

  15. keiths: Entropy is a measure of missing knowledge, not of energy dispersal.

    “Missing knowledge” is an oxymoron. 🙂

  16. fifthmonarchyman: Can you provide a comprehensive and noncontroversial list of those norms?

    I was not familiar with that term, but it seems a good start.

    So now let me know why it is not, preferably without re-opening any arguments about epistemology. Having lurked during your exchanges with walto et al on the nature of knowledge, I have no interest in and won’t respond to arguments of that nature.

  17. BruceS: …some think his opinion changed and that in his later writing he thought there was an objective, epistemic view, whereas in his earlier writing he did not clearly make the distinction between subjective/objective and epistemic.

    I think I have a book by Jaynes in which he argues it is objective, which probably led to my disagreement with keiths over that (along with the books by Arieh Ben-Naim).

    I can look it up if you are interested.

  18. Mung: I think I have a book by Jaynes in which he argues it is objective, which probably led to my disagreement with keiths over that (along with the books by Arieh Ben-Naim).

    I can look it up if you are interested.

    You are right; that is the considered view of probability Jaynes held, at least according to the secondary sources I’ve consulted, like section 3.3.6 of the Frigg paper I linked earlier and also Probability in Statistical Physics. For now, I’m sticking with secondary but philosophically informed sources like those rather than trying to delve into interpreting Jaynes directly.

    But let me return the favor (perhaps) on the topic of books that might interest you: Other Worlds: Spirituality and the Search for Invisible Dimensions had a recent New Books Network interview. (ETA: I liked the author but the interviewer, not so much. I have not read or bought the book. Probably a wait-for-library one for me).

  19. BruceS: So now let me know why it is not, preferably without re-opening any arguments about epistemology.

    I don’t know that it’s not.

    I really was just interested in what you meant by the term “scientific norms” and if this was a well established idea.

    You mention that it’s a good place to start. Are you implying that these “scientific norms” are not yet fully codified?

    I do think there are some issues with Mertonian Ethos.

    For example, who determines if a particular scientist is conforming to them, and how do we keep subconscious bias from intruding into our efforts without our knowledge?

    BruceS: I have no interest in and won’t respond to arguments of that nature

    It’s a good thing I don’t offer arguments of that nature then 😉

    peace

  20. Alan Fox:
    BruceS,

    Excellent comments, Bruce! You are almost making sense to me, and I’m incorrigible.

    I hope you are enjoying your new-found leisure time, Alan. And the World Cup (assuming you are cheering for England, though I don’t believe you live there).

  21. BruceS,

    I find football a bit dull but I must start to take notice now England are in the semi-finals. Can’t help but know the last four also include France (adopted home and I currently get banter in almost any encounter with local populace) and Belgium as there are many Belgian immigrants, second-homers and retirees locally. Had to google to find there’s one semifinal between Russia and Croatia to play.

    An England vs France final would be entertaining. I might even watch!

    ETA remove insult to Belgium

  22. Alan Fox:
    BruceS,

    I find football a bit dull but I must start to take notice now England are in the semi-finals. Can’t help but know the last four also include France (adopted home and I currently get banter in almost any encounter with local populace) and Belgium as there are many Belgian immigrants, second-homers and retirees locally. Had to google to find there’s one semifinal between Russia and Croatia to play.

    An England vs France final would be entertaining. I might even watch!

    ETA remove insult to Belgium

    My son created a list of the best floppers at World Cup 2018. None of the English or Belgian players made the list, but three Croatians did, and so did three French players…

    Neymar Jr. has been crowned before the WC ended…

  23. Bruce,

    Yes, I do agree that Neff was stretching (so to speak) “dispersal” to answer that scenario by making volume dependent on whether particles were distinguishable.

    His name is actually Leff, not Neff. It’s my fault — I misidentified him as “Neff” above.

    Leff is pulling a bait-and-switch. What disperses in his example is not energy, but rather the “black” and “white” particles. Entropy increases, but energy does not disperse. Therefore entropy cannot be a measure of energy dispersal.

    But if I read your reply correctly, you seem to be saying he got the entropy formula wrong.

    No, the formula is correct. The problem is that Leff cannot explain, using the energy dispersal interpretation, why the entropy increase is nonzero for distinguishable particles but zero for indistinguishable ones.

    The energy dispersal interpretation fails, but the missing information interpretation handles this scenario with no difficulty.

  24. Bruce:

    I would say that, furthermore, one human observer can be correct and another human observer wrong, where standard of correctness is conformance with predictions and norms of the science which is appropriate for the type of prediction being made that.

    I’m not sure if you agree with that.

    An observer can certainly screw up an entropy calculation — for instance, by using the measuring equipment incorrectly, or by plugging degrees Celsius into a formula that requires kelvins — and in that case, of course, the resulting entropy number won’t be correct.

    Entropy is observer-dependent and objective, but that doesn’t mean that any number an observer happens to come up with will qualify as objective.

  25. Bruce,

    That Wiki article does explain how Jaynes would interpret the formula subjectively; I’d prefer using the term “epistemic” rather than “subjective”.

    I agree. That’s why I say that entropy is “observer-dependent but objective.”

    Whether Jaynes would accept that distinction is a vexed question, according to what I have read; some think his opinion changed and that in his later writing he thought there was an objective, epistemic view, whereas in his earlier writing he did not clearly make the distinction between subjective/objective and epistemic.

    I addressed that in the earlier thread in an exchange with Mung:

    keiths:

    Entropy is observer-dependent but not subjective.

    Mung:

    It’s subjective in the sense that it is observer-dependent.

    keiths:

    Calling it ‘subjective’ just invites misunderstanding, which is exactly what Jaynes experienced:

    The reactions of some readers to my use of the word ‘subjective’ in these articles was astonishing….There is something patently ridiculous in the sight of a grown man recoiling in horror from something so harmless as a three-syllable word. ‘Subjective’ must surely be the most effective scare word yet invented. Yet it was used in what still seems a valid sense: ‘depending on the observer’.

    Jaynes could have avoided the problem by referring to entropy as “observer-dependent but objective”.

    Fleshing that out, it means that once the parameters of the macrostate have been selected, the rest of the calculation proceeds objectively. In other words, two observers who specify the macrostate in the same way should get the same value for entropy, assuming that their measurements are accurate and their calculations are correct.

  26. Mung:

    “Missing knowledge” is an oxymoron. 🙂

    I was watching a lecture on knot theory the other day, in which the professor explained that the simplest knot — a loop — is known as “the unknot”.

    I immediately thought of you. Had you been watching that lecture, I predict you would have wasted all of your time obsessing over the apparent contradiction: “How can the unknot be a knot? It’s an unknot, not a knot!”

    Meanwhile, the brighter folks would register the apparent contradiction, recognize its unimportance, and proceed to spend the rest of the lecture learning about knot theory.

  27. Mung, to Bruce:

    I think I have a book by Jaynes in which he argues it is objective, which probably led to my disagreement with keiths over that (along with the books by Arieh Ben-Naim).

    It shouldn’t have led to your disagreement, since I too hold that entropy is objective. Observer-dependent but objective.

    Jaynes makes the same point:

    From this we see that entropy is an anthropomorphic concept, not only in the well-known statistical sense that it measures the extent of human ignorance as to the microstate. Even at the purely phenomenological level, entropy is an anthropomorphic concept. For it is a property, not [solely] of the physical system, but of the particular experiments you or I choose to perform on it.

    Entropy is observer-dependent.

  28. fifth:

    I do think there are some issues with Mertonian Ethos.

    For example, who determines if a particular scientist is conforming to them, and how do we keep subconscious bias from intruding into our efforts without our knowledge?

    That’s not a problem with the Mertonian norms. That’s a problem with the humans who are trying to conform to them.

  29. fifthmonarchyman: I don’t know that it’s not.

    You are right, it is at least partly an issue of epistemic virtues in science.

    I really was just interested in what you meant by the term “scientific norms” and if this was a well established idea.

    These norms are studied in philosophy of science and sociology of science, eg in understanding “best” in “inference to best explanation”, in the demarcation problem, in understanding rationality in science (eg later Kuhn). Sometimes they are called “values” or “virtues”.

    You mention that it’s a good place to start are you implying that these “scientific norms” are not yet fully codified?

    I don’t think the scientific norms can ever be “fully codified”. For one thing, they will change in time. For another, the list of applicable norms and their priority depends on the particular circumstance. Of course, this approach raises more “who decides” issues. I have a common answer for all in my next note.

    My approach to these issues relies partly on some variation of coherentism as expressed, eg, in Neurath’s boat analogy.

    I think the Mertonian norms apply to any process which claims objectivity, not just the process in science. For me, the meaning of “objectivity” in “scientific objectivity” is based on following a process subject to those Mertonian norms. Objectivity is mandatory in science; complying with the norms is how to achieve it.

    To get a full set of the norms particular to science, one has to add the following: falsifiability; accuracy of predictions; unification, meaning consistency with nearby scientific domains (eg as with psychology and neuroscience) but also with physics; simplicity of theories; fruitfulness of theories for ongoing research; wide scope of theories to accommodate facts beyond those directly explained by the theory; and likely others. Not everyone agrees on the priority of these. Not all of them are applicable in a given circumstance. Who decides? See next note.

  30. fifthmonarchyman:

    I do think there are some issues with Mertonian Ethos.

    how do we keep subconscious bias from intruding into our efforts without our knowledge?

    Follow the objective process according to Mertonian norms. Those norms mean science is a community process, not an individual process, so individual bias gets addressed.

    who determines if a particular scientist is conforming to them

    First note that following the norms is different from being correct. Some theories can be judged as scientific but wrong. Same deciders for both questions: is it science? If so, is it correct?

    The first deciders are the community of scientists working on the research program in the relevant domain of science. They are the primary deciders.

    Secondary deciders include: scientists in the domain who are not part of the research program, scientists in domains with related expertise, philosophers with expertise in that domain, statisticians if the experiments or theories involve statistical methods, engineers making technology based on that science, and intellectual descendants of any of those groups (who may change the decision over time).

    Sometimes there is no consensus on whether something is science, or if it is science, whether it is correct. As current examples, consider multiverses or string theory.

    There are people who carry on with what they think are valid scientific research programs but which are not. Cold fusion and some quantum-based explanations of consciousness are examples. Sometimes the issues with these groups are obvious: they do not follow the objectivity norms or they clearly violate a scientific norm like unification. If not, I look at the secondary deciders to ascertain whether these ideas follow the norms of science and, if so, whether they are correct.

    I do not consider the general public to be deciders. People do decide (through their government) whether they want to pay for science. They also decide whether scientific theories should overrule other values and beliefs in their lives. But they do not decide whether something is science or whether it is correct science.

    I am certainly in this non-decider category. I rely on understanding the consensus of deciders of all types, or, if there is no consensus, on understanding the reasons why.

    How does one get into the community of deciders/scientists? Both by formal training and by following an apprenticeship. It usually includes a PhD and postdoc work with an “apprentice master” who is an existing member of the relevant domain and possibly of a specific research program.

    You may ask how I can justify the intellectual inbreeding of such an apprenticeship approach. The answer is I see science as successful in meeting its goals and I see these processes as an essential part of that success. The reason for this is that they are maintained by working scientists themselves: successful practitioners have the best ideas of why they are successful.

  31. keiths:

    Calling it ‘subjective’ just invites misunderstanding, which is exactly what Jaynes experienced:

    The reactions of some readers to my use of the word ‘subjective’ in these articles was astonishing….There is something patently ridiculous in the sight of a grown man recoiling in horror from something so harmless as a three-syllable word. ‘Subjective’ must surely be the most effective scare word yet invented. Yet it was used in what still seems a valid sense: ‘depending on the observer’.

    Jaynes could have avoided the problem by referring to entropy as “observer-dependent but objective”.

    Nice.

  32. keiths: Entropy is a measure of missing knowledge, not of energy dispersal.

    I have a picture in my mind of keiths going around trying to measure things that are not there.

  33. keiths: Mung:

    It’s subjective in the sense that it is observer-dependent.

    Hilarious. So you were trying to gin up a controversy where none existed?

    keiths: So you’re still unable to justify your rejection of the observer-dependence of entropy?

  34. Mung,

    I have a picture in my mind of keiths going around trying to measure things that are not there.

    Brighter people have no trouble with the concept. How many eggs are missing from this carton, Mung?

  35. Your confusion on this very simple matter suggests a large amount of missing intelligence.

  36. Mung,

    Hilarious. So you were trying to gin up a controversy where none existed?

    Um, no. I was (and still am) trying to correct your chronic confusion regarding the observer-dependence of entropy:

    keiths:

    So you’re still unable to justify your rejection of the observer-dependence of entropy?

    Mung:

    That is correct, as far as it goes. So I’m not going to argue with you about it.

  37. Bruce,

    For example, here is a paper that argues that the spreading metaphor is best, assuming that you want an explanation of entropy that best incorporates energy, space, and time.

    Leff erroneously criticizes the missing information interpretation of entropy for not explicitly involving energy:

    Despite its intimate connection with energy, entropy has been described as a measure of disorder, multiplicity, missing information, freedom, mixed-up-ness, and the like—none of which involves energy explicitly.

    Lambert makes a similar criticism, but both Leff and Lambert are incorrect. While it’s true that the missing information interpretation leads to an entropy that is not expressed in terms of energy, that’s actually a feature, not a bug.

    The natural unit of all entropies, including thermodynamic entropy, is the bit. Energy doesn’t make an appearance. What puts the “thermodynamic” in “thermodynamic entropy” is the fact that the epistemic probability distribution from which the entropy is derived is a distribution over possible thermodynamic microstates.

    Similarly, card deck entropy is expressed in bits. Cards and decks don’t make an appearance. What puts the “card deck” into “card deck entropy” is the fact that the epistemic probability distribution from which the entropy is derived is a distribution over possible card deck orderings.

    Paradoxically, then, the fact that the energy dispersal interpretation explicitly invokes energy actually proves that thermodynamic entropy is not a measure of energy dispersal. Any proper entropy will have units of bits, not those of energy dispersal.
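
    As a toy illustration of “card deck entropy” in bits — a sketch assuming a uniform epistemic distribution over the 52! possible orderings:

    ```python
    import math

    # Missing information about a thoroughly shuffled 52-card deck, assuming all
    # 52! orderings are equally likely to the observer: S = log2(52!) bits.
    deck_entropy_bits = math.log2(math.factorial(52))
    print(round(deck_entropy_bits, 1))  # ~225.6 bits
    ```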

  38. “Wait a minute,” you might object. “If the natural unit of thermodynamic entropy is the bit, why are thermodynamic entropies typically expressed in joules per kelvin?”

    I addressed that issue in the earlier thread:

    walto,

    As for Lambert’s objection, you’ve already quoted the refutation. You just didn’t realize it.

    Lambert’s complaint is that the equation for thermodynamic entropy includes Boltzmann’s constant (kB), while the equation for informational entropy does not. He thinks that the informationists are therefore cheating by bringing kB into the equation:

    Arbitrarily replacing k by kB — rather than have it arise from thermodynamic necessity via Boltzmann’s probability of random molecular motion — does violence to thermodynamics. The set of conditions for the use or for the meaning of kB, of R with N, are nowhere present in information theory. Thus, conclusions drawn from the facile substitution of kB for k, without any discussion of the spontaneity of change in a thermodynamic system (compared to the hundreds of information “entropies”) and the source of that spontaneity (the ceaseless motion of atoms and molecules) are doomed to result in confusion and error. There is no justification for this attempt to inject the overt subjectivity of “disorder” from communications into thermodynamic entropy.

    That’s nonsense. The only reason kB even appears in the thermodynamic entropy equation is because of the choice of units. I explained that earlier in the thread:

    The fact that it’s [thermodynamic entropy is] usually expressed in units of joules per kelvin (J/K) is an accident of history, due to the definition of the kelvin as a base unit. Had the kelvin been defined in terms of energy, joules in the denominator would have cancelled out joules in the numerator and the clunky J/K notation would be unnecessary.

    Entropy — including thermodynamic entropy — really is best expressed in terms of bits (or nats, trits, hartleys, or other units of information). Entropy is a measure of missing information, after all.
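
    Here is a minimal sketch of that unit conversion, assuming only the Boltzmann relation S = kB ln W (the helper function is mine, for illustration):

    ```python
    import math

    k_B = 1.380649e-23  # Boltzmann's constant, J/K

    def joules_per_kelvin_to_bits(s_jk):
        """Convert a thermodynamic entropy from J/K to bits of missing information.

        S = k_B * ln(W) in J/K and S = log2(W) in bits describe the same microstate
        count W, so the conversion factor is 1 / (k_B * ln 2).
        """
        return s_jk / (k_B * math.log(2))

    print(joules_per_kelvin_to_bits(1.0))  # ~1.0e23 bits: one J/K is a huge number of bits
    ```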

  39. Leff and Lambert both correctly criticize the disorder interpretation of entropy by pointing to instances in which it fails. The irony is that they fail to realize that their own preferred interpretation — the energy dispersal interpretation — also fails in certain cases. By their own reasoning, it too should be abandoned.

    I’ve already discussed why the energy dispersal interpretation fails in the isothermal gas mixing example. Another of my favorite examples is provided by John Denker:

    keiths:

    We’ve mostly been talking about gases so far. Here’s a solid-state example of why entropy is not a measure of energy dispersal, from John Denker of Bell Labs:

    As another example, consider two counter-rotating flywheels. In particular, imagine that these flywheels are annular in shape, i.e. hoops, as shown in figure 9.7, so that to a good approximation, all the mass is at the rim, and every bit of mass is moving at the same speed. Also imagine that they are stacked on the same axis. Now let the two wheels rub together, so that friction causes them to slow down and heat up. Entropy has been produced, but the energy has not become more spread-out in space. To a first approximation, the energy was everywhere to begin with and everywhere afterward, so there is no change.

    If we look more closely, we find that as the entropy increased, the energy dispersal actually decreased slightly. That is, the energy became slightly less evenly distributed in space. Under the initial conditions, the macroscopic rotational mechanical energy was evenly distributed, and the microscopic forms of energy were evenly distributed on a macroscopic scale, plus or minus small local thermal fluctuations. Afterward, all the energy is in the microscopic forms. It is still evenly distributed on a macroscopic scale, plus or minus thermal fluctuations, but the thermal fluctuations are now larger because the temperature is higher. Let’s be clear: If we ignore thermal fluctuations, the increase in entropy was accompanied by no change in the spatial distribution of energy, while if we include the fluctuations, the increase in entropy was accompanied by less even dispersal of the energy.

    He’s right, and Sal is wrong. Again.

  40. keiths,

    My original reply was in response to Walt’s query about what textbooks say. My point was that to answer the question “What is Entropy” one needs to know the context for the desired answer (eg the course for the textbook). If the context is teaching the concept to undergrad chemists, maybe the energy spreading idea might be the best way for them to think about it, at least initially. I have no opinion on whether that is the case.

    In my experience, Jaynes’s stuff comes up in philosophy of science in discussions of whether thermodynamics is reducible to SM and, if so, how. Definitely no “energy spreading” idea in those discussions. Instead, those discussions pit Jaynes’s pure-information approach against the Boltzmann and Gibbs approaches, which are based on the physics of the thermodynamic microstates. See the book and paper I linked earlier for more details. Message me if you want a pdf of the book.

    (ETA typos)

  41. Bruce,

    I understand what you were trying to do. It’s just that the answer you gave is incorrect.

    My point was that to answer the question “What is Entropy” one needs to know the context for the desired answer (eg the course for the textbook).

    That’s not right. There are no contexts in which entropy is a measure of energy dispersal. Lambert and Leff mean well, but they are wrong. By lobbying textbook authors to include the energy spreading interpretation of entropy, Lambert has inadvertently done a great disservice.

    Is an increase in entropy associated with energy spreading? In some cases, yes, but not in others. Therefore entropy cannot be a measure of energy dispersal.

    Is an increase in entropy associated with an increase in disorder? In some cases, yes, but not in others. Therefore entropy cannot be a measure of disorder.

    Entropy is a measure of missing information — the additional information that would be required to specify the exact microstate of a system given that you only know its macrostate.

  42. keiths: Um, no. I was (and still am) trying to correct your chronic confusion regarding the observer-dependence of entropy:

    And to that end you quote me saying that it’s observer-dependent. Isn’t that what you also believe? Perhaps you should clear up your own chronic confusion first.

  43. keiths:

    Um, no. I was (and still am) trying to correct your chronic confusion regarding the observer-dependence of entropy:

    Mung:

    And to that end you quote me saying that it’s observer-dependent. Isn’t that what you also believe? Perhaps you should clear up your own chronic confusion first.

    The confusion is yours. Like walto, you struggle with this subject, to the point that you can’t even keep your own position straight.

    In that quote, you correctly acknowledge that entropy is observer-dependent, but incorrectly claim that that makes it subjective. Elsewhere you have disputed entropy’s observer-dependence, and you confirmed that just two days ago:

    keiths:

    So you’re still unable to justify your rejection of the observer-dependence of entropy?

    Mung:

    That is correct, as far as it goes. So I’m not going to argue with you about it. 🙂

    You’re unable to keep your position straight. That’s confusion.

  44. Bruce,

    Just to drive the point home, here are six reasons why dispersalism cannot be correct, all taken from the earlier thread:

    1. Entropy has the wrong units. Dispersalists and informationists agree that the units of thermodynamic entropy are joules per kelvin (in the SI system) and bits or nats in any system of units where temperature is defined in terms of energy per particle. It’s just that the dispersalists fail to notice that those are the wrong units for expressing energy dispersal.

    2. You cannot figure out the change in energy dispersal from the change in entropy alone. If entropy were a measure of energy dispersal, you’d be able to do that.

    3. The exact same ΔS (change in entropy) value can correspond to different ΔD (change in dispersal) values. They aren’t the same thing. Entropy is not a measure of energy dispersal.

    4. Entropy can change when there is no change in energy dispersal at all. We’ve talked about a simple mixing case where this happens. If entropy changes in a case where energy dispersal does not change, then they aren’t the same thing.

    5. Entropy change in the gas mixing case depends on the distinguishability of particles — the fact that the observer can tell ideal gas A from ideal gas B. Yet the underlying physics does not “care” about distinguishability — the physics is the same whether or not they are distinguishable to the observer. If the physics is the same, then energy dispersal is the same.

    The entropy change depends on distinguishability, so it cannot be a measure of energy dispersal.

    6. Entropy depends on the choice of macrostate. Energy dispersal does not.

    The way energy disperses in a system is dependent on the sequence of microstates it “visits” in the phase space. That sequence depends only on the physics of the system, not on the choice of macrostate by the observer.

    In the Xavier/Yolanda example, Yolanda possesses equipment that allows her to distinguish between the two isotopes of gas X, which we called X0 and X1.

    If she chooses to use her equipment, she comes up with a different macrostate than Xavier and calculates a different entropy value. If she declines to use the equipment, she and Xavier come up with the same macrostate and get the same entropy value. Her choice has no effect on the actual physics, and thus no effect on the actual energy dispersal. Yet her choice does affect the entropy value she measures, and profoundly so.

    She is not measuring energy dispersal. She is measuring the information gap between her macrostate and the actual microstate.
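
    Here is a toy sketch of the Xavier/Yolanda asymmetry (the particle count and the helper function are illustrative, not from the thread):

    ```python
    import math

    k_B = 1.380649e-23  # J/K
    N = 6.022e23        # particles of each isotope (one mole apiece), for concreteness

    def entropy_change_on_mixing(resolves_isotopes):
        """Entropy change, for a given observer, when the partition between two equal
        volumes (N particles of isotope X0 on the left, N of X1 on the right) is removed."""
        if resolves_isotopes:
            # Yolanda's macrostate tracks each isotope separately; each species
            # doubles its accessible volume, contributing N * k_B * ln 2.
            return 2 * N * k_B * math.log(2)
        # Xavier's macrostate lumps both isotopes together as "gas X" at unchanged
        # density and temperature, so his missing information does not change.
        return 0.0

    print(entropy_change_on_mixing(resolves_isotopes=False))  # Xavier:  0.0 J/K
    print(entropy_change_on_mixing(resolves_isotopes=True))   # Yolanda: ~11.5 J/K
    ```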

  45. I’ll post some other relevant comments from the earlier thread:

    Joe,

    Some final comments on the gas-mixing examples.

    My conclusion was

    So there’s no difference in energy dispersal between Case 1 and Case 2, but there is a difference in entropy. They can’t be the same thing.

    Entropy is not a measure of energy dispersal.

    This raises a question. Lambert is aware of the issue; he knows that entropy increases when the gases are distinguishable, but not otherwise. How does he reconcile this with the energy dispersal view of entropy? If the change in dispersal is the same in Case 1 and Case 2 — zero — then how can Lambert continue to believe that entropy is a measure of dispersal?

    Here’s his rationalization:

    The motional energy of the molecules of each component is more dispersed in a solution than is the motional energy of those molecules in the component’s pure state.

    [Emphasis added]

    That’s true, but notice that he has equivocated. Before, he claimed that entropy was just a measure of energy dispersal…

    Entropy change is measured by the dispersal of energy: how much is spread out in a process, or how widely dispersed it becomes — always at a specific temperature.

    …and now he’s saying that the entropy is a measure of the dispersal of energy of each component, considered separately and then combined.

    If that were the case, we should be able to designate any two components, measure the change in their individual energy dispersals, and add them together to get the entropy increase for the entire system.

    It doesn’t work.

    Let’s say that before the partition is removed, we designate the molecules on the left side as one component and the molecules on the right side as the other component. After the partition is removed, the molecules from each component spread to the other side. Their energy is dispersed. Therefore, the system’s entropy must have increased, right? Wrong. If the two components consist of molecules of the same gas — that’s Case 2 — then there is no entropy increase, as everyone, including Lambert, agrees. It’s only when the gases are distinguishable — Case 1 — that entropy increases.

    This immediately creates problems for Lambert and the dispersalists. I’ll address those in my next comment.

  46. And:

    You might ask “What’s the problem with taking ‘components’ to mean ‘distinguishable components’?”

    1. In ‘deciding’ whether energy should disperse, the laws of physics don’t ‘pay attention’ to whether the gases are distinguishable, any more than they pay attention to the numbers on a pair of colliding billiard balls. Physical energy dispersal does not care about distinguishability.

    2. What does ‘distinguishable’ really mean? Distinguishable to whom? In my thought experiment, Yolanda could distinguish between the two isotopes — the two components — because she had the right equipment. Lacking that equipment, Xavier couldn’t tell the difference. It was all the same ‘component’ to him.

    Lambert offers no principled criterion for determining whether a component is ‘really’ distinguishable, as opposed to being distinguishable to this or that observer.

    3. If he tried to argue that ‘distinguishable’ just means ‘distinguishable in principle’, then he would run into the ‘Damon the Demon problem’. A Laplacean demon like Damon can not only distinguish one gas from another or one isotope from another — he can distinguish every molecule from every other, because each molecule has a distinct position and momentum. Needless to say, entropy calculations based on the distinguishability of every particle will not give the same results that physicists and chemists get for Case 1 and Case 2.

    The dispersalist view of entropy just doesn’t work.
