Ever since the implications of quantum entanglement between particles became unavoidable for physicists and cosmologists, doubts about the accuracy or completeness of Einstein’s special and general theories of relativity have become real… Einstein himself called quantum entanglement “spooky action at a distance” because the possibility of faster-than-light transfer of information between two entangled particles (no matter the distance between them) would violate relativity and the foundations of one of the most successful theories in science…
Recently, however, several experiments have confirmed that entanglement is not only real but also seems to violate relativity.
The first experiment provided a lower bound on the speed of entanglement, which was measured to be at least 10,000 times the speed of light.
In the second experiment, scientists were able to send data via quantum entanglement over a distance of 1,200 km. The next OP will be on this theme…
Quantum entanglement is a phenomenon in quantum physics in which two particles, such as photons or electrons, become entangled: their quantum states, or properties, become interdependent. Any change to a property of one entangled particle instantaneously (or at least faster than the speed of light) affects the other. Einstein believed that the exchange of information faster than the speed of light would create paradoxes, such as sending information into the past. That was one of the reasons Einstein and many other physicists rejected quantum mechanics as either incomplete or false. And yet, to this day, no experiment has ever contradicted any of the predictions of QM.
As the experiments clearly show, the speed of entanglement is at least 10,000 times the speed of light, and if that is the case, then entanglement violates relativity, since quantum information about the state of one entangled particle instantaneously affects the other entangled particle…
So, if that is true, as it clearly appears to be, why didn’t we hear about it on the news?
What I would like to do with this OP is to get everyone involved to state their opinions or provide facts as to why this news has not been widely spread or accepted…
As most of you probably suspect, I have my own theory about it…Yes, just a theory…for now… 😉
BTW: I love quantum mechanics…
As Steven Weinberg once said: <strong><i>“Once you learn quantum mechanics you are really never the same again…”</i></strong>
Mung,
So you’re still unable to justify your rejection of the observer-dependence of entropy?
keiths:
Thanks. That’s my sense as well. There were a number of textbooks cited in the last couple of threads on this. I doubt there’s been some universal conversion to one interpretation since then, but I could be wrong.
What I see as missing from this discussion is consideration of the context of the questions about the nature of entropy. For example, here is a paper that argues that the spreading metaphor is best, assuming that you want an explanation of entropy that best incorporates energy, space, and time. That would make sense in textbooks explaining entropy to chemists, for example, but perhaps not in textbooks for other domains, or in popularizations that want to stick with counts and macrostates for simplicity.
On the other hand, if one is looking to explain entropy in a way that generalizes to many domains (counting, energy/thermodynamic, information, quantum information), then the formula
entropy = −k Σᵢ p(i) log p(i)
works for all of them, as long as one sets k and the probabilities p(i) appropriately (though the formula does need to be generalized for the quantum case).
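To make the generality concrete, here is a minimal sketch of that formula in Python. This is my own illustration, not anything from the thread; the function name and example distributions are assumptions chosen for demonstration:

```python
import math

def entropy(probs, k=1.0, base=math.e):
    """Generalized entropy: -k * sum(p * log p), skipping zero-probability terms."""
    return -k * sum(p * math.log(p, base) for p in probs if p > 0)

# Information entropy of a fair coin, in bits (k = 1, log base 2):
coin = entropy([0.5, 0.5], k=1.0, base=2)

# A uniform distribution over W microstates reduces to
# Boltzmann's S = k log W (here with k = 1, natural log):
W = 6
uniform = entropy([1.0 / W] * W)  # equals log(6)
```

Changing `k` to Boltzmann's constant and using thermodynamic microstate probabilities gives thermodynamic entropy; setting `k = 1` and `base = 2` gives Shannon entropy in bits.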
But there is still controversy. The nature of the probabilities in this formula is controversial in statistical mechanics and in science in general. Are probabilities ontic (part of the world) or epistemic (reflecting only our limited knowledge)? Also, are they objective or subjective? (That is a separate issue from ontic versus epistemic.)
I think the objective versus subjective issue is a part of what motivates the differences in view in some of the previous exchanges in TSZ on explaining entropy.
For the ontic case, I believe the probabilities are always objective. But that is not true for the epistemic case, for here we can separate the probabilities of an ideal, rational, fully scientifically informed human observer from those of a person without those qualities. The first are objective, the second subjective.
And, in a different way, we can also separate the entropy according to a demon who could only perceive and conceive of the world in microstates from that of a human observer who perceives and conceives of the world first in macrostates.
That’s enough for this post. More in next.
BruceS,
Let me continue my post on whether we should consider the probabilities in the entropy formula to be objective or subjective, assuming they are epistemic.
Consider first the case of a non-ideal human observer who is nonetheless partly familiar with the calculation of entropy. My view is that such non-ideal observers can get the estimate of entropy objectively wrong because they would make incorrect predictions about the world based on their estimate of entropy. For example, the person who did not understand the entropy (and enthalpy) of protein folding would not predict what happens correctly. Or, for an example closer to the heart of posters at TSZ, the person who did not understand the role of the sun in entropy on the earth would make incorrect predictions about possible changes in the complexity of life over time.
So I say epistemic probabilities can be objective if we accept a standard of correctness as the marker of objectivity. Since entropy is a scientific concept, that standard is accurate empirical prediction (ETA: and adherence to the norms of science). The ideal rational observer makes such predictions. Or, if you are uncomfortable with how to define “ideal”, then take best current science, although correctness then has to allow for the fallibility of science. The point is that the probabilities in the entropy formula can be both epistemic and objective.
What about the demon who only observed and conceived of the world as a series of microstates, that is, the exact position and momentum of each molecule? In this case, I do agree that the objective calculation of entropy for the ideal demon would differ from the ideal human scientist’s. But the demon’s predictions would also differ from the scientist’s, since the demon would base those predictions on a different set of conceptions.
Bruce,
That paper came up in the earlier thread, where I pointed out a flaw in its handling of the entropy of mixing. More on that particular flaw later.
If, like Neff, you want “an explanation of entropy that best incorporates energy, space, and time”, you need to
a) look at the available, correct explanations of entropy, and
b) pick the one that “best incorporates energy, space, and time”.
The problem is that the energy dispersal explanation fails the correctness test. It’s a false explanation.
Now, you might argue that for certain purposes — when teaching beginners, for instance — the false explanation of entropy is better as a stepping stone than a true explanation would be, just as Newtonian physics is a better first stepping stone to a mastery of physics than quantum mechanics or general relativity would be.
That’s fine, but in this thread we are concerned about what entropy is in reality. It’s definitely not energy dispersal, so the energy spreading explanation fails the correctness test.
I have yet to see anyone come up with a scenario in which the missing information interpretation fails.
BruceS,
Excellent comments, Bruce! You are almost making sense to me, and I’m incorrigible.
Bruce,
Unless you reject the notion that a thermodynamic system is in a single microstate at any given moment, the probabilities we are concerned with are purely epistemic. The ontic probability (which I referred to as the “metaphysical probability” in the earlier thread) is 1 for the actual microstate and 0 for all others, so using ontic probabilities would always give an entropy of zero, which isn’t useful.
Since the probabilities are epistemic, not ontic, they can vary among observers depending on the information possessed by each. That means that entropy is observer-dependent (though not subjective!) as well.
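As a toy illustration of that observer-dependence (my own numbers, not anything from the thread), compare the missing information of observers with different knowledge of the same system:

```python
import math

def missing_info_bits(probs):
    """Entropy, in bits, of a probability distribution over candidate microstates."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n_microstates = 1024  # hypothetical number of microstates compatible with the macrostate

# Ontic view: the system is in one definite microstate, with probability 1.
ontic = missing_info_bits([1.0])  # 0 bits -- not useful as an entropy

# Epistemic view, observer who knows only the macrostate:
# uniform distribution over all compatible microstates.
coarse = missing_info_bits([1 / n_microstates] * n_microstates)  # log2(1024) = 10 bits

# A better-informed observer who has narrowed it down to 16 candidates:
informed = missing_info_bits([1 / 16] * 16)  # log2(16) = 4 bits
```

The ontic calculation always gives zero, while the epistemic entropy varies with each observer's information, which is exactly the observer-dependence being argued for here.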
Bruce,
Right. And they can remain epistemic and objective even when they differ from observer to observer.
Now let me explain Neff’s error regarding the entropy of mixing.
Sal quoted Neff as follows:
I responded:
I would say that, furthermore, one human observer can be correct and another wrong, where the standard of correctness is conformance with the predictions and norms of the science appropriate for the type of prediction being made.
I’m not sure if you agree with that.
Can you provide a comprehensive and noncontroversial list of those norms?
Are you referring to the Mertonian Ethos?
Thanks
peace
That is correct, as far as it goes. So I’m not going to argue with you about it. 🙂
OK, but as far as Jaki is concerned, Jaki does not use Gödel to argue that a ToE is impossible. Jaki says a ToE is possible.
I found it a bit odd how Barrow divided people into optimists and pessimists and put Jaki on the pessimist side. Given Barrow’s criterion for what makes one an optimist, I would put Jaki on the optimist side.
All of this is really neither here nor there, though. I’m not trying to argue with you. Welcome back, btw.
I will speculate that more are adopting the information-theory approach. I’ll see if I can dig up some of the ones I did find.
Yes, I do agree that Neff was stretching (so to speak) “dispersal” to answer that scenario by making volume dependent on whether particles were distinguishable.
But if I read your reply correctly, you seem to be saying he got the entropy formula wrong.
My understanding is that he did not: Neff was alluding to the Gibbs paradox and the entropy formula used to resolve it.
That Wiki article does explain how Jaynes would interpret the formula subjectively; I’d prefer using the term “epistemic” rather than “subjective”.
Whether Jaynes would accept that distinction is a vexed question, according to what I have read; some think his opinion changed, and that in his later writing he allowed for an objective, epistemic view, whereas in his earlier writing he did not clearly distinguish subjective/objective from epistemic.
Some very nice posts, thanks.
“Missing knowledge” is an oxymoron. 🙂
I was not familiar with that term, but it seems a good start.
So now let me know why it is not, preferably without re-opening any arguments about epistemology. Having lurked during your exchanges with walto et al. on the nature of knowledge, I have no interest in arguments of that nature and won’t respond to them.
I think I have a book by Jaynes in which he argues it is objective, which probably led to my disagreement with keiths over that (along with the books by Arieh Ben-Naim).
I can look it up if you are interested.
You are right; that is the considered view of probability Jaynes held, at least according to the secondary sources I’ve consulted, like section 3.3.6 of the Frigg paper I linked earlier and also Probability in Statistical Physics. For now, I’m sticking with secondary but philosophically informed sources like those rather than trying to interpret Jaynes directly.
But let me return the favor (perhaps) on the topic of books that might interest you: Other Worlds: Spirituality and the Search for Invisible Dimensions had a recent New Books Network interview. (ETA: I liked the author, but the interviewer not so much. I have not read or bought the book. Probably a wait-for-library one for me.)
I don’t know that it’s not.
I really was just interested in what you meant by the term “scientific norms” and if this was a well established idea.
You mention that it’s a good place to start; are you implying that these “scientific norms” are not yet fully codified?
I do think there are some issues with Mertonian Ethos.
For example, who determines whether a particular scientist is conforming to them, and how do we keep subconscious bias from intruding into our efforts without our knowledge?
It’s a good thing I don’t offer arguments of that nature then 😉
peace
I hope you are enjoying your new-found leisure time, Alan. And the World Cup (assuming you are cheering for England, though I don’t believe you live there).
BruceS,
I find football a bit dull, but I must start to take notice now that England are in the semi-finals. I can’t help but know that the last four also include France (my adopted home, where I currently get banter in almost any encounter with the local populace) and Belgium, as there are many Belgian immigrants, second-homers and retirees locally. I had to google to find that there’s a semifinal between Russia and Croatia still to play.
An England vs France final would be entertaining. I might even watch!
ETA remove insult to Belgium
My son created a list of the best floppers at World Cup 2018. None of the English or Belgian players made the list, but three Croatians did, and so did three French players…
Neymar Jr. had been crowned before the WC even ended…
Bruce,
His name is actually Leff, not Neff. It’s my fault — I misidentified him as “Neff” above.
Leff is pulling a bait-and-switch. What disperses in his example is not energy, but rather the “black” and “white” particles. Entropy increases, but energy does not disperse. Therefore entropy cannot be a measure of energy dispersal.
No, the formula is correct. The problem is that Leff cannot explain, using the energy dispersal interpretation, why the entropy increase is nonzero for distinguishable particles but zero for indistinguishable ones.
The energy dispersal interpretation fails, but the missing information interpretation handles this scenario with no difficulty.
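Here is a toy sketch of why the two cases differ, from the missing-information standpoint. This is my own illustration, not Leff's calculation: positions are coarse-grained to just "left half" or "right half" of the container, and the function name and particle counts are assumptions:

```python
def mixing_entropy_bits(n_left, n_right, distinguishable):
    """Missing information gained when the partition between two equal
    volumes is removed (ideal-gas toy model; each particle's position is
    coarse-grained to left half vs. right half only)."""
    if not distinguishable:
        # Identical gases: after mixing, the macrostate is indistinguishable
        # from before, so no information about microstates is lost.
        # This is the resolution of the Gibbs paradox: Delta S = 0.
        return 0.0
    # Distinguishable ("black" and "white") particles: after mixing, each
    # particle could be in either half, so one bit of location information
    # per particle is lost. (Multiply by k_B ln 2 to convert to J/K.)
    return float(n_left + n_right)
```

With 100 particles on each side, the entropy increase is 200 bits for distinguishable particles and exactly zero for indistinguishable ones, even though the energy distribution is identical in both cases.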
Bruce:
An observer can certainly screw up an entropy calculation — for instance, by using the measuring equipment incorrectly, or by plugging degrees Celsius into a formula that requires kelvins — and in that case, of course, the resulting entropy number won’t be correct.
Entropy is observer-dependent and objective, but that doesn’t mean that any number an observer happens to come up with will qualify as objective.
Bruce,
I agree. That’s why I say that entropy is “observer-dependent but objective.”
I addressed that in the earlier thread in an exchange with Mung:
keiths:
Mung:
keiths:
Jaynes could have avoided the problem by referring to entropy as “observer-dependent but objective”.
Fleshing that out, it means that once the parameters of the macrostate have been selected, the rest of the calculation proceeds objectively. In other words, two observers who specify the macrostate in the same way should get the same value for entropy, assuming that their measurements are accurate and their calculations are correct.
Mung:
I was watching a lecture on knot theory the other day, in which the professor explained that the simplest knot — a loop — is known as “the unknot”.
I immediately thought of you. Had you been watching that lecture, I predict you would have wasted all of your time obsessing over the apparent contradiction: “How can the unknot be a knot? It’s an unknot, not a knot!”
Meanwhile, the brighter folks would register the apparent contradiction, recognize its unimportance, and proceed to spend the rest of the lecture learning about knot theory.
Mung, to Bruce:
It shouldn’t have led to your disagreement, since I too hold that entropy is objective. Observer-dependent but objective.
Jaynes makes the same point:
Entropy is observer-dependent.
fifth:
That’s not a problem with the Mertonian norms. That’s a problem with the humans who are trying to conform to them.
You are right; it is at least partly an issue of epistemic virtues in science.
These norms are studied in philosophy of science and sociology of science, e.g. in understanding “best” in “inference to the best explanation”, in the demarcation problem, and in understanding rationality in science (e.g. the later Kuhn). Sometimes they are called “values” or “virtues”.
I don’t think the scientific norms can ever be “fully codified”. For one thing, they will change in time. For another, the list of applicable norms and their priority depends on the particular circumstance. Of course, this approach raises more “who decides” issues. I have a common answer for all in my next note.
My approach to these issues relies partly on some variation of coherentism as expressed, e.g., in Neurath’s boat analogy.
I think the Mertonian norms apply to any process which claims objectivity, not just the process of science. For me, the meaning of “objectivity” in “scientific objectivity” is based on following a process subject to those Mertonian norms. Objectivity is mandatory in science; complying with the norms is how to achieve it.
To get a full set of the norms particular to science, one has to add the following: falsifiability; accuracy of predictions; unification, meaning consistency with nearby scientific domains (e.g. psychology with neuroscience) but also with physics; simplicity of theories; fruitfulness of theories for ongoing research; wide scope of theories to accommodate facts beyond those directly explained by the theory; and likely others. Not everyone agrees on the priority of these, and not all of them are applicable in a given circumstance. Who decides? See next note.
Follow the objective process according to Mertonian norms. Those norms mean science is a community process, not an individual process, so individual bias gets addressed.
First, note that following the norms is different from being correct. Some theories can be judged scientific but wrong. The same deciders answer both questions: is it science? And if so, is it correct?
The first deciders are the community of scientists working on the research program in the relevant domain of science. They are the primary deciders.
Secondary deciders include: scientists in the domain who are not part of the research program, scientists in domains with related expertise, philosophers with expertise in that domain, statisticians if the experiments or theories involve statistics, engineers making technology based on that science, and the intellectual descendants of any of those groups (who may change the decision over time).
Sometimes there is no consensus on whether something is science, or if it is science, whether it is correct. As current examples, consider multiverses or string theory.
There are people who carry on with what they think are valid scientific research programs but which are not. Cold fusion and some quantum-based explanations of consciousness are examples. Sometimes the issues with these groups are obvious: they do not follow the objectivity norms, or they clearly violate a scientific norm like unification. If not, I look to the secondary deciders to ascertain whether these ideas follow the norms of science and, if so, whether they are correct.
I do not consider the general public to be deciders. People do decide (through their government) whether they want to pay for science. They also decide whether scientific theories should overrule other values and beliefs in their lives. But they do not decide whether something is science or whether it is correct science.
I am certainly in this non-decider category. I rely on understanding the consensus of deciders of all types, or, if there is no consensus, on understanding the reasons why.
How does one get into the community of deciders/scientists? Both by formal training and by following an apprenticeship program. It usually includes a PhD and postdoc work with an “apprentice master” who is an existing member of the relevant domain and possibly of a specific research program.
You may ask how I can justify the intellectual inbreeding of such an apprenticeship approach. The answer is that I see science as successful in meeting its goals, and I see these processes as an essential part of that success, because they are maintained by working scientists themselves: successful practitioners have the best ideas of why they are successful.
Nice.
🙂
Let me know when I begin to haunt your dreams.
I have a picture in my mind of keiths going around trying to measure things that are not there.
Hilarious. So you were trying to gin up a controversy where none existed?
Mung,
Brighter people have no trouble with the concept. How many eggs are missing from this carton, Mung?
Your confusion on this very simple matter suggests a large amount of missing intelligence.
Mung,
Um, no. I was (and still am) trying to correct your chronic confusion regarding the observer-dependence of entropy:
keiths:
Mung:
Bruce,
Leff erroneously criticizes the missing information interpretation of entropy for not explicitly involving energy:
Lambert makes a similar criticism, but both Leff and Lambert are incorrect. While it’s true that the missing information interpretation leads to an entropy that is not expressed in terms of energy, that’s actually a feature, not a bug.
The natural unit of all entropies, including thermodynamic entropy, is the bit. Energy doesn’t make an appearance. What puts the “thermodynamic” in “thermodynamic entropy” is the fact that the epistemic probability distribution from which the entropy is derived is a distribution over possible thermodynamic microstates.
Similarly, card deck entropy is expressed in bits. Cards and decks don’t make an appearance. What puts the “card deck” into “card deck entropy” is the fact that the epistemic probability distribution from which the entropy is derived is a distribution over possible card deck orderings.
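For a well-shuffled deck, that missing information is just what it takes to pin down one ordering out of 52! equally likely possibilities. A quick sketch (my own arithmetic, not from the thread):

```python
import math

# Missing information about a well-shuffled deck: 52! equally likely
# orderings, so the entropy is log2(52!) bits.
deck_entropy_bits = math.log2(math.factorial(52))
# Roughly 225.6 bits -- no cards, decks, or energy in the units, only information.
```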
Paradoxically, then, the fact that the energy dispersal interpretation explicitly invokes energy actually proves that thermodynamic entropy is not a measure of energy dispersal. Any proper entropy will have units of bits, not those of energy dispersal.
“Wait a minute,” you might object. “If the natural unit of thermodynamic entropy is the bit, why are thermodynamic entropies typically expressed in joules per kelvin?”
I addressed that issue in the earlier thread:
Entropy — including thermodynamic entropy — really is best expressed in terms of bits (or nats, trits, hartleys, or other units of information). Entropy is a measure of missing information, after all.
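The conversion between the two conventions is just a constant factor: an entropy in J/K equals the entropy in bits times k_B ln 2. A small sketch using the standard value of Boltzmann's constant (the function name is my own):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def joules_per_kelvin_to_bits(s_jk):
    """Convert a thermodynamic entropy from J/K to bits.

    Since S[J/K] = k_B * ln(2) * S[bits], divide by k_B * ln(2)."""
    return s_jk / (K_B * math.log(2))

# One J/K of thermodynamic entropy corresponds to an enormous amount of
# missing information about the microstate: on the order of 1e23 bits.
bits = joules_per_kelvin_to_bits(1.0)
```

The J/K units are thus a historical convention baked in via k_B, not evidence that entropy is fundamentally about energy.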
Leff and Lambert both correctly criticize the disorder interpretation of entropy by pointing to instances in which it fails. The irony is that they fail to realize that their own preferred interpretation — the energy dispersal interpretation — also fails in certain cases. By their own reasoning, it too should be abandoned.
I’ve already discussed why the energy dispersal interpretation fails in the isothermal gas mixing example. Another of my favorite examples is provided by John Denker:
keiths:
keiths,
My original reply was in response to walto’s query about what textbooks say. My point was that to answer the question “What is entropy?” one needs to know the context for the desired answer (e.g. the course the textbook is for). If the context is teaching the concept to undergrad chemists, maybe the energy-spreading idea is the best way for them to think about it, at least initially. I have no opinion on whether that is the case.
In my experience, Jaynes’s stuff comes up in philosophy of science in discussions of whether thermodynamics is reducible to SM and, if so, how. There is definitely no “energy spreading” idea in those discussions. Instead, they pit Jaynes’s pure-information approach against the Boltzmann and Gibbs approaches, which are based on the physics of the thermodynamic microstates. See the book and paper I linked earlier for more details. Message me if you want a pdf of the book.
(ETA typos)
Bruce,
I understand what you were trying to do. It’s just that the answer you gave is incorrect.
That’s not right. There are no contexts in which entropy is a measure of energy dispersal. Lambert and Leff mean well, but they are wrong. By lobbying textbook authors to include the energy spreading interpretation of entropy, Lambert has inadvertently done a great disservice.
Is an increase in entropy associated with energy spreading? In some cases, yes, but not in others. Therefore entropy cannot be a measure of energy dispersal.
Is an increase in entropy associated with an increase in disorder? In some cases, yes, but not in others. Therefore entropy cannot be a measure of disorder.
Entropy is a measure of missing information — the additional information that would be required to specify the exact microstate of a system given that you only know its macrostate.
And to that end you quote me saying that it’s observer-dependent. Isn’t that what you also believe? Perhaps you should clear up your own chronic confusion first.
Heh.
No answer to my question about the number of missing eggs, Mung?
keiths:
Mung:
The confusion is yours. Like walto, you struggle with this subject, to the point that you can’t even keep your own position straight.
In that quote, you correctly acknowledge that entropy is observer-dependent, but incorrectly claim that that makes it subjective. Elsewhere you have disputed entropy’s observer-dependence, and you confirmed that just two days ago:
keiths:
Mung:
You’re unable to keep your position straight. That’s confusion.
Bruce,
Just to drive the point home, here are six reasons why dispersalism cannot be correct, all taken from the earlier thread:
I’ll post some other relevant comments from the earlier thread:
And: