Recap redux

David Nemati and Eric Holloway, “Expected Algorithmic Specified Complexity.” Bio-Complexity 2019 (2):1-10. doi:10.5048/BIO-C.2019.2.

Eric Holloway has returned to The Skeptical Zone, following a long absence. He expects to get responses to his potshot at phylogenetic inference, though he has never answered three questions of mine about his own work on algorithmic specified complexity. Here I abbreviate and clarify the recap I previously posted, and introduce remarks on the questions.

If there is a fundamental flaw in the second half, as you claim, then I’ll retract it if it is unfixable.
— Eric Holloway, January 2, 2020

In Section 4 of their article, Nemati and Holloway claim to have identified an error in a post of mine. They do not cite the post, but instead name me, and link to the homepage of TSZ. Thus there can be no question as to whether the authors regard technical material that I post here as worthy of a response in Bio-Complexity. Interacting with me at TSZ, Holloway acknowledged error in an equation that I had told him was wrong, expressed interest in seeing the next part of my review, and said, “If there is a fundamental flaw in the second half, as you claim, then I’ll retract it if it is unfixable.” I subsequently put a great deal of work into “The Old Switcheroo,” trying to anticipate all of the ways in which Holloway might wiggle out of acknowledging his errors. Evidently I left him no avenue of escape, given that he subsequently refused to engage at all, and insisted that I submit my criticisms to Bio-Complexity.

The notion that I must submit to Bio-Complexity is ludicrous, considering my past interactions with the editor-in-chief, Robert Marks, and a member of the editorial board, Winston Ewert. See “A Nontechnical Recap” for an account of how Baylor University required Ewert (advised by Marks) to submit a revised master’s thesis, following my report of plagiarism in the original version. I have no more interest in legitimizing Bio-Complexity than I have reason to expect fair handling of a submission exposing negligence in the review and editing of the article by Nemati and Holloway. (For an elementary explanation of the most obvious errors that passed review, see “Stark Incompetence.”)

A “fundamental flaw in the second half,” or two

According to Nemati and Holloway (and their former advisor, Robert Marks), each and every measure of algorithmic specified complexity — there are infinitely many of them — is a quantification of the meaningful information in data. The first of the fundamental flaws in the article is that the way in which the algorithmic specified complexity of the data is measured depends on how we refer to the data. Suppose that the expression y refers to the data, and that y = f(x). You need not know how f and x are defined to recognize that f(x) is another way of referring to the data. The amount of “meaningful information” in the data itself does not depend on whether we refer to the data as y or as f(x). This is something that most high schoolers grasp, and I am amazed to find myself explaining it here. Nemati and Holloway introduce a requirement that the measure of algorithmic specified complexity depend on how we refer to the data. As a consequence, the quantity of “meaningful information” in y is generally different from the quantity of “meaningful information” in f(x), even though the two expressions denote the same data. If you measure algorithmic specified complexity as they prescribe, then there are cases in which the result is infinite when the data is referred to as y, and the result is a negative number when the data is referred to as f(x).
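The infinite-versus-negative case claimed above can be made concrete with a toy sketch. Everything here is illustrative and not from the article: K(·|C) is uncomputable, so a zlib compression length stands in for it, and the two-string sample space and the constant function f are my own hypothetical constructions.

```python
import math
import zlib

# Hypothetical stand-in for the uncomputable K(x|C): bits of zlib output.
def K_proxy(s: str) -> float:
    return 8.0 * len(zlib.compress(s.encode()))

# ASC(x, C, p) = -log2 p(x) - K(x|C), per definition (A), with K approximated.
def asc(x: str, p: dict) -> float:
    px = p.get(x, 0.0)
    surprisal = math.inf if px == 0.0 else -math.log2(px)
    return surprisal - K_proxy(x)

# Inputs drawn uniformly from {"0", "1"}; f maps both inputs to "11".
p = {"0": 0.5, "1": 0.5}
f = lambda s: "11"
y = f("0")          # y and f(x) denote the same data, the string "11"

# Referred to as y, the data gets probability 0 under p: ASC is infinite.
print(asc(y, p))    # inf

# Referred to as f(x), the data is measured with the pushforward p_f,
# which assigns it probability 1: surprisal 0 minus a positive K proxy.
p_f = {"11": 1.0}
print(asc(y, p_f))  # a negative number
```

The data never changes; only the expression denoting it does, yet the two measurements differ by an infinite amount.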

The second of the fundamental flaws in the article is that an ostensible characterization of “conservation of complexity for ASC” turns out, when deobfuscated, to be a comparison of differently measured quantities of algorithmic specified complexity. For concreteness, let us say that f(x) is the data output by a process when x is the data input to the process. In any claim that a quantity of algorithmic specified complexity is conserved in the process, the measure must be the same for the output as for the input. However, Nemati and Holloway use the function f, which represents the process, to make the measure of algorithmic specified complexity different for the output of the process, expressed as f(x), than for the input to the process, expressed as x. In fact, the ASC measure that they apply to the output is customized to the process. It is absurd to suggest that a quantity of algorithmic specified complexity is conserved in the process when ASC is measured differently for the output than for the input.

Three unanswered questions

Holloway will be tempted to seize upon parts of this vague recap redux, and twist them to his purposes. Please keep it in mind that I supplied mathematical details in “The Old Switcheroo,” and that he ignored them, complaining that my post was too long. I subsequently distilled three brief questions, repeated below, and Holloway responded only with diversionary tactics. Yet he returns now to TSZ, expecting responses to the latest of his free-ranging crankery. It bears emphasis that I am attempting to engage him on a topic of the dissertation that he defended, by way of earning his doctoral degree in electrical and computer engineering. That is, Holloway is supposed to be an expert on algorithmic specified complexity. He declines to answer questions in his own field, and nonetheless insists that his criticism of phylogenetic inference be taken seriously by experts in the field.

Answers of yes to the following questions lead to the conclusion that Nemati and Holloway in fact compare quantities of “meaningful information” obtained by two different measures, when they claim to have established that the quantity of “meaningful information” is conserved. If Holloway cannot justify an answer of no to any of the questions, then he should retract the latter half of the article, as he said he would.

Question 1. Is your definition of algorithmic specified complexity precisely equivalent to the definition given by Ewert, Dembski, and Marks in “Algorithmic Specified Complexity,”

(A)   \[ASC(x, C, p) = -\!\log_2 p(x) - K(x|C), \]

even though you write I(x) in place of -\!\log_2 p(x)?

Remark. I introduce the equivalent definition (A) of algorithmic specified complexity in order to simplify what ensues. It is important to understand that an ASC measure associates a quantity of “meaningful information” with x, and that C and p are parameters of the measure. Change the values of C and p, and you have switched from one measure of “meaningful information” to another.
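The remark above can be sketched numerically. This is a toy, not the authors' method: K(x|C) is uncomputable, so zlib compression length is used as a crude proxy, and the two distributions are hypothetical choices made only to show that p parameterizes the measure.

```python
import math
import zlib

# Hypothetical proxy for K(x|C): bits of zlib output.
def K_proxy(x: str) -> float:
    return 8.0 * len(zlib.compress(x.encode()))

# Definition (A): ASC(x, C, p) = -log2 p(x) - K(x|C).
def asc(x: str, p) -> float:
    return -math.log2(p(x)) - K_proxy(x)

x = "0" * 64

# Two choices of the parameter p, i.e., two different ASC measures:
uniform = lambda s: 2.0 ** (-len(s))                             # 64 bits of surprisal
biased = lambda s: 0.99 ** s.count("0") * 0.01 ** s.count("1")   # under 1 bit

print(asc(x, uniform))  # one quantity of "meaningful information"
print(asc(x, biased))   # a different quantity, for the very same x
```

Nothing about x changed between the two calls; switching p switched the measure.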

Question 2. The identity I(x) = ASC(x, C, p) + K(x|C) follows from your definition of algorithmic specified complexity. Is the following extension of your inequality (43) correct?

    \begin{align*} f\!ASC(x, C, p, f) & < I(x) = ASC(x, C, p) + K(x|C) \end{align*}

Remark. Inequality (43) is f\!ASC(x, C, p, f) < I(x), so the extension is plainly correct.

Question 3. You refer to the upper bound on fASC as “conservation of complexity for ASC,” so you evidently regard fASC as algorithmic specified complexity (ASC). I observe that

    \begin{align*}
    & f\!ASC(x, C, p, f) && \\
    & \quad = I(f(x)) - K(f(x)|C) && \text{[definition]} \\
    & \quad = -\!\log_2 \Pr[f(\mathcal{X}) = f(x)] - K(f(x)|C) && \text{[by definition (39)]} \\
    & \quad = -\!\log_2 p_f(f(x)) - K(f(x)|C) && [\text{notation: } f(\mathcal{X}) \sim p_f] \\
    & \quad = ASC(f(x), C, p_f), && \text{[by definition (A)]}
    \end{align*}


where p_f denotes the probability distribution of the random variable f(\mathcal{X}). Have I correctly expressed fASC as algorithmic specified complexity?

Remark. There is nothing to the derivation but application of definitions and an introduction of new notation. Nemati and Holloway cannot object to my reasoning, because there is virtually none. What I’ve shown is that there is concealed, in the definition of fASC, a switch from one measure of algorithmic specified complexity to another. Specifically, the parameter p of the ASC measure is changed to p_f.
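The derivation can also be checked numerically on a small finite space. The sketch below is mine, not the article's: the four-string distribution and the truncating function f are hypothetical, and since the K term is identical on both sides of the identity, any fixed stand-in for K(·|C) works; a length-based proxy keeps things simple.

```python
import math
from collections import defaultdict

# Hypothetical stand-in for K(s|C); it cancels from both sides anyway.
def K_proxy(s: str) -> float:
    return 8.0 * len(s)

# A many-to-one function, as in the article's examples.
def f(s: str) -> str:
    return s[:2]

# A distribution p on four input strings.
p = {"000": 0.1, "001": 0.2, "010": 0.3, "011": 0.4}

# Pushforward p_f: the distribution of f(X) when X ~ p.
p_f = defaultdict(float)
for s, q in p.items():
    p_f[f(s)] += q

# fASC(x, C, p, f) = I(f(x)) - K(f(x)|C), with
# I(f(x)) = -log2 Pr[f(X) = f(x)] per definition (39).
def fASC(x: str) -> float:
    pr = sum(q for s, q in p.items() if f(s) == f(x))
    return -math.log2(pr) - K_proxy(f(x))

# ASC(x, C, p) = -log2 p(x) - K(x|C) per definition (A).
def asc(x: str, dist) -> float:
    return -math.log2(dist[x]) - K_proxy(x)

for x in p:
    print(x, math.isclose(fASC(x), asc(f(x), p_f)))  # True for every x
```

The equality holds identically: fASC applied to x is just the ASC measure with parameter p_f applied to f(x).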

It follows from answers of yes to the foregoing questions that what Nemati and Holloway refer to as “conservation of complexity for ASC,”

(43)   \begin{equation*} f\!ASC(x, C, p, f) < I(x),  \end{equation*}

is equivalent to

    \begin{align*} ASC(f(x), C, p_f) & < ASC(x, C, p) + K(x|C). \end{align*}

Remark. The right-hand side of (43) is rewritten according to the inequality of Question 2, and the left-hand side of (43) is rewritten according to the equality of Question 3. In the result, the two ASC measures have different parameters: where one has p_f, the other has p. In other words, there is clearly a comparison of quantities of “meaningful information” by two different measures.

As explained early in “The Old Switcheroo,” it is absurd to change from one ASC measure to another, and speak of conservation. Holloway should make good on his word, and retract the latter half of the article.

33 thoughts on “Recap redux”

  1. From the OP:

    The notion that I must submit to Bio-Complexity is ludicrous, considering my past interactions with the editor-in-chief, Robert Marks, and a member of the editorial board, Winston Ewert.

    I don’t know that the notion is ludicrous. Isn’t it worth a shot just to see what happens?

    Have you submitted material previously?

    Where better to ask for a retraction than in the journal where the original material was published?

  2. Mung: I don’t know that the notion is ludicrous. Isn’t it worth a shot just to see what happens?

    To be honest, I’ve been tempted.

    Mung: Where better to ask for a retraction than in the journal where the original material was published?

    I’ve also been entirely honest in my explanation of why I don’t care to submit to Bio-Complexity.

    P.S.–It’s good to see you. You have my sincere, if belated, condolences on the passing of your mother.

  3. From what I can tell, you seem to not like how I use the term ‘conservation’. That’s a semantic dispute, and does not merit retracting half a paper.

  4. EricMH: From what I can tell, you seem to not like how I use the term ‘conservation’.

    So, it’s conserved for literally unconserved values of conserved? Like the conservation of mass, or energy, or momentum, implies the loss or increase of mass, or energy, or momentum?

  5. EricMH: From what I can tell, you seem to not like how I use the term ‘conservation’.

    From my opening post:

    Holloway will be tempted to seize upon parts of this vague recap redux, and twist them to his purposes. Please keep it in mind that I supplied mathematical details in “The Old Switcheroo,” and that he ignored them, complaining that my post was too long. I subsequently distilled three brief questions, repeated below, and Holloway responded only with diversionary tactics.

    I’ve put three unambiguous questions of mathematics before you, and you are attempting, yet again, to divert attention from them.

    As for what I do and do not like, I made it clear that I do not like seeing you call others to task when you do not submit to the same.

  6. EricMH,

    If you don’t have time for three mathematical questions, then how about one?

    Is the expression that you dub “conservation of complexity for ASC” an inequality of differently measured quantities?

  7. Tom English: Is the expression that you dub “conservation of complexity for ASC” an inequality of differently measured quantities?

    Yes. Can you explain concisely how this causes a problem? For example, we can have the inequality H(X) <= lg |X|. Left side deals with a transformed probability measure, and right side deals with cardinality.

    Also, I think you are right that 'conservation of information' is not a good term. 'information nongrowth', 'conservation of randomness', and 'independence conservation' are better names for this kind of theorem.

    'conservation of information' makes it sound like information is never lost, but the converse is true.

  8. EricMH,

    What you really need is simplicity, not brevity. But here is something brief and informal — perhaps simple enough for you to grasp.

    Suppose that f maps titles of books (in Unicode) to texts of books (in Unicode). You’re saying that if x is the title Complete Works of Shakespeare and f(x) is the text of the complete works of Shakespeare, then it is wrong to apply the same measure of algorithmic specified complexity to f(x) as to x, and ascribe much more meaningful information to the text than to the title.

    In short, you’re saying that there cannot be much more meaningful information in a book than in its title.

  9. Well, Tom, at least you managed to find time to look into how Trump has invested his money. If that’s what makes you happy …

    🙂

  10. Tom English: It’s good to see you. You have my sincere, if belated, condolences on the passing of your mother.

    Thank you. But what about my cat? 😉

  11. EricMH: ‘conservation of information’ makes it sound like information is never lost, but the converse is true.

    Information is always lost? Information is never gained? ID is dead, then.

  12. Tom English: then it is wrong to apply the same measure of algorithmic specified complexity

    Correct.

    Tom English: and ascribe much more meaningful information to the text than to the title.

    Correct. In your scenario, the function is a deterministic mapping of title to text, so it does not increase the amount of information.

    I get that you are setting up a situation where intuitively people will think this is ridiculous that a title contains as much information as the text, but that is just an artificiality of the scenario.

  13. Tom English: Suppose that f maps titles of books (in Unicode) to texts of books (in Unicode). You’re saying that if x is the title Complete Works of Shakespeare and f(x) is the text of the complete works of Shakespeare, then it is wrong to apply the same measure of algorithmic specified complexity to f(x) as to x\ldots

    EricMH: Correct.

    So if g is a function from texts to titles, and y is the only text for which the title g(y) is Complete Works of Shakespeare, then it is wrong, you say, to apply the same measure of algorithmic specified complexity to g(y) as to y.

    If the probability distribution is p in the measure of meaningful information in the text, then you and Nemati say that the probability distribution in the measure of meaningful information in the title must be p_g. (Here I use the notation introduced in the OP.) But in this case p_g(g(y)) = p(y). Put simply, now you say that we must take the title of the book to be just as unlikely as its contents to have arisen by chance. Applying different measures of meaningful information, as you prescribe, to the title, expressed as g(y), and to the text, expressed as y, we have

        \[ASC(g(y), C, p_g) - ASC(y, C, p) = K(y | C) - K(g(y) | C).\]

    For an uncontrived choice of the context C, the difference in conditional Kolmogorov complexity of the text of the complete works of Shakespeare and the title Complete Works of Shakespeare would be large. That is, in practice, you would ascribe much more meaningful information to the title than to the text. However, by contrivance, you could make the title slightly lower in meaningful information than the text.

    When the text is referred to as y, and the title is referred to as g(y), you ascribe a large quantity of meaningful information to the text, and hold that there must be almost as much, if not more, meaningful information in the title.

    When the text is referred to as f(x), and the title is referred to as x, you ascribe a small quantity of meaningful information to the title, and hold that there cannot be much more meaningful information in the text.

    Tom English: … and [wrong to] ascribe much more meaningful information to the text than to the title.

    EricMH: Correct. In your scenario, the function is a deterministic mapping of title to text, so it does not increase the amount of information.

    That’s a statement of creed, not a result that you have proved. I’ve just laid bare the absurdity of your creed.

    EricMH: I get that you are setting up a situation where intuitively people will think this is ridiculous that a title contains as much information as the text, but that is just an artificiality of the scenario.

    Oh, so I’m cultivating false intuitions, am I, just so I can score points with onlookers? You’re projecting your own pathology onto me.

    All that I’ve done is to illustrate the consequences of one of the two fundamental flaws of your article, identified in the opening post. When you take the properties of an entity to depend on the expression referring to the entity, it leads to absurdity. In the foregoing, the expressions x and g(y) both denote the title, Complete Works of Shakespeare, and the expressions y and f(x) both denote the text of the complete works of Shakespeare. The title and the text are what they are, irrespective of how we refer to them. But you attempt, incoherently, to make the measure of meaningful information in the title and the text depend on the expressions that denote them.

    You mentioned semantics, disparagingly, in a comment above. Ironically, incomprehension of the semantics of mathematical language is precisely your problem. I’ve seen what is going on with you many times before, in undergraduate students of computer science — but never before in someone with a doctorate in engineering. You’re obviously taking the expression “f(x)” to mean that something is done to x, or that f(x) is the result of a process operating on input x. In short, you’re interpreting mathematical language as though it were a programming language. To be clear, no, that’s not something you’re free to do, if you want to. It is categorically wrong. And I’ve gone to some length, here, to illustrate why it is wrong.

  14. Tom English: That is, in practice, you would ascribe much more meaningful information to the title than to the text.

    If there is a one to one mapping, then that appears to be the case.

    However, I think the reason this appears to be absurd is that we cannot go from the title to the text in the case of Shakespeare’s works. The title tells us almost nothing about the works, as without prior knowledge there is a very wide range of texts the title could refer to. If we were to add this aspect to our model, that it is a one to many mapping between title and text, then the title gets a whole lot of probabilities aggregated, say for example summing close to 1, with the end result of greatly reducing the complexity term of the title to almost nothing, since log 1 = 0. In which case, the title g(y) has very small ASC with regard to being measured with P_g. The fact g might be a many to one function is an important part of the paper and is illustrated by a couple examples.

    This is why I keep saying the absurdity you are bringing out is just an artificiality of the example.

    The reason I think you are doing this intentionally is because it seems you could be reading my work more charitably than you are, but instead seem to be going out of your way to come up with examples that portray it as negatively as possible.

    Which is fine, I think such strongly critical feedback is more valuable than people being all nice and charitable.

  15. EricMH: but instead seem to be going out of your way to come up with examples that portray it as negatively as possible.

    Which is fine, I think such strongly critical feedback is more valuable than people being all nice and charitable.

    So publish in a real journal and get some real feedback via peer review.

    Tom English: You’re obviously taking the expression “f(x)” to mean that something is done to x, or that f(x) is the result of a process operating on input x. In short, you’re interpreting mathematical language as though it were a programming language. To be clear, no, that’s not something you’re free to do, if you want to. It is categorically wrong. And I’ve gone to some length, here, to illustrate why it is wrong.

    This is the main contention you seem to have, and it could be substantial. Can you give a better explanation of what you are getting at here? Specifically, how would it particularly change the math in my paper if I didn’t make this mistake?

  17. EricMH: Specifically, how would it particularly change the math in my paper if I didn’t make this mistake?

    Papers are published and peer reviewed. Blog posts not so much. Even if it’s a fancy blog.

  18. Tom English: Suppose that f maps titles of books (in Unicode) to texts of books (in Unicode). You’re saying that if x is the title Complete Works of Shakespeare and f(x) is the text of the complete works of Shakespeare, then it is wrong to apply the same measure of algorithmic specified complexity to f(x) as to x\ldots

    EricMH: Correct.

    Tom English: So if g is a function from texts to titles, and y is the only text for which the title g(y) is Complete Works of Shakespeare, then it is wrong, you say, to apply the same measure of algorithmic specified complexity to g(y) as to y. … [N]ow you say that we must take the title of the book to be just as unlikely as its contents to have arisen by chance. … [I]n practice, you would ascribe much more meaningful information to the title than to the text.

    EricMH [responding to the last sentence]: If there is a one to one mapping, then that appears to be the case.

    That’s another “correct.”

    However, you are again exposing your poor grasp of sophomore-level mathematics. What I wrote does not indicate that g is a one-to-one function. All that is necessary, for you to insist that the title is just as unlikely as the text to have arisen by chance, is that there be exactly one text with that particular title. This does not rule out the possibility that other titles are shared by multiple texts, in which case g would not be one-to-one.

    We don’t even have to address the matter of “meaningful information” to see that what you’re saying is ludicrous. In any sane model of the world in which we live, the binary string encoding Shakespeare’s complete works is much less likely to arise by chance than the binary string encoding the title Complete Works of Shakespeare.

    EricMH: However, I think the reason this appears to be absurd is that we cannot go from the title to the text in the case of Shakespeare’s works. The title tells us almost nothing about the works, as without prior knowledge there is a very wide range of texts the title could refer to. If we were to add this aspect to our model, that it is a one to many mapping between title and text, then the title gets a whole lot of probabilities aggregated, say for example summing close to 1, with the end result of greatly reducing the complexity term of the title to almost nothing, since log 1 = 0. In which case, the title g(y) has very small ASC with regard to being measured with P_g.

    Oh, what a brilliant response! Do please share with us the story of how you realized that you could respond to a sensible case in which your approach produces absurd results by constructing an absurd case in which it produces sensible results. Specifically, if you redefine the function g, making Complete Works of Shakespeare the title of all of the texts, rather than the title of just one of the texts, then the title will be assigned a much higher probability. Surely this is divine inspiration.

    EricMH: This is why I keep saying the absurdity you are bringing out is just an artificiality of the example.

    Yeah, assuming that only one text has a particular title is artificial, and assuming that most, if not all, texts have the same title is not artificial.

    Your response is scoundrelous, but not so scoundrelous as pathetic.

  19. EricMH: This is why I keep saying the absurdity you are bringing out is just an artificiality of the example.

    The reason I think you are doing this intentionally is because it seems you could be reading my work more charitably than you are, but instead seem to be going out of your way to come up with examples that portray it as negatively as possible.

    It is highly ironic that while O’Leary decries the assault on “2 + 2 = 4” by critical theorists, one of her darlings is assaulting a more fundamental notion, namely, that when y = f(x), the object denoted by y is precisely the object denoted by f(x).

    My example does not establish that you are wrong, in your article, to say that a property of an object is different when the object is denoted by f(x) than when it is denoted by y. Your error actually has nothing to do with the kind of objects, nor with the property, that you address. It is an error that most high schoolers easily recognize as an error, at least when it is pointed out to them. However, at your prompting, I set out to give a convincing illustration of the bizarre consequences of making the measure of algorithmic specified complexity different for f(x) than for y.

    That you, a Ph.D. engineer, should insist on ascribing different properties to a single object, depending on the expression denoting the object, is beyond bizarre. Now, confronted with a fairly simple illustration of how wrong you are in doing so, you want to make an issue of the illustration itself.

  20. Tom English: Now, confronted with a fairly simple illustration of how wrong you are in doing so, you want to make an issue of the illustration itself.

    As I mentioned a couple times, it is hard for me to understand what you are getting at. Maybe the problem is entirely on my side. However, I don’t have anything to go off of, besides asking for clarification what you mean about me treating math like programming. That sounds promising, so if you can explain a specific mathematical problem this causes that would be a big benefit.

  21. EricMH: This is why I keep saying the absurdity you are bringing out is just an artificiality of the example.

    But it is an example, so modus tollens applies.
    You do not appear to understand how math works.

  22. DNA_Jock: You do not appear to understand how math works.

    Can you explain what you are trying to say here? What is the specific problem you see?

  23. EricMH,
    The specific problem?
    Tom has explained to you, with mind-blowing patience, how your conservation of complexity in ASC is fundamentally flawed. He has walked you through an example, where applying your definitions of fASC() leads to patent absurdity.
    This, however, is not the problem. This is merely the background. The specific problem is that you, when confronted with the absurd conclusions of your own math, whine that “the absurdity you are bringing out is just an artificiality of the example.”.
    This response reveals that you do not understand modus tollens.
    Whining about artificiality gets you nowhere; you need to demonstrate that Tom is somehow misapplying your math — whining “well, I don’t like the answer, so that can’t be right” doesn’t cut it.
    You do not appear to understand how math works; specifically, refutation. That is the specific problem.

  24. EricMH: As I mentioned a couple times, it is hard for me to understand what you are getting at. Maybe the problem is entirely on my side.

    Why should I attempt, again, to walk you through basics when it’s not clear that you have legs? Do you agree with the following?

    1. The expression “y = f(x)” indicates that y and f(x) denote one and the same object.

    2. In mathematics, the properties of an object do not depend on how the object is denoted.

    Perhaps you can take some baby steps:

    3. What is a binary relation?

    4. In mathematics (founded on the Zermelo-Fraenkel axioms), a function is a binary relation with certain properties. What are those properties?

    EricMH: However, I don’t have anything to go off of, besides asking for clarification what you mean about me treating math like programming.

    False. In the second post of the series, I provided you with quite a bit to “go off of,” assuming that you were operating in good faith. Your response was that I should tag the post “tl;dr” [too long, don’t read]. You might try reading at least the section “Wherefore art thou f(Montague).” The last paragraph is somewhat dense, but is essential to getting what I say about mathematical functions and programs that calculate functions. I cannot “learn” you. Some effort on your part is required.

  25. DNA_Jock: This response reveals that you do not understand modus tollens.
    Whining about artificiality gets you nowhere; you need to demonstrate that Tom is somehow misapplying your math — whining “well, I don’t like the answer, so that can’t be right” doesn’t cut it.
    You do not appear to understand how math works; specifically, refutation. That is the specific problem.

    Precisely! This has been a “down the rabbit hole” experience for me. It’s heartening to see that a third party grasps the exchange, and has a sane response.

    The ultimate problem, I suspect, is that Eric has steeped in apologetics-masquerading-as-science from an early age. Apologists construct debates regarding articles of faith — for Eric, who converted to Catholicism because he saw it as compatible with ID, “conservation of information” is an article of faith — and then do whatever they can to adduce support for their beliefs. It seems to me that Eric revealed a lot about himself in this response to me:

    EricMH: The reason I think you are doing this intentionally is because it seems you could be reading my work more charitably than you are, but instead seem to be going out of your way to come up with examples that portray it as negatively as possible.

    This bespeaks a view that there are strengths and weaknesses in his argument, and that I, his adversary in a debate, am unfairly ignoring the strengths, while doing all that I can to play up the weaknesses. I won’t say that I was unsurprised to see that you got the written equivalent of a blank stare when you brought up modus tollens. But I would say that it is understandable, in retrospect.

  26. P.S.–I’ve got a household crisis to deal with, and may respond even more slowly than I ordinarily do. OKC is in the middle of the most damaging ice storm I’ve ever seen — and I’ve seen many. From the Washington Post:

    A disruptive and dangerous ice storm is underway in Oklahoma, with ice storm warnings plastering the map and more than 200,000 people without power. “Tree carnage” has been reported in Oklahoma City, where vegetation and power lines have been collapsing beneath the weight of the accreting rime. Up to another half-inch of freezing rain — rain that freezes on contact with the surface — is possible as more waves move through the affected regions into Wednesday. [link]

    The problem is that there are still lots and lots of green leaves on the trees. I stepped outside for five minutes this morning, and heard three large crashes nearby. A huge limb that broke away from one of my trees is now resting against the corner of an absentee (elderly and infirm) neighbor’s roof. I have no idea how to contact him, other than by mail. A total of three large limbs have fallen from two of my trees. The other of my trees has come down entirely. There’s still a large branch hanging over my roof. All I can do is hope that it survives the next 24 hours. The power has blinked off at least a dozen times today, but, by sheer luck, I’ve had no extended outage.

  27. Tom English,

    Wow, looks pretty devastating, following your links. Is this weather expected even if rarely or is it another indication the climate is changing?

  28. dazz, Corneel,

    Thanks. The limb hanging over my roof did not break. But all of my trees will have to be removed. I estimate an expense of 7000 dollars (6000 euros). ETA: Please don’t think that I’m oblivious to carbon release. The two trees that remain standing were near the ends of their lives, anyway, and are now hazards to life and property.

    Alan Fox: Is this weather expected even if rarely or is it another indication the climate is changing?

    If I recall correctly, this is Oklahoma City’s first recorded ice storm for the month of October. It’s usually impossible to attribute a particular weather event to climate change. In this case, we’ve had freezing temperatures at ground level, and higher air temperatures at higher altitudes. I haven’t a clue as to how one would connect that to global warming.

  29. Tom English,

    Please don’t think that I’m oblivious to carbon release.

    If you’re able to burn it, and thereby reduce fossil fuel heating, you can offset this!

  30. Tom English: I haven’t a clue as to how one would connect that to global warming.

    Newthink is climate change triggered by global warming – more energy in the system leading to less stable conditions and more frequent weather extremes.
    My personal indicator is snow on the Pyrenees. A month ago heavy falls made them look like meringue peaks, we had some heavy rain the last couple of days which usually means more snow at altitude but looking now the peaks I can see are virtually snow-free. The albedo effect of the warm dark rock seems to be overcoming the usual buildup to winter.

    Sorry to hear of the damage. Hope your insurance covers it.

  31. Sorry to hear about your misfortune, Tom. Once I have reread what you’ve posted and have better understanding, I’ll post a response.
