The Myth of Absolute Certainty

I was banned from Uncommon Descent this morning for reasons unknown (though here is a plausible hypothesis). At the time of my banning, I was in the midst of a long discussion of absolute certainty and whether it can rationally be claimed. Since I can’t continue the discussion at UD, I’ll start a thread here instead and solicit the opinions of the very smart locals here at TSZ.

The question is whether we can be absolutely certain of anything. I am not speaking of absolute certainty in the colloquial sense (“I’m absolutely certain I left the keys on the counter!”), but in the precise sense of 100.0% (unrounded) certainty, with literally no possibility at all of error — not even a trillionth of a trillionth of a trillionth of a percent chance of error.

Continue reading

The Blind Watchbreaker would dispose of lunches even if they were free — mootness of anti-NFL arguments

[cross posted at UD: The Blind Watchbreaker would dispose of lunches even if they were free — mootness of anti-NFL arguments]

Our colleague Elizabeth Liddle has described the process of human design as trial and error, tinkering and iteration. Like Dawkins, she has argued that nature (like human designers) is able to construct biological designs via trial and error, tinkering and iteration. However, when nature is properly compared and contrasted with the way humans go about creating designs, it is apparent that Dawkins’ claim of a blind watchmaker is false.

I refer to Elizabeth’s description because she articulated some aspects of the blind watchmaker hypothesis better than Dawkins, but in so doing actually helped highlight why Dawkins’ blind watchmaker is refuted by the evidence.

[this is a follow up post to Selection falsely called a mechanism when it should be called an outcome]
Continue reading

I think I just found an even bigger eleP(T|H)ant….

I just checked out Seth Lloyd’s paper, Computational Capacity of the Universe, and find, interestingly (although I now remember that this has been mentioned before), that his upper limit on the number of possible operations is 10^120 (i.e. 400 bits) rather than 10^150 (the more generous 500 bits usually proposed by Dembski). However, what I also found was that his calculation was based on the volume of the universe within the particle horizon, which he defines as:

…the boundary between the part of the universe about which we could have obtained information over the course of the history of the universe and the part about which we could not.

In other words, that 400 bit limit is only for the region of the universe observable by us, which we know pretty well for sure must be a minor fraction of the total. A conservative lower limit on the ratio of the entire universe to the part within the particle horizon seems to be about 250, and it could be as much as 10^23. Multiplying the operation count by a factor k adds log2(k) bits rather than multiplying the bit count, so that 400 bit limit needs to be raised to at least about 408 bits, possibly to about 476 bits, and perhaps very much more.
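A quick back-of-envelope sketch in Python, using the figures quoted above (note that multiplying an operation count by a factor k adds log2(k) bits to the limit):

```python
import math

def bits(ops_log10):
    """Convert an operation count of 10**ops_log10 into bits: log2(10**ops_log10)."""
    return ops_log10 * math.log2(10)

print(f"Lloyd's 10^120 ops -> {bits(120):.1f} bits")  # ~398.6, the '400 bit' limit
print(f"Dembski's 10^150   -> {bits(150):.1f} bits")  # ~498.3, the '500 bit' limit

# Scaling the operation count by the whole-universe/observable-universe ratio
# adds log2(k) bits; it does not multiply the bit count:
for k in (250, 1e23):
    print(f"x{k:g} more universe -> {bits(120) + math.log2(k):.1f} bits")
```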

Which rather knocks CSI out of the water, even if we assume that P(T|H) really does represent the entire independent random draw configuration space, and is the “relevant chance hypothesis” for life.

heh.

But I’m no cosmologist – any physicist like to weigh in?

cross posted at UD

What theists don’t understand about atheists

 

I so often find that people who reject “atheist materialism” seem convinced that scientists are engaged in a desperate effort to avert their gaze from the evidence that would force them to confront the truth that they fear: that there is a God who will Judge Us. Often they seem remarkably impervious to evidence of the seriousness with which many atheists treated their religion, and the reluctance with which they rejected it. One of the things that has opened my eyes during internet discussions over the last few years is the number of atheists, including atheist scientists, who were actually committed YECs for many of their younger years.

Continue reading

Configuration and Configurational Entropy

From Wiki:

http://en.wikipedia.org/wiki/Configuration_entropy

In statistical mechanics, configuration entropy is the portion of a system’s entropy that is related to the position of its constituent particles rather than to their velocity or momentum. It is physically related to the number of ways of arranging all the particles of the system while maintaining some overall set of specified system properties, such as energy. The configurational entropy is also known as microscopic entropy or conformational entropy in the study of macromolecules. In general, configurational entropy is the foundation of statistical thermodynamics.[1]
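As a toy illustration of the “number of ways of arranging all the particles” idea (my own sketch, not from the Wikipedia article): place n identical particles on N lattice sites and take S = k_B ln W, where W is the number of distinct arrangements.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def config_entropy(sites, particles):
    """S = k_B * ln(W), where W = C(sites, particles) counts the distinct
    ways of placing `particles` identical particles on `sites` lattice sites."""
    w = math.comb(sites, particles)
    return K_B * math.log(w)

# More positions at the same filling fraction means more microstates, hence more entropy:
print(config_entropy(100, 50))   # J/K for a 100-site lattice, half filled
print(config_entropy(200, 100))  # larger: a 200-site lattice, half filled
```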

Continue reading

C14 Dating of Dinosaur Bones

Jean De Pontcharra, who holds a PhD in physics, has a presentation for creationist conferences entitled “Is Radiocarbon dating reliable?”

De Pontcharra and a colleague got hold of some dinosaur bones and decided to date them with C14. This was reported at Uncommon Descent. Could someone who knows something about science spot the error? Bueller? Cordova?
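For anyone who wants to check the decay arithmetic themselves, here is a short sketch (assuming the standard C14 half-life of about 5,730 years and a conventional dinosaur-bone age of at least 65 million years):

```python
import math

HALF_LIFE_YEARS = 5730.0  # standard carbon-14 half-life

def log10_fraction_remaining(years):
    """Base-10 log of the C14 fraction surviving after `years`:
    log10(0.5 ** (years / half-life))."""
    return -(years / HALF_LIFE_YEARS) * math.log10(2)

print(log10_fraction_remaining(50_000))      # about -2.6: ~0.2% left, near the method's limits
print(log10_fraction_remaining(65_000_000))  # about -3415: no original C14 atoms survive at all
```

At 65 million years the surviving fraction is around 10^-3415, so any C14 actually measured in such a sample has to come from contamination or background, not from the bone’s original carbon.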

So why did De Pontcharra do what he did? Why has this been reported in various places on the web as evidence of a young Earth?

On a charitable interpretation, the best we can say about this story is that ID has always been plagued by gross incompetence.

An uncomplicated mind might conclude that the Intelligent Design movement is all about creationist propaganda for the uneducated and uninquisitive.

Is ‘Darwinism’ Science or Ideology or Both or Neither?

Recently, Neil Rickert wrote to me:

“To me, the technical distinction between “Darwinian” and “Darwinism” is that “Darwinian” is a adjective while “Darwinism” is a noun.

Please start a separate thread to help clear this up.”

Similarly, this post was added recently at UD and has generated some feedback from TSZers who dialogue there:

“Everyone now knows that Darwinism, adn [sic] its parent materialism, are ridiculous, but for some people they are the only possible position. Those people would abandon the follies in an instant if they could just come up with a reliably non-theistic alternative. Meantime, the public face of Darwinism is dominated by anti-religious fanatics and self-condemned trolls. That is a key reason we can dispense with any notion that Darwinism is some kind of a science. A real science offers few attractions for such types.” – Denyse O’Leary

Continue reading

The ID explanation is too easy to vary

I like David Deutsch’s description of explanations (I think it comes from him): a good explanation is hard to vary.

An explanation says that some particular settings of claimed causes best explain some observed pattern in nature.

An explanation is likely to be good if other settings of the proposed causes predict a different pattern from what we observe – it is hard to vary the settings and still explain things as they actually are.

To make it concrete, a particular setting of gravity (acceleration of 32 feet/sec/sec) at the surface of the Earth explains the pattern we see when apples fall off trees. Any other setting of gravity would not yield the pattern we actually observe. The explanation is hard to vary and still explain the observed results.
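The gravity example can be sketched in a few lines (my own illustration): only the actual setting of g reproduces the observed fall.

```python
def fall_time(height_ft, g_ft_s2):
    """Time (s) to fall height_ft feet from rest under gravity g: t = sqrt(2h / g)."""
    return (2.0 * height_ft / g_ft_s2) ** 0.5

observed_t = fall_time(16.0, 32.0)  # a 16 ft drop takes 1.0 s at g = 32 ft/sec/sec

# Twiddle the gravity knob: every other setting predicts a different observation.
for g in (8.0, 16.0, 32.0, 64.0):
    t = fall_time(16.0, g)
    match = "matches observation" if abs(t - observed_t) < 1e-9 else "does not match"
    print(f"g = {g:4.0f} ft/sec/sec -> t = {t:.2f} s  ({match})")
```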

An explanation is likely to be bad if a wide range of settings of the cause(s) can be chosen and the resulting pattern remains the same.

The basic argument of intelligent design is that there is a cause (the Intelligent Designer) for the observed pattern of life. Any number of other, subsidiary causes may be involved, but it is impossible for the diversity of life to have arisen without the intervention of an intelligent designer.

How should we assess this explanation? Look at the settings. As Daniel Dennett advises, twiddle with the causal knobs. What do we find in the intelligent design explanation?

Continue reading

Assume the IDers are right.

When we look at designed objects we can often tell a lot about the designers. For example, if we look at medical tools, fluffy teddies and cellos, we can see that the designers are compassionate and value music. When we look at iron maidens and racks, we can see they have a sadistic streak.

Look at life on earth and assume it is designed, what can we tell about the designer?

A Designed Object’s Entropy often Increases with Its Complexity

[This is an abridged version of a post at UD: A Designed Objects Entropy Must Increase for Its Design Complexity to Increase, Part 2. I post it under a different title at TSZ, because upon consideration, the new title should be above reproach. What I put forward should happily apply to man-made designs. A student recently wanted to challenge his professors regarding the 2nd law and evolution, and I pointed him to my essay. If that student is a creationist, at least I feel I did my job and made him understand science better than he would from most creationist literature. Hence, the rather torturous discussions at TSZ and UD had benefit in furthering this student’s understanding of science. If he is going to reject Darwinism, he should reject it for good reasons, not because of the 2nd law.]

For a biological system to have more biological complexity, it often requires a substantial increase in thermodynamic entropy, not a reduction of it, contrary to many intuitions among creationists and IDists. This essay is part II of a series that began with Part 1.
Continue reading

Materialism and Emergentism

At Uncommon Descent, Elizabeth mentioned that she liked what I was calling “emergentism”. Here’s a brief overview, in contrast with dualism and materialism, that perhaps will spark some discussion.

(1) Dualism gives us The Bifurcated World: the world consists of two fundamentally different kinds of substance (mind and matter), each of which is characterized by an essential property (mental and physical), and is constituted by logically and metaphysically distinct substantial particulars (minds and bodies). Nothing is essentially both physical and mental, although some things may exist as temporary unions of mind and body. (How logically and metaphysically distinct things can causally interact, or even appear to causally interact, is a serious problem.)

(2) Materialism gives us The Layered World: the world consists of a series of “levels”, each of which is hierarchically imposed on the one below, and each level supervenes on the level below it. Mental facts –> biological facts –> chemical facts –> molecular, atomic, and quantum facts. (A major problem with this view is that each ‘level’ has its own conceptual, ontological, and causal integrity — whereas some philosophers hold that biology is irreducible to chemistry for merely epistemological and methodological reasons, I hold the stronger view that biology is irreducible to chemistry for ontological (or metaphysical) reasons.)

(3) Emergentism gives us the Dynamic World: the world consists of processes that are inherently active and reactive, energetic, and operating at all ‘scales’ of temporal and spatial resolution — some processes are vast and slow, others small and fast, and many in-between. Some of these processes are merely physico-chemical, some are biological, and some are mental. The basic elements in this ontology are processes, not substances (as in dualism) or even particles (as in materialism).

As I see it, the frequently-heard allegation (made by dualists and theists) that emergentism is an intellectual fraud depends on whether there is a difference that makes a difference between emergence and supervenience.

What qualifies as science in the wonderful world of Disney

[cross posted at uncommondescent: What Qualifies as Science in the Wonderful World of Disney]

The scientific enterprise entails:

1. observation
2. hypothesis
3. testing

Consider this passage from the textbook of an introductory cosmology class I took once upon a time:

galaxies farther than 4300 megaparsecs from us are currently moving away from us at speeds greater than that of light. Cosmological innocents sometimes exclaim, “Gosh! Doesn’t this violate the law that massive objects can’t travel faster than the speed of light?” Actually, it doesn’t. The speed limit that states that massive objects must travel with v < c relative to each other is one of the results of special relativity, and refers to the relative motion of objects within a static space. In the context of general relativity, there is no objection to having two points moving away from each other at superluminal speed due to the expansion of space.

page 39
Introduction to Cosmology
by Barbara Ryden
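Ryden’s 4300-megaparsec figure falls out of Hubble’s law, v = H0·d. A sketch, assuming H0 ≈ 70 km/s/Mpc (the exact distance depends on the value of H0 used):

```python
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s per Mpc

def recession_speed(distance_mpc):
    """Hubble's law v = H0 * d: recession speed (km/s) at proper distance d (Mpc)."""
    return H0 * distance_mpc

# The 'Hubble distance', where the recession speed reaches the speed of light:
hubble_distance_mpc = C_KM_S / H0
print(f"recession reaches c at {hubble_distance_mpc:.0f} Mpc")  # ~4283 Mpc, i.e. roughly 4300
print(recession_speed(5000) > C_KM_S)  # True: beyond that, recession is superluminal
```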

Continue reading

A resolution of the ‘all-heads paradox’

There has been tremendous confusion here and at Uncommon Descent about what I’ll call the ‘all-heads paradox’.

The paradox, briefly stated:

If you flip an apparently fair coin 500 times and get all heads, you immediately become suspicious. On the other hand, if you flip an apparently fair coin 500 times and get a random-looking sequence, you don’t become suspicious. The probability of getting all heads is identical to the probability of getting that random-looking sequence, so why are you suspicious in one case but not the other?

In this post I explain how I resolve the paradox. Lizzie makes a similar argument in her post Getting from Fisher to Bayes, but there are some differences, so keep reading.
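A sketch of the point in Python: every specific 500-flip sequence is equally improbable, and what differs is the size of the simply-describable class the sequence falls into.

```python
import math

N = 500

# The probability of any one specific sequence of N fair flips; identical for all of them.
log10_p_sequence = -N * math.log10(2)
print(f"P(any specific sequence) = 10^{log10_p_sequence:.1f}")  # all-heads or random-looking alike

# What differs is the size of the class the sequence belongs to:
all_heads_class = 1                      # exactly one sequence is all heads
half_heads_class = math.comb(N, N // 2)  # sequences with exactly 250 heads
print(f"'all heads' class size:         {all_heads_class}")
print(f"'exactly half heads' class size: {half_heads_class:.3e}")
```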

Continue reading

Getting from Fisher to Bayes

(slightly edited version of a comment I made at UD)

Barry Arrington has a rather extraordinary thread at UD right now: Jerad’s DDS Causes Him to Succumb to “Miller’s Mendacity” and Other Errors.

It arose from a post of Sal’s here at TSZ, Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0.

Below is what I posted in the UD thread.

Continue reading

Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0

[cross posted from UD Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0, special thanks to Dr. Liddle for her generous invitation to cross post]

There are two versions of the metric for Bill Dembski’s CSI. One version can be traced to his book No Free Lunch published in 2002. Let us call that “CSI v1.0”.

Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, where he includes the version identifier “v1.22”, but perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and will come up with different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.
Continue reading

Lines of reasoning as opposed to scientific evidence

In a recent comment, Robert Byers said:

Yes lines of reasoning as opposed to scientific evidence is a criticism I strongly make!!

I’m not quite sure what is going on in Robert’s way of thinking, and I am not sure what he means by “scientific evidence”. Here I want to explore what Robert appears to be arguing.

Let’s take crossword puzzle solving as an illustration.  The puzzle has a grid where one can enter words.  And then there are the clues.  There is a list of “Across” clues and a list of “Down” clues.

Continue reading

VJ Torley on Order versus Complexity

VJ has written, by his standards, a short post distinguishing order from complexity (a mere 1400 words). To sum it up – a pattern has order if it can be generated from a few simple principles. It has complexity if it can’t. There are some well known problems with this – one of which is that it is not possible to prove that a given pattern cannot be generated from a few simple principles. However, I don’t dispute the distinction. The curious thing is that Dembski defines specification in terms of a pattern that can be generated from a few simple principles. So no pattern can be both complex in VJ’s sense and specified in Dembski’s sense.
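The order/complexity distinction can be given a rough-and-ready operational form with a general-purpose compressor standing in for “a few simple principles” (my own sketch, not VJ’s or Dembski’s measure, and only a crude proxy): an ordered pattern compresses well, a complex one does not.

```python
import random
import zlib

def compressed_size(pattern: bytes) -> int:
    """Length of the zlib-compressed pattern: a crude proxy for description length."""
    return len(zlib.compress(pattern, 9))

# A pattern generated from a short rule ("repeat HT") versus one with no short rule.
ordered = b"HT" * 250
random.seed(0)
complex_ = bytes(random.choice(b"HT") for _ in range(500))

print(compressed_size(ordered))   # compresses to a handful of bytes: high order
print(compressed_size(complex_))  # compresses far less well: high complexity
```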

At the heart of this is the problem that Dembski has written a paper that most IDers would find unacceptable if they took the trouble to understand it. But they don’t quite have the courage to say they disagree. That is why this comment from Eric Anderson made me chuckle:

Second, why is it so hard for some people to get it through their heads that the issue is not “order”? Is this really hard to understand, or just that so many people haven’t been properly educated about the issues?

I wonder which one it is in William Dembski’s case?

Appendix

VJ has written a 4000 word appendix to this post. I haven’t time to read it all but it appears that he accepts that some of his original OP was wrong. It is rare for people to admit they are wrong in heated Internet debates so all credit to him. In particular he appears to accept that there is considerable confusion about order, complexity, and specification within the ID community (why else the need to propose his own definitions?).

What would be really nice would be a similar admission from Eric Anderson that the ID community needs to sort out its own definitions before complaining that others are incapable of understanding them.