Is Academic Chauvinism As Dangerous As Climate Disruption Denial?

The following appeared in my hometown paper today.
I wondered what all of you might have to say about it.
I typed it all out, but please remember that the title was created by the editor, not the letter writer.

Not all ‘science’ is equal

EDITOR: The word science has been bastardized. Its common usage does not distinguish between the hard, soft and historical sciences.

Among the hard sciences, namely physics, chemistry and some aspects of biology, e.g., microbiology, genetics, etc., one relies on experiments that generate mathematical theories that make definite predictions that can be experimentally verified, and thus to theories that can definitely be falsified. The hard sciences deal with only the physical aspect of nature, where purely physical devices can be used to collect relevant data.

Among the soft sciences — social sciences, psychology, etc., — studies are rarely based on mathematical descriptions, and so definite predictions are elusive and pregnant with complicated assumptions. Here one is dealing with humans, and as such with essentially the nonphysical, e.g., issues of the human mind, and the supernatural aspects of nature, owing to humans being spiritual beings.

Finally, among the historical sciences, such as evolutionary theory, climate change, etc., studies are more akin to forensic science, where extant data is used to extrapolate and make tentative predictions. There is no single, well-defined theory that makes predictions that can be experimentally tested, and thus falsify the theory. Here one is certainly dealing with the whole of reality — the physical/nonphysical/supernatural aspect of nature.

It was written by a PhD in Physics, and my personal take is that it represents professional chauvinism.
As a side note, the author is also a creationist, though I don’t know whether he is an Old Earth or Young Earth creationist.

Moral Behavior Without Principled Intent

Addendum: The original title of this post was “An Evolutionary Antecedent of Morality?”. In the comments Petrushka pointed out the difficulties with this phrase, and I have given it a better title.

In the comments of an old post I linked to the story of a bonobo named Kanzi, who is a research subject at The Great Ape Trust. Since then I have been mentally groping for what was an amorphous concept I needed to concretize in order to turn that comment into an OP. Petrushka has helpfully formalized that concept with his very own neologism, enabling me to write this.

Continue reading

Englishman in Istanbul proposes a thought experiment

at UD:

… I wonder if I could interest you in a little thought experiment, in the form of four simple questions:

1. Is it possible that we could discover an artifact on Mars that would prove the existence of extraterrestrials, without the presence or remains of the extraterrestrials themselves?

2. If yes, exactly what kind of artifact would suffice? Car? House? Writing? Complex device? Take your pick.

3. Explain rationally why the existence of this artifact would convince you of the existence of extraterrestrials.

4. Would that explanation be scientifically sound?

I would assert the following:

a. If you answer “Yes” to Question 4, then to deny ID is valid scientific methodology is nothing short of doublethink. You are saying that a rule that holds on Mars does not hold on Earth. How can that be right?

b. If you can answer Question 3 while answering “No” to Question 4, then you are admitting that methodological naturalism/materialism is not always a reliable source of truth.

c. If you support the idea that methodological naturalism/materialism is equivalent to rational thought, then you are obligated to answer “No” to Question 1.

 

Well, I can never resist a thought experiment, and this one seems quite enlightening….

Continue reading

Response to Kairosfocus

[23rd May, 2013: As Kairosfocus continues to reiterate his objections to the views I express in this post, I am taking the opportunity today to clarify my own position:

  1. I do not think that OM was calling KF a Nazi, merely drawing attention to the commonality between KF’s apparent view of homosexuality as immoral and unnatural and that of the Nazis, who also regarded homosexuality as immoral and unnatural.  However, I accept that one huge difference is that KF appears to consider that homosexuality is non-genetic and can be cured, whereas the Nazis considered that it was genetic and should be eradicated.
  2. I agree with KF that inflammatory comparisons of those one disagrees with to Nazis are unhelpful and divisive.  I will not censor such comparisons, but I will register my objections to them.  This includes OM’s comparison (although I find KF’s views on homosexuality morally abhorrent, and factually incorrect, they are profoundly different from those of the Nazis), and it also includes KF’s frequent comparisons of those of us who hold that a Darwinist account of evolution is scientifically justified to the “good Germans” who turned a blind eye to Nazism.
  3. When referring to CSI as “bogus” I mean it is fallacious and misleading.  I do not mean that those who think it is calculable and meaningful are being deliberately fraudulent.  I interpret AF to mean the same thing by the term.  However, even if he does not, I defend his right to say so on this blog, just as I will defend KF’s right to defend CSI (or even his views on homosexuality) on this blog.]

I will move this post to the sandbox shortly, but as I am banned from UD, and therefore cannot respond to this in the place where it was issued, I am doing so here.  Kairosfocus writes:

Continue reading

Asymmetry

When I started this site, I had been struck by the remarkable symmetry between the objections raised by ID proponents to evolution, and the objections raised by ID opponents to ID – both “sides” seemed to think that the other side was motivated by fear of breaking ranks; fear of institutional expulsion; fear of facing up to the consequences of finding themselves mistaken; not understanding the other’s position adequately; blinkered by what they want, ideologically, to be true, etc.  Insulting characterisations are hurled freely in both directions. Those symmetries remain, as does the purpose of this site, which is to try to drill past those symmetrical prejudices to reach the mother-lode of genuine difference.

But two asymmetries now stand out to me:

Continue reading

Andre’s questions

Andre poses some interesting questions to Nick Matzke. I thought I’d start a thread that might help him find some answers.  I’ll have the first go:

Hi Nick

Yes please can we get a textbook on Macro-evolution’s facts!

I’ll make it easy for you;

1.) I want to see a step by step process of the evolution of the lung system.

Google Scholar: evolution of the lung sarcopterygian

Continue reading

Reductionism Redux

As I’ve mentioned, I’m a great fan of Denis Noble, and recommend his book, The Music of Life, but you can get the content pretty well in total in this video of a lecture:

Principle of Systems Biology illustrated using the Virtual Heart

and there’s other material on his site.

So I was interested to see Ann Gauger making a very similar set of points in this piece: Life, Purpose, Mind: Where the Machine Metaphor Fails.

Continue reading

Is Scepticism a Worldview?

During the recent debate with Gpuccio, at one point he claimed that it was my prior adoption of a particular ideology or worldview that led me to exclude design as an explanation – thus reducing our disagreement to a choice of worldviews.  I am not sure I know what a worldview is, but scepticism falls far short of being an ideology.  All it amounts to is the demand for strong evidence before believing anything.  This is just an approach to evidence and is compatible with all sorts of beliefs about the nature of reality.

To take the particular issue of whether life is designed: scepticism does not exclude design. It just asks that a design explanation be evaluated by the same standards as any other explanation. It is not sufficient that other explanations are considered to be inadequate.  If you happen to believe in a designer with the appropriate powers and motivation then you may well accept that as the best explanation for life. If you happen to believe in a designer with evil motivations and sufficient power then that is a perfect explanation for natural disasters. But these are beliefs which need to be separately evaluated with their own evidence. You cannot use the fact that a designer is a good explanation for life as evidence for that designer.

What has Gpuccio’s challenge shown?

(Sorry this is so long – I am in a hurry)

Gpuccio challenged me and others to come up with examples of dFSCI which were not designed. Not surprisingly, the result was that I thought I had produced examples and he thought I hadn’t.  At the risk of seeming obsessed with dFSCI, I want to assess what I (and hopefully others) learned from this exercise.

Lesson 1) dFSCI is not precisely defined.

This is for several reasons. Gpuccio defines dFSCI as:

Continue reading

Gpuccio’s challenge

29th Oct: I have offered a response to Gpuccio’s challenge below.

I think this is worth a new post.

Gpuccio has issued a challenge here and here. I have repeated the essential text below. Others may wish to try it and/or seek their own clarifications. Could be interesting. Something tells me that it is not going to end up in a clear-cut result. But it may clarify the deeply confusing world of dFSCI.

Challenge:

Give me any number of strings of which you know for certain the origin. I will assess dFSCI in my way. If I give you a false positive, I lose. I will accept strings of a predetermined length (we can decide), so that at least the search space is fixed.

Conditions:

a) I would say binary strings of 500bits. Or language strings of 150 characters. Or decimal strings of 150 digits. Something like that. Even a mix of them would be fine.

No problem with that.

b) I will literally apply my procedure. If I cannot easily see any function for the string, I will not go on in the evaluation, and I will not infer design. If you, or any other, wants to submit strings whose function you know, you are free to tell me what the function is, and I will evaluate it thoroughly.

That’s OK. I will supply the function in each case. I note that when I tried to define function precisely you said that the function can be anything the observer wishes provided it is objectively defined, so, for example, “adds up to 1000” would be a function. So I don’t think that’s going to be an issue!

c) I will be cautious, and I will not infer design if I have doubts about any of the points in the procedure.

I am not happy with this. If the string meets the criteria for dFSCI you should be able to infer design. You can’t pick and choose when to apply it. At the very least you must prove that the string does not have dFSCI if you are going to avoid inferring design.

d) Ah, and please don’t submit strings outputted by an algorithm, unless you are ready to consider them as designed if the algorithm is more than 150 bits long. We should anyway agree, before we start, on which type of system and what time span we are testing.

I don’t understand this – a necessity system for generating digital strings can always be expressed as an algorithm, e.g. the Fibonacci series. Otherwise it is just a copy of the string. Also it is not clear how to define how many bits long an algorithm is. Maybe it will suffice if I confine myself to algorithms that can be expressed mathematically in fewer than 20 symbols?
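To illustrate what I mean by a necessity system, here is a minimal sketch (my own example; the choice of Fibonacci and the 100-term cut-off are arbitrary) of a tiny rule whose output is an arbitrarily long digital string:

```python
def fib_bits(n):
    # A tiny deterministic rule ("necessity system"): the first n Fibonacci
    # numbers written in binary and concatenated into one digital string.
    a, b, out = 0, 1, ""
    for _ in range(n):
        out += format(a, "b")
        a, b = b, a + b
    return out

s = fib_bits(100)
print(len(s), "bits of output from a rule far shorter than the string")
```

The rule stays a handful of symbols however long the output grows, which is why pinning down “how many bits long” an algorithm is matters to condition d).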

And anyway, I am afraid we have to wait next week for the test. My time is almost finished.

That’s OK. I need time to think anyway. But also I need to get your clarification on b,c and d.

 

On the Circularity of the Argument from Intelligent Design

There is a lot of debate in the comments to recent posts about whether the argument from ID is circular.  I thought it would be worth calling this out as a separate item.

I plead that participants in this discussion (whether they comment here or on UD):

  • make a real effort to stick to Lizzie’s principles (and her personal example) of respect for opposing viewpoints and politeness
  • confine the discussion to this specific point (there is plenty of opportunity to discuss other points elsewhere and there is the sandbox)

What follows has been covered a thousand times. I simply repeat it in as rigorous a manner as I can, to provide a basis for the ensuing discussion (if any!).

First, a couple of definitions.

A) For the purposes of this discussion I will use “natural” to mean “has no element of design”. I do not mean to imply anything about materialism versus the supernatural or suchlike. It is just an abbreviation for “not-designed”.

B) X is a “good explanation” for Y if and only if:

i) We have good reason to suppose X exists

ii) The probability of Y given X is reasonably high (say 0.1 or higher). There may of course be better explanations for Y where the probability is even higher.

Note that X may include design or be natural.

As I understand it, a common form of the ID argument is:

1) Identify some characteristic of outcomes such as CSI, FSCI or dFSCI. I will use dFSCI as an example in what follows but the point applies equally to the others.

2) Note that in all cases where an outcome has dFSCI and a good explanation of the outcome is known, the good explanation includes design and there is no good natural explanation.

3) Conclude there is a strong empirical relationship between dFSCI and design.

4) Note that living things include many examples of dFSCI.

5) Infer that there is a very strong case that living things are also designed.

This argument can be attacked from many angles, but I want to concentrate on the circularity issue. The key point is that it is part of the definition of dFSCI (and the other measures) that there is no good natural explanation.

It follows that if a good natural explanation is identified then that outcome no longer has dFSCI.  So it is true by definition that all outcomes with dFSCI fall into two categories:

  • A good explanation has been identified and it is design
  • No good explanation has yet been identified

Note that it was not necessary to do any empirical observation to prove this. It must always be the case from the definition of dFSCI that whenever a good explanation is identified it includes design.

I appreciate that as it stands this argument does not do justice to the ID position. If dFSCI were simply a synonym for “no good natural explanation” then the case for circularity would be obviously true. But it incorporates other features (as do its cousins CSI and FSCI). So, for example, dFSCI incorporates attributes such as digital, functional and not compressible – while CSI (in its most recent definition) includes the attribute compressible. So if we describe any of the measures as a set of features {F}, plus the condition that if a good natural explanation is discovered then the measure no longer applies, then it is possible to recast the ID argument this way:

“For all outcomes where {F} is observed, when a good explanation is identified it turns out to be designed and there is no good natural explanation. Many aspects of life have {F}.  Therefore, there is good reason to suppose that design will be a good explanation and there will be no good natural explanation.”

The problem here is that while CSI, FSCI and dFSCI all agree on the “no good natural explanation” clause, they differ widely on {F}. For Dembski’s CSI, {F} is essentially equivalent to compressible (he refers to it as “simple” but defines “simple” mathematically in terms of easily compressible). For FSCI, {F} includes “has a function” and, in some descriptions, “not compressible”. dFSCI adds to FSCI the additional property of being digital.

By themselves, both compressible and non-compressible phenomena clearly can have both natural and designed explanations.  The structure of a crystal is highly compressible. CSI has no other relevant property, and the case for circularity seems to be made at this point. But FSCI and dFSCI add the condition of being functional, which perhaps makes all the difference.  However, the word “functional” also introduces a risk of circularity.  “Functional” usually means “has a purpose”, and a purpose implies a mind.  In archaeology an artefact is functional if it can be seen to fulfil some past person’s purpose – even if that purpose is artistic. So if something has the attribute of being functional, it follows by definition that a mind was involved. This means that by definition it is extremely likely, if not certain, that it was designed (of course, it is possible that it may have a good natural explanation and by coincidence also happen to fulfil someone’s purpose). To declare something to be functional is to declare it is engaged with a purpose and a mind – no empirical research is required to establish that a mind is involved with a functional thing in this sense.

But there remains a way of trying to steer FSCI and dFSCI away from circularity. When the term FSCI is applied to living things, it appears a rather different meaning of “functional” is being used.  There is no mind whose purpose is being fulfilled. It simply means the object (protein, gene or whatever) has a role in keeping the organism alive, much as greenhouse gases have a role in keeping the earth’s surface temperature some 30 degrees warmer than it would otherwise be. In this case, of course, “functional” does not imply the involvement of a mind. But then there are plenty of examples of functional phenomena in this sense which have good natural explanations.

The argument for circularity is more complicated than it may appear and deserves careful analysis rather than vitriol – but if studied in detail it is compelling.

Conflicting Definitions of “Specified” in ID

I see that in the unending “TSZ and Jerad” thread, Joe has written in response to R0bb:

Try to compress the works of Shakespear- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amendable to compression.

A protein sequence is not compressable- CSI.

So please reference Dembski and I will find Meyer’s quote

To save R0bb the effort: using Specification: The Pattern That Signifies Intelligence by William Dembski, which is his most recent publication on specification, turn to page 15, where he discusses the difference between two bit strings, (ψR) and (R). (ψR) is the bit stream corresponding to the integers in binary (clearly easily compressible).  (R), to quote Dembski, “cannot, so far as we can tell, be described any more simply than by repeating the sequence”.  He then goes on to explain that (ψR) is an example of a specified string whereas (R) is not.
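To make the compressibility contrast concrete, here is a rough sketch (my own illustration; zlib is only a crude stand-in for Kolmogorov complexity, and the byte-level counting pattern is an analogue of (ψR), not Dembski’s exact string):

```python
import os
import zlib

n = 4096
patterned = bytes(i % 256 for i in range(n))  # generated by a trivial rule
random_data = os.urandom(n)                   # no short description available

# The patterned string collapses under compression; the random one barely
# shrinks at all.
print("patterned:", len(zlib.compress(patterned, 9)), "bytes")
print("random:   ", len(zlib.compress(random_data, 9)), "bytes")
```

zlib here stands in for “can be described more simply”; the real definition uses Kolmogorov complexity, which no program can compute exactly.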

This conflict between Dembski’s definition of “specified”, which he quite explicitly links to low Kolmogorov complexity (see pp 9–12), and others which take the reverse view appears to be a problem that most of the ID community don’t know about and the rest choose to ignore.  I discussed this with Gpuccio a couple of years ago. He at least recognised the conflict, and his response was that he didn’t care much what Dembski’s view is – which at least is honest.

The LCI and Bernoulli’s Principle of Insufficient Reason

(Just found I can post here – I hope it is not a mistake. This is a slightly shortened version of a piece which I have published on my blog. I am sorry it is so long but I struggle to make it any shorter. I am grateful for any comments. I will look at UD for comments as well – but not sure where they would appear.)

I have been rereading Bernoulli’s Principle of Insufficient Reason and Conservation of Information in Computer Search by William Dembski and Robert Marks. It is an important paper for the Intelligent Design movement, as Dembski and Marks make liberal use of Bernoulli’s Principle of Insufficient Reason (BPoIR) in their papers on the Law of Conservation of Information (LCI).  For Dembski and Marks BPoIR provides a way of determining the probability of an outcome given no prior knowledge. This is vital to the case for the LCI.

The point of Dembski and Marks’ paper is to address some fundamental criticisms of BPoIR. For example, J. M. Keynes (along with many others) pointed out that BPoIR does not give a unique result. A well-known example is applying BPoIR to the specific volume of a given substance. If we know nothing about the specific volume then someone could argue using BPoIR that all specific volumes are equally likely. But equally someone could argue using BPoIR that all specific densities are equally likely. However, as one is the reciprocal of the other, these two assumptions are incompatible. This is an example based on continuous measurements, and Dembski and Marks refer to it in the paper. However, having referred to it, they do not address it. Instead they concentrate on examples of discrete measurements, where they offer a sort of response to Keynes’ objections. What they attempt to prove is a rather limited point about discrete cases such as a pack of cards or a protein of a given length. It is hard to write their claim concisely – but I will give it a try.

Imagine you have a search space such as a normal pack of cards and a target such as finding a card which is a spade. Then it is possible to argue by BPoIR that, because all cards are equal, the probability of finding the target with one draw is 0.25. Dembski and Marks attempt to prove that, in cases like this, if you decide to do a “some to many” mapping from this search space into another space then you have at best a 50% chance of creating a new search space where BPoIR gives a higher probability of finding a spade. A “some to many” mapping means some different way of viewing the pack of cards, so that it is not necessary that all of them are considered and some of them may be considered more often than others. For example, you might take a handful out of the pack at random and then duplicate some of that handful a few times – and then select from what you have created.

There are two problems with this.

1) It does not address Keynes’ objection to BPoIR

2) The proof itself depends on an unjustified use of BPoIR.

But before that, a comment on the concept of no prior knowledge.

The Concept of No Prior Knowledge

Dembski and Marks’ case is that BPoIR gives the probability of an outcome when we have no prior knowledge. They stress that this means no prior knowledge of any kind and that it is “easy to take for granted things we have no right to take for granted”.  However, there are deep problems associated with this concept. The act of defining a search space and a target implies prior knowledge. Consider finding a spade in a pack of cards. To apply BPoIR you need to know, at minimum, that a card can be one of four suits, that 25% of the cards have the suit of spades, and that the suit does not affect the chances of a card being selected. The last point is particularly important. BPoIR provides a rationale for claiming that the probabilities of two or more events are the same. But the events must differ in some respects (even if it is only a difference in when or where they happen) or they would be the same event. To apply BPoIR we have to know (or assume) that these differences are not relevant to the probability of the events happening. We must somehow judge that the suit of the card, the heads or tails symbol on the coin, or the choice of DNA base pair is irrelevant to the chances of that card, coin toss or base pair being selected. This is prior knowledge.

In addition, the more we try to dispense with assumptions and knowledge about an event, the more difficult it becomes to decide how to apply BPoIR. Another of Keynes’ examples is a bag of 100 black and white balls in an unknown ratio of black to white. Do we assume that all ratios of black to white are equally likely, or do we assume that each individual ball is equally likely to be black or white? Either assumption is equally justified by BPoIR, but they are incompatible. One results in a uniform probability distribution for the number of white balls from zero to 100; the other results in a binomial distribution which greatly favours roughly equal numbers of black and white balls.
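A few lines of arithmetic (a sketch of my own; the numbers follow directly from the two readings) show just how different the two distributions are:

```python
from math import comb

N = 100

# Reading 1 of BPoIR: all ratios are equally likely.
p_ratio = [1 / (N + 1)] * (N + 1)

# Reading 2 of BPoIR: each ball is independently black or white with
# probability 1/2, giving a binomial distribution over the number of whites.
p_ball = [comb(N, k) / 2**N for k in range(N + 1)]

print(f"P(50 white):       ratios {p_ratio[50]:.4f}  per-ball {p_ball[50]:.4f}")
print(f"P(under 10 white): ratios {sum(p_ratio[:10]):.4f}  per-ball {sum(p_ball[:10]):.2e}")
```

Under the first reading, fewer than 10 white balls has probability about 0.1; under the second it is vanishingly unlikely. Both readings are “BPoIR with no prior knowledge”.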

Now let us look at the two problems with the proof in Dembski and Marks’ paper.

The Proof does not Address Keynes’ objection to BPoIR

Even if the proof were valid, it would do nothing to show that the assumption of BPoIR is correct. All it would show (if correct) is that if you do not use BPoIR then you have a 50% or less chance of improving your chances of finding the target. The fact remains that there are many other assumptions you could make, and some of them greatly increase your chances of finding the target. There is nothing in the proof that in any way justifies assuming BPoIR or giving it any kind of privileged position.

But the problem is even deeper. Keynes’ point was not that there are alternatives to using BPoIR – that’s obvious. His point was that there are different incompatible ways of applying BPoIR. For example, just as with the example of black and white balls above, we might use BPoIR to deduce that all ratios of base pairs in a string of DNA are equally likely. Dembski and Marks do not address this at all. They point out the trap of taking things for granted but fall foul of it themselves.

The Proof Relies on an Unjustified Use of BPoIR

The proof is found in appendix A of the paper and this is the vital line:

[The equation is displayed as an image in the original post.]

This is the probability that a new search space created from an old one will include k members which were part of the target in the original search space. The equation holds true if the new search space is created by selecting elements from the old search space at random; for example, by picking a random number of cards at random from a pack. It uses BPoIR to justify the assumption that each unique way of picking cards is equally likely. This can be made clearer with an example.

Suppose the original search space comprises just the four DNA bases, one of which is the target. Call them x, y, z and t. Using BPoIR, Dembski and Marks would argue that all of them are equally likely and therefore the probability of finding t with a single search is 0.25. They then consider all the possible ways you might take a subset of that search space. This comprises:

Subsets with:

  • no items (the empty set)
  • just one item: x, y, z, t
  • two items: xy, xz, yz, tx, ty, tz
  • three items: xyz, xyt, xzt, yzt
  • four items: xyzt

A total of 16 subsets.

Their point is that if you assume each of these subsets is equally likely (so the probability of any one of them being selected is 1/16) then 50% of them have a probability of finding t which is greater than or equal to the probability in the original search space (i.e. 0.25). To be specific, the new search spaces where the probability of finding t is greater than or equal to 0.25 are t, tx, ty, tz, xyt, xzt, yzt and xyzt. That is 8 out of 16, which is 50%.
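The count is easy to verify by brute force (a sketch of my own that simply enumerates the 16 subsets):

```python
from itertools import combinations

members = "xyzt"
subsets = [c for r in range(5) for c in combinations(members, r)]
assert len(subsets) == 16

# A uniform draw from a subset containing t finds t with probability
# 1/len(subset); the empty set and subsets without t never find it.
good = [s for s in subsets if "t" in s and 1 / len(s) >= 0.25]
print(f"{len(good)} of {len(subsets)} subsets")  # 8 of 16, i.e. 50%
```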

But what is the justification for assuming each of these subsets is equally likely? Well, it requires using BPoIR – the very principle the proof is meant to defend. And even if you grant the use of BPoIR, Keynes’ concerns apply. There is more than one way to apply BPoIR, and not all of them support Dembski and Marks’ proof. Suppose, for example, the subset was created by the following procedure:

    • Start with one member selected at random as the subset
    • Toss a die:
      • If it is two or less, stop and use the current set as the subset
      • If it is higher than two, add another member selected at random to the subset
    • Continue tossing until a throw is two or less or all four members are in the subset

This gives a completely different probability distribution.

The probability of:

single item subset (x,y,z, or t) = 0.33/4 = 0.083

double item subset (xy, xz, yz, tx, ty, or tz) = 0.66*0.33/6 = 0.037

triple item subset (xyz, xyt, xzt, or yzt) = 0.66*0.66*0.33/4 = 0.037

four item subset (xyzt) = 0.296

So the combined probability of the subsets where the probability of selecting t is ≥ 0.25 (t, tx, ty, tz, xyt, xzt, yzt, xyzt) = 0.083 + 3*(0.037) + 3*(0.037) + 0.296 = 0.60 (to 2 decimal places), which is bigger than the 0.5 calculated using Dembski and Marks’ assumptions. In fact, using this method, the probability of getting a subset where the probability of selecting t is ≥ 0.25 can be made as close to 1 as desired by increasing the probability of adding a member. All of these methods treat all four members of the set equally and are as justified under BPoIR as Dembski and Marks’ assumption.
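A quick simulation (a sketch of my own; the 1/3 stopping probability encodes the “two or less” die rule) reproduces the 0.60 figure:

```python
import random

members = ["x", "y", "z", "t"]

def draw_subset(p_stop=1/3):
    # Start with one random member; before each further addition, a die roll
    # of two or less (probability 1/3) stops the process.
    pool = members[:]
    random.shuffle(pool)
    subset = {pool.pop()}
    while pool and random.random() >= p_stop:
        subset.add(pool.pop())
    return subset

trials = 200_000
hits = sum(1 for _ in range(trials)
           if "t" in (s := draw_subset()) and 1 / len(s) >= 0.25)
print(hits / trials)  # about 0.60, versus 0.50 under the uniform assumption
```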

Conclusion

Dembski and Marks’ paper places great stress on BPoIR being the way to calculate probabilities when there is no prior knowledge. But their proof itself includes prior knowledge. It is doubtful whether it makes sense to eliminate all prior knowledge, but if you attempt to eliminate as much prior knowledge as possible, as Keynes does, then BPoIR proves to be an illusion. It does not give a unique result, and some of the results are incompatible with their proof.

The Law(?) of Conservation of Information

(Preamble: I apologize in advance for cluttering TSZ with these three posts. There are very few people on either side of the debate that actually care about the details of this “conservation of information” stuff, but these posts make good on some claims I made at UD.)

For the past three years Dembski has been promoting his Law of Conservation of Information (LCI), most recently here. The paper he most often promotes is this one, which begins as follows:

Laws of nature are universal in scope, hold with unfailing regularity, and receive support from a wide array of facts and observations. The Law of Conservation of Information (LCI) is such a law.

Dembski hasn’t proven that the LCI is universal, and in fact he claims that it can’t be proven, but he also claims that to date it has always been confirmed. He doesn’t say whether he has actually tried to find counterexamples, but the reality is that they are trivial to come up with. This post demonstrates one very simple counterexample.

Definitions

First we need to clarify Dembski’s terminology. In his LCI math, a search is described by a probability distribution over a sample space Ω. In other words, a search is nothing more than an Ω-valued random variable. Execution of the search consists of a single query, which is simply a realization of the random variable. The search is deemed successful if the realized outcome resides in target T ⊆ Ω. (We must be careful to not read teleology into the terms search, query, and target, despite the terms’ connotations. Obviously, Dembski’s framework must not presuppose teleology if it is to be used to detect design.)

If a search’s parameters depend on the outcome of a preceding search, then the preceding search is a search for a search. It’s this hierarchy of two searches that is the subject of the LCI, which we can state as follows.

Given a search S, we define:

  • q as the probability of S succeeding
  • p2 as the probability that S would succeed if it were a uniform distribution
  • p1 as the probability that a uniformly distributed search-for-a-search would yield a search at least as good as S

The LCI says that p1 ≤ p2/q.

Counterexample

In thinking of a counterexample to the LCI, we should remember that this two-level search hierarchy is nothing more than a chain of two random variables. (Dembski’s search hierarchy is like a Markov chain, except that each transition is from one state space to another, rather than within the same state space.) One of the simplest examples of a chain of random variables is a one-dimensional random walk. Think of a system that periodically changes state, with each state transition represented by a shift to the left or to the right on a state diagram. If we know at a certain point in time that it is in one of, say, three states, namely n-1, n, or n+1, then after the next transition it will be in n-2, n-1, n, n+1, or n+2.

Assume that the system is always equally likely to shift left as to shift right, and let the “target” be defined as the center node n. If the state at time t is, say, n-1, then the probability of success q is 1/2. Of the three original states, two (namely n-1 and n+1) yield this probability of success, so p1 is 2/3. Finally, p2 is 1/5 since the target consists of only one of the final five states. The LCI says that p1 ≤ p2/q. Plugging in our numbers for this example, we get 2/3 ≤ (1/5)/(1/2), which is clearly false.
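The arithmetic is easy to check mechanically; here is a small sketch (my own, using exact fractions) that reproduces the numbers above:

```python
from fractions import Fraction

# From each starting state the walk steps left or right with probability 1/2;
# success means landing on the centre node n.
q_by_state = {"n-1": Fraction(1, 2),
              "n":   Fraction(0, 1),   # from n the walk always leaves n
              "n+1": Fraction(1, 2)}

q  = q_by_state["n-1"]                                       # search in hand
p1 = Fraction(sum(v >= q for v in q_by_state.values()), 3)   # 2 of 3 states
p2 = Fraction(1, 5)            # uniform over the five reachable final states

print(f"p1 = {p1}, p2/q = {p2 / q}, LCI holds: {p1 <= p2 / q}")
# p1 = 2/3, p2/q = 2/5, LCI holds: False
```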

Of course, the LCI does hold under certain conditions. To show that the LCI applies to biological evolution, Dembski needs to show that his mathematical model of evolution meets those conditions. This model would necessarily include the higher-level search that gave rise to the evolutionary process. As will be shown in the next post, the good news for Dembski is that any process can be modeled such that it obeys the LCI. The bad news is that any process can also be modeled such that it violates the LCI.

Is Any Form Of Atheism Rationally Justifiable?

Definition of God: First cause, prime mover, objective source of human purpose (final cause) and resulting morality, source of free will; omnipotent, omniscient and omnipresent inasmuch as principles of logic allow. I am not talking in particular about any specifically defined religious interpretation of god, such as the Christian or Islamic god.

Definition: Intellectual dishonesty occurs when (1) one deliberately mischaracterizes their position or view in order to avoid having to logically defend their actual views; and/or (2) someone argues, or makes statements, against a position while remaining willfully ignorant about that position; and/or (3) someone categorically and/or pejoratively dismisses all existent and/or potential evidence in favor of a conclusion they claim to be neutral about, whether they are familiar with that evidence or not.

Continue reading

Is purpose necessary to acquire any apparently purposeful effects?

For the purposes of this discussion:

Chance = non-teleological causes that happen to result in particular effects via regularities referred to as “lawful” and stochastic in nature.

Purpose = teleological causes that are intended to result in particular effects; the organization of causes towards a pre-defined future goal.

My question is: can chance causes generate all of the effects normally associated with purpose, but without purpose? IOW, is purpose necessary to produce all, most, or some apparently purposeful effects, or is purpose, in effect, only an associated sensation, by-product or side-effect that isn’t necessary to the generation of any particular effect normally associated with it?

The LCI and Bertrand’s Box

Tom English has recommended that we read Dembski and Marks’ paper on their Law of Conservation of Information (not to be confused with Dembski’s previous LCI from his book No Free Lunch). Dembski has also touted the paper several times, and I too recommend it as a stark display of the authors’ thinking.

Most people won’t take the time to carefully read a 34-page paper, but I submit that the authors’ core concept of “conservation of information” is very easily understood if we avoid equivocal and misleading terms such as information, search, and target. I’ll illustrate it with a setup borrowed from Joseph Bertrand.

The “Bertrand’s box” scenario is as follows: We’re presented with three small, outwardly identical boxes, each containing two coins. One has two silver coins, one has two gold coins, and one has a silver coin and a gold coin. We’ll call the boxes SS, GG, and SG. We are to randomly choose a box, and then randomly pull a coin from the chosen box.
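For anyone who hasn’t met the setup, here is a quick simulation (a sketch of my own) of the classic result usually associated with it, not necessarily the use this post goes on to make of it:

```python
import random

boxes = {"SS": ["S", "S"], "GG": ["G", "G"], "SG": ["S", "G"]}

first_gold = both_gold = 0
for _ in range(200_000):
    coins = boxes[random.choice(list(boxes))][:]
    random.shuffle(coins)
    if coins[0] == "G":
        first_gold += 1
        both_gold += coins[1] == "G"

# Given that the first coin drawn is gold, the chance the second is also
# gold is 2/3, not 1/2: only GG and SG can yield a gold first draw, and
# GG yields one twice as often.
print(both_gold / first_gold)  # about 0.667
```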

Continue reading

A Few Comments on A Vivisection of the ev Computer Organism

I’ll follow Patrick’s lead and offer a few comments on another paper from the Evolutionary Informatics Lab. The paper analyzes Tom Schneider’s ev program, and while there are several problems with the analysis, I’ll focus on the first two sentences of the conclusions:

The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search.

To explain the authors’ terminology: active information is defined quantitatively as a measure of relative search performance — to say that something provides N bits of active information is to say that it increases the probability of success by a factor of 2^N. The Hamming oracle is a function that reports the Hamming distance between its input and a fixed target. The perceptron structure is another function, whose details aren’t important to this post. Figure 1 shows how these three components are connected in an iterative feedback loop.
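A minimal sketch of the two defined quantities (my own illustration; the five-bit target string and the example probabilities are made-up numbers, not values from the paper):

```python
from math import log2

def hamming_oracle(target):
    # Reports how many positions of the query differ from the fixed target.
    def oracle(query):
        return sum(a != b for a, b in zip(query, target))
    return oracle

def active_information(p_success, p_uniform):
    # N bits of active information = success is 2**N times more probable
    # than under blind uniform sampling.
    return log2(p_success / p_uniform)

oracle = hamming_oracle("10110")
print(oracle("10010"))                  # 1: one position differs
print(active_information(0.5, 1 / 32))  # 4.0 bits of active information
```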

Continue reading

Natural Selection- What is it and what does it do?

Well, let’s look at what natural selection is:

 “Natural selection is the result of differences in survival and reproduction among individuals of a population that vary in one or more heritable traits.” Page 11 “Biology: Concepts and Applications” Starr fifth edition

“Natural selection is the simple result of variation, differential reproduction, and heredity—it is mindless and mechanistic.” UC Berkeley

“Natural selection is the blind watchmaker, blind because it does not see ahead, does not plan consequences, has no purpose in view.” – Dawkins, in “The Blind Watchmaker”

“Natural selection is therefore a result of three processes, as first described by Darwin:

Variation
Inheritance
Fecundity

which together result in non-random, unequal survival and reproduction of individuals, which results in changes in the phenotypes present in populations of organisms over time.” – Allen McNeill, professor of introductory biology and evolution at Cornell University

OK, so it is a result of three processes – i.e. an output. But is it really non-random, as Allen said? Nope: whatever survives to reproduce, survives to reproduce. And that can be any number of variations that exist in a population.

Continue reading