Michael Behe is best known for coining the phrase “irreducible complexity,” but I think his likening of biological systems to Rube Goldberg machines is a better way to frame the problem of evolving the black boxes and other extravagances of the biological world.
But even before getting to the question of ID in biology, I’d like to explore philosophical and complexity questions of man-made Rube Goldberg machines. I will, however, occasionally attempt to show the relevance of the questions I raise regarding man-made Rube Goldberg machines to God-made Rube Goldberg machines in biology.
Analysis of man-made Rube Goldberg machines raises philosophical questions as to what may or may not constitute good or bad design and also how we make statements about the complexity of man-made systems. Consider this Rube Goldberg machine, one of my favourites:
Is the above Rube Goldberg machine a good or bad man-made design? How do we judge what is good? Does our philosophical valuation of the goodness or badness of a Rube Goldberg machine have much to say about the exceptional physical properties of the system relative to a random pile of parts?
Does it make sense to value the goodness or badness of the Rube Goldberg machine’s structure based on the “needs” and survivability of the Rube Goldberg machine? Does the question even make sense?
If living systems are God-made Rube Goldberg machines, then it would seem to be an inappropriate argument against the design of the system to say "its poor design for survivability, its fragility, and its almost self-destructive properties imply there is no designer of the system."
The believability of biological design is subjective to some extent, inasmuch as some would insist that in order to believe in design, they must see the designer in action. I respect that, but for some of us, when a system is far from physical expectation, design is quite believable.
But since we cannot agree on the question of ID in biology, can we find any agreement about the level of specificity and complexity in man-made Rube Goldberg machines? I would hope so.
What can be said of certain man-made systems, in terms of physics and mathematics, is that they are far from what would be expected of ordinary non-specific processes like random placement of parts. That is, the placement of parts required to effect a given activity or structure is highly specific: the system evidences high specificity.
My two favourite illustrations of high specificity situations are:
1. a domino standing on its edge on a table.
2. cards connected together to form a house of cards. The orientation and positioning of the cards are highly specific.
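These two illustrations suggest a crude way to put a number on specificity. As a toy sketch (my own illustration, not a calculation from the ID literature; the one-degree tolerance and 180-degree range below are purely assumed figures), suppose each part’s orientation must independently land within a narrow angular tolerance of its required angle, with orientations otherwise uniformly random:

```python
def random_assembly_probability(n_parts: int,
                                tolerance_deg: float = 1.0,
                                range_deg: float = 180.0) -> float:
    """Probability that a random, independent placement of n_parts puts
    every part within +/- tolerance_deg of its required angle, when each
    orientation is drawn uniformly over range_deg. Tolerance and range
    values are illustrative assumptions, not measured quantities."""
    per_part = (2.0 * tolerance_deg) / range_deg
    return per_part ** n_parts

# A single domino balanced within +/- 1 degree of vertical:
print(random_assembly_probability(1))   # about 0.011
# A 15-card house, every card within +/- 1 degree of its required lean:
print(random_assembly_probability(15))  # about 5e-30
```

The point of the toy model is only that narrow tolerances compound multiplicatively, so even modest per-part specificity makes the assembled whole astronomically far from random expectation.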
We have high specificity in certain engineering realms where the required tolerances of the parts are extremely narrow. In biology, there are high-specificity parts (i.e. you can’t use a hemoglobin protein when an insulin protein is required to effect a chemical transaction). I think the specificity of individual interacting parts can occasionally be estimated, but one has to be blessed enough to be dealing with a system that is tractable.
In addition to specificity of parts we have the issue of the complexity of the system made of such high specificity parts. I don’t think there is any general procedure, and in many cases it may not be possible to make a credible estimate of complexity.
A very superficial first-pass estimate of complexity would be simply tallying the number of parts that have the possibility of being or not being in the system. This is akin to the way the complexity of some software systems is estimated by counting the number of conditional decisions (if statements, while statements, for statements, case statements, etc.).
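The software analogy can be made concrete. As a minimal sketch (assuming Python source as the system under analysis, and counting only a few branching constructs), the standard-library `ast` module can tally decision points:

```python
import ast

def decision_count(source: str) -> int:
    """Tally branching constructs (if/while/for statements and
    conditional expressions) in Python source. Each decision point is a
    place where the system can take one of several paths, so the count
    serves as a crude first-pass complexity estimate of the kind
    described above."""
    tree = ast.parse(source)
    branches = (ast.If, ast.IfExp, ast.While, ast.For)
    return sum(isinstance(node, branches) for node in ast.walk(tree))

sample = """
def f(x):
    if x > 0:                # decision 1
        for i in range(3):   # decision 2
            x -= i
    while x < 0:             # decision 3
        x += 1
    return x
"""
print(decision_count(sample))  # 3
```

Like the parts tally, this measure is deliberately superficial: it counts branch points without weighing how the branches interact, which is exactly why it is only a first pass.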
In light of these considerations we might then make statements about our estimate of how exceptional a system is in purely mathematical, physical, and/or chemical terms, provided the system is tractable.
I must add, if one is able to make credible estimates of specificity and complexity, why would one need to do CSI (complex specified information) calculations at all? CSI doesn’t deal with the most important issues anyway! CSI just makes an incomprehensible mess of trying to analyze the system. CSI is superfluous, unnecessary, and confusing. This confusion has led some to relate the CSI of a cake to the CSI of a recipe, like Joe G over at “Intelligent Reasoning”.
Finally, I’m not asserting there are necessarily right or wrong answers to the questions I raised. The questions I raise are intended to highlight something of the subjectivity of how we value good or bad in design as well as how we estimate specificity and complexity.
If people come to the table with differing measures of what constitutes good, bad, specified, complex and improbable, they will not agree about man-made designs, much less about God-made designs.
I’ve agreed with many of the TSZ regulars about dumping the idea of CSI. My position has ruffled many of my ID associates since I so enthusiastically agreed with Lizzie, Patrick (Mathgrrl), and probably others here. My negative view of CSI (among my other heresies) probably contributed to my expulsion from Arrington’s echo chamber.
On the other hand, with purely man-made designs, particularly Rube Goldberg machines, I think there is a legitimate place for questions about the specificity of system parts and the overall complexity of engineered systems. Whether such metrics are applicable to God-made designs in biology is a separate question.