In a recent OP at Uncommon Descent, Vincent Torley (vjtorley) defends a version of libertarian free will based on the notion of top-down causation. The dominant view among physicists (which I share) is that top-down causation does not exist, so Torley cites an essay by cosmologist George Ellis in defense of the concept.
Vincent is commenting here at TSZ, so I thought this would be a good opportunity to engage him in a discussion of top-down causation, with Ellis’s essay as a starting point. Here’s a key quote from Ellis’s essay to stimulate discussion:
However hardware is only causally effective because of the software which animates it: by itself hardware can do nothing. Both hardware and software are hierarchically structured, with the higher level logic driving the lower level events.
I think that’s wrong, but I’ll save my argument for the comment thread.
No, to argue for dualism you would need to show that the entity in question continues to exist in the absence of any physical realization.
As dazz says, it’s the ideas that are physically instantiated. I can envision an elephant lying on its back and juggling bowling balls with all four feet while it recites Shakespeare. The idea exists and is physically instantiated in my brain, but the object of my thought does not exist.
Ideas are not things. They are activities of the brain.
My point is that all of those things, including the “abstract valuation” itself, are physical phenomena. Non-physical money would have no causal power. How could it interact with the physical world?
Well, in the end it all amounts to physical interactions, by any reasonable judgment.
Of course one can always hold out for something non-physical happening, but that’s not what the evidence in hand indicates.
More precisely (to my way of thinking, anyway), dualism or immaterialism would require showing that the entity in question has causal efficacy without being exemplified at any spatio-temporal location.
Whether there are objects that have no causal powers is a separate (though related) question. But one could think that an abstract object still exists even if it is not exemplified anywhere — it just wouldn’t have any causal efficacy independent of its exemplifications.
The issue gets tricky when we turn to the epistemology of metaphysics and ask how we could verify such a claim. No appeal to “intuition” is going to work. We would need some further argument as to how we know that our intuitions are reliable in that case. That problem either puts the need for verification back on the table, or else we just stipulate that we know by intuition that our intuitions are reliable. It’s hard not to see that second option as simply begging the question.
On the other hand, both the problem of “intentional inexistence” (that we can conceive of objects that don’t exist) and the fact that concepts aren’t described in terms of the common and proper sensibles raise rather serious obstacles for a naive physicalism and for a naive empiricism.
The problem for naive physicalism is that intentionality is a relation, and relations usually require that the relata are actual. But when I conceive of a mere possibility, what am I doing? It’s a problem that Brentano, Meinong, Husserl, Russell, Quine, and Sellars all wrestled with, and I don’t think there’s any obvious solution.
And if the naive empiricist insists that everything that exists can be described in terms of the common and proper sensibles, she’s going to have a real problem, because thoughts and judgments just can’t be described that way.
As far as I can see, one attractive alternative to Cartesianism is to deny that language is simply the medium in which concepts get packaged for communication and insist rather that language is the very being of concepts. But then one wants to know if that means that animals without language lack concepts, and that’s a bitter pill to swallow.
Identifying concepts with brain-states is also problematic, for all sorts of reasons well explored by Putnam and also (I would suggest) by Dennett. Throwing Dennett into the mix could be tricky, but I mention him because of the distinction he makes between the personal and subpersonal. “Concept” is a personal-level concept; it’s part of the whole framework of persons as beings that can think and judge. “Brain state” is a subpersonal-level concept; it’s part of our model of the underlying machinery. Conflating the personal and the subpersonal is the kind of category mistake I can understand!
What, then, are concepts? My best response for right now (and this is not a worked-out view, but I hope the beginning of one) is that concepts are habits, or if you prefer, regularities of behavior. My cat has a concept of his food bowl just because his sensory awareness of his food bowl is reliably correlated with a suite of behavioral responses to it. But I do not think we can specify what his concept of it is, since — in the absence of language — his concepts are ineluctably private and unknowable, even to him. (Contra T. S. Eliot, he does not even know his own name. He responds reliably to the sound of it when called, but he does not know that that sound is a name.)
Ellis himself seems ambivalent. On the one hand, he argues for non-physical causes:
On the other hand, he seems to acknowledge that causation can only happen via the physical:
My claim is that the “lower-level physical operations” are causally sufficient. The higher-level “cause” is really just a redescription, not an ontologically distinct cause in itself.
Yeah, I don’t understand why Ellis thinks that software is non-physical. The software/hardware distinction is a distinction in kinds of description of what computers do; it’s not a distinction between kinds of thing. A program without a computer doesn’t do anything. Considered by itself, a program is just a deductively valid argument in an artificial language.
(Please correct me if I’m wrong; I’m almost entirely ignorant of computer theory.)
You have the basic idea. Imagine an old-fashioned telephone switchboard with a switchboard operator, pulling plugs from here and plugging them into there. Essentially, the operator was rewiring the system, and each phone call represented a new wiring configuration.
Software is no different in concept. Each instruction constitutes a rewiring. The CPU is designed to handle extremely rapid rewiring, and the instructions represent a rewiring of the memory locations where they reside. This approach makes the system highly flexible, but it’s all hardware. Computer languages are just tools to make complex rewiring sequences easier to conceptualize and create.
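Flint’s point — that each instruction is, in effect, a momentary rewiring of the machine — can be sketched in a toy model. Nothing here corresponds to a real instruction set; the operation names and register names are illustrative only:

```python
# Toy sketch of "instructions as rewiring" (not any real ISA).
# Each instruction selects which operation is "wired" between two
# registers for one step, echoing the switchboard-operator analogy.

WIRINGS = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
    "AND": lambda a, b: a & b,
}

def run(program, registers):
    """Execute each instruction by re-wiring the datapath for one step."""
    for op, dst, src in program:
        # The instruction chooses the wiring; the data then flows through it.
        registers[dst] = WIRINGS[op](registers[dst], registers[src])
    return registers

regs = run([("ADD", "r0", "r1"), ("SUB", "r0", "r1")], {"r0": 5, "r1": 3})
print(regs["r0"])  # back to 5: add 3, then subtract 3
```

The point of the sketch is that the “program” has no effect except by selecting physical configurations; remove the machine (the `run` loop and the wirings) and the instruction list does nothing at all.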
That makes sense, and it’s a nice metaphor. Thank you!
I was thinking about the question, “what would a list of instructions all by itself be like?” I haven’t done any programming since BASIC in elementary school and LOGO in middle school, but my sense (at the time and now) is that the “if, then” structure at the heart of programming is just logic. If that’s right, then the genius of Turing, Von Neumann, and the others was to imagine and build a machine that could implement any logical argument. But a program by itself is just an argument: a set of premises and a conclusion in an artificial language. If the argument is deductively valid, the program works; if it isn’t, then it has a bug (or crashes, etc.).
And since a deductively valid argument isn’t the right sort of thing to have any causal efficacy, it makes no sense to say that software has its own causal efficacy. Which means in turn that the software can’t be initiating a causal chain, and hence there’s no top-down causation from software to hardware.
While Ellis has some nice discussions of how initial boundary conditions establish parameters for the behavior of components in various kinds of causal systems, the software/hardware relation is just a different kind of thing altogether.
From what I can tell Ellis is just mistaken in treating the macro-to-micro constraint and the software/hardware relation as both kinds of “top-down causation,” and neither is of any help in making sense of intentional action, let alone showing that intentional action requires an immaterial will.
While Keiths and I still disagree about reduction — he thinks that I’m conflating reduction in principle and reduction in practice, and I’m arguing that we’re not entitled to make that distinction in the first place — I think we would nevertheless agree that Ellis’s basic argument is fatally flawed.
If there’s any good argument for an immaterial will or top-down causation, it won’t be found in Ellis’s essay.
At a lower level, the CPU has a register of flags. These can be viewed as individual testable bits that are cleared and set depending on the results of prior instructions. There is a carry flag, an overflow flag, a parity flag, a zero flag, and so on. Many instructions test these flags to determine flow of control. What you are calling “if-then” at a logical level is simply an instruction that executes only if the appropriate flag states apply.
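A minimal sketch of that flag-driven flow of control, with illustrative names rather than any real CPU’s flag set: arithmetic sets the flag as a side effect, and the “branch” merely tests it.

```python
# Sketch of flag-based conditional execution (names are illustrative).

def countdown(n):
    flags = {"zero": False}
    trace = []
    while True:
        n -= 1                    # "SUB": the arithmetic result...
        flags["zero"] = (n == 0)  # ...sets the zero flag as a side effect
        trace.append(n)
        if flags["zero"]:         # "JZ": branch taken only if the flag is set
            break
    return trace

print(countdown(3))  # [2, 1, 0]
```

The “if-then” the programmer sees is, at this level, just an instruction whose effect is conditional on a stored bit.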
What makes this less than entirely deterministic is IO – that is, data flowing into, around, and out of the computer, ultimately interfacing with the outside world. At the lowest software level, data not provided in the program (as values in memory locations at start time) is determined as voltage levels at pins on chips.
The logical argument is an abstraction in your mind, many levels up from the hardware. I tend to think of software as instantiating a process, rather than some sort of syllogistic argument.
Yes, that’s about right.
In a computer, the actual causal chain is initiated by a clock pulse. The “clock” emits a series of pulses at a regular rate, and those trigger action. The rewiring of the pathways that Flint described directs those pulses. But the clock initiates them.
At the physical level, it isn’t really doing logic. It’s an electrical appliance redirecting clock pulses. But it is done systematically enough that it very accurately fits our logic models.
I can sort of see where his mistake comes from. The language we use when talking about software encourages that kind of thinking. We speak of burning software onto a DVD, of loading it into memory, or of typing it into a computer. The implication is that it exists apart from the medium that carries it, since it can be transferred from one medium to another vastly different one. And if it’s separate from the medium, then it must be non-physical, right?
It’s a mistake, but an understandable one.
I would characterize it roughly as a distinction between parts of a computing system that are easily changed and highly flexible versus those that are fixed and inflexible. It’s a spectrum, with firmware falling somewhere in the middle. Software, firmware and hardware are all physical, but they differ greatly in their flexibility.
True, though I don’t think people like Ellis believe that it does.
If programs were arguments, they would contain premises, deductive steps, and conclusions. That’s not the case. Programs are just sequences of instructions and data packaged together for execution on a computer.
No, for the reasons I just mentioned.
A program doesn’t argue for its correctness; it just runs. An argument for a program’s correctness would have a very different structure from the program itself.
I think it does make sense to speak of software’s causal efficacy. Software is a physical phenomenon, after all, and so it can have physical effects. Yet if Ellis were right, and software were non-physical, then it would make no sense to speak of its causal power. You’d run into dualism’s interaction problem all over again, though in a different context.
Software has causal power, but it isn’t downward causal power. Software can be viewed and described at the physical level, or it can be viewed and described at higher levels of abstraction. But when we describe it at a higher level we are just redescribing something already present at the lower levels. We aren’t introducing ontologically distinct causes.
I can’t see why not. We distinguish in-principle possibilities from in-practice possibilities all the time. What’s special about reduction in that regard?
By that logic, brains aren’t really doing logic either. They’re just biological organs composed of cells operating according to the laws of physics.
I think that’s a mistaken view. Computers and brains both do logic, albeit imperfectly, in the only way that logic can be done: via physics. Both computers and brains can make logical mistakes, depending on conditions, but much of the time they’re right.
This may be irrelevant, but can we know all the books that can be written?
When I think of reductionism, I ask: can we know what is possible by knowing the rules by which things are assembled?
I agree. Brains are not doing logic.
People do logic. Brains don’t.
Hmm, no, physics cannot do logic.
Logic is conceptual, not physical.
Logic is an activity or process.
I think a program is more like a recipe written for a cook who only understands a very restricted subset of a natural language and who takes each instruction literally.
One can say that it is the actions of a particular cook carrying out a recipe in a particular language that causes a meal to be prepared.
But if the recipe changed, the meal would be different. What is wrong with considering that counterfactual possibility evidence of causation by the recipe itself, independent of any realization of that recipe in a particular language and execution by a particular cook?
Even more importantly, would you give your mother’s secret recipe for apple pie to the FBI? And is it simply a coincidence that the leader of Apple is called Tim Cook? Or is there some deeper cause? Inquiring minds want to know. Or should I have said that certain neural patterns must be instantiated? Or perhaps certain quantum fields must be measured?
I think as long as you are considering closed systems, causation looks like thermodynamics. And you get people saying that human behavior is completely determined; there is no point in attributing “free will” to people, particularly in matters of criminal law.
If you allow feedback, and a system that changes its behavior as a result of feedback, you can think about feedback as a cause.
There’s no change to the way physics works, but there is a significant change to the way we describe what is happening.
We have programs that modify themselves as a result of feedback. One could argue that it’s only parameters changing, but at the lowest level of analysis, programs are parameters.
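The feedback point can be sketched in a few lines: a program that adjusts its own parameter in response to feedback from outside. All names here are illustrative, and the “controller” is deliberately trivial:

```python
# Sketch of "parameters changing as a result of feedback": a trivial
# controller repeatedly nudges its own internal estimate toward a target.

def adapt(target, estimate=0.0, rate=0.5, steps=20):
    for _ in range(steps):
        error = target - estimate  # feedback from the "outside world"
        estimate += rate * error   # the program modifies its own parameter
    return estimate

print(round(adapt(10.0), 3))  # ≈ 10.0
```

Whether one calls this the feedback “causing” the behavior or just a redescription of parameter updates is, of course, exactly the question under dispute in the thread.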
The meal changes because some physical realization of the recipe changes, even if that realization is only in the brain of the cook. If a recipe had some sort of non-physical existence, it could change all day long without affecting anything in the physical world.
You need a change in the causal chain in order to get something different on your plate, and the causal chain is physical.
Even if that were true, how would it help you? People are just as physical as brains.
Evidence, please. If something non-physical is going on when we do logic, how does it reach in and influence things in the physical world?
If I can think about it, it must be made of matter.
According to keiths it really is doing logic and logic is physical.
Sure it does.
My go-to strategy, as always, is to focus the question of naturalism around what living animals do, rather than around what any object that conforms to the universal laws of fundamental physics does. A lot of philosophical questions can be handled in terms of how humans are similar to and different from other kinds of animals. We can hold off as a separate issue whether ecology, development, and cognitive neuroscience are “reducible” to fundamental physics in any sense that actually matters.
I mention this generic point because we do in fact have pretty good evidence that logical reasoning is not limited to human beings. In particular we can study strategic reasoning, abductive inference, and even basic syllogisms in chimpanzees and other apes. We know that chimps are able to implement complex strategies in both physical and social environments and will manipulate tools and gestures to accomplish their goals.
I see no reason to deny that chimpanzees — as well as other great apes and some cetaceans — are thinking and reasoning animals. This is a very interesting discovery of comparative psychology and behavioral ecology that still hasn’t been absorbed by philosophers (for the most part). And we’re still figuring out the neurocomputational correlates of animal inference. (Since animals are not persons, Dennett’s personal/subpersonal distinction is not quite right. What we need is something like an animal/subanimal distinction. But I am not delighted with that phrasing.)
I think that we do not yet actually know what logic and reasoning are. What we have in the past three thousand years of Greek, Roman, Chinese, Hindu, and Buddhist philosophy is a lot of speculation about how logic and reasoning seem to be. What they really are, in rerum natura, we do not yet know.
This much seems clear, however: the pragmatics of assessing conformity to, and transgression of, shared norms, together with the semantics of shareable propositional contents across multiply embodied perspectives, allows human beings to correct each other’s logical inferences, and that process can be — under very specific circumstances — internalized so that one learns how to assess and improve one’s own logical inferences. (What we call “critical thinking”.)
In other words: logic is older than language, and by a lot. I mean, if there’s logical thought in monkeys, then logic on this planet is at least forty million years old. By contrast, language might be a half million years old, at the most.
Folks really need to get over the aversion to B.F. Skinner and take a look at how he addressed these issues. Not that he was right and everyone else is wrong, but that his approach is useful.
I’m inclined to call that “behaviorism” rather than “naturalism” or “physicalism”.
I would say that all mammals (or, at least, all healthy adult mammals) are thinking creatures. I’m not so sure about “reasoning”, because I think that term has implications about sharing what one thinks with others.
If they are conceptual, then how can they be different from what we conceive them to be?
I doubt that there is logical thought in monkeys. There might be thought that we can construe as logical. But to say that it can be construed as logical need not imply that it is logical.
Logic as a formal mode of thinking seems to have been invented in recent times, if not in historical times.
As I implied in my previous post, I think behaviorism has useful things to say about the continuity of thinking and perceiving in animals, including humans. Language extends and amplifies social and problem-solving behavior but, in my opinion, is not qualitatively different.
I think the AI community realized this some time ago and stopped trying to make robots solve every problem with logic. Learning is now driving robotics.
This is actually the very heart of the matter. Concepts can be other than what they seem to be exactly in the same way that objects can be other than what they seem to be.
At the dawn of inquiry, objects seemed to be stable and enduring clusters of common and proper sensibles, but our current best science tells us that objects are massively entangled nested sets of quantum fields.
Similarly, at the dawn of inquiry, concepts seemed to be like metaphorical pictures — only pictures that were seen ‘with the mind’ rather than ‘with the eyes’. Our current best science isn’t settled on what concepts are, but they seem to be attractors in a dynamical state-space of possible neurophysiological processes.
Just as evolution, development, and learning have sculpted our brains and bodies to detect affordances while making it very difficult to discern the hidden causal order that generates those affordances, so too they have sculpted us not to attend to our own ability to detect affordances, precisely so that we can actually detect them. Or, if you prefer: there is no reason at all to believe that evolution and development would result in anything like reliable introspection.
Just as we did not really understand what objects are until we stopped trusting in our native sensory endowments and started tinkering with objects under carefully controlled experimental conditions, so too with minds. We had to stop trusting our naive sense of “introspection” and start experimenting. This is why behaviorism was so important.
I’m still interested in your answer to this question.
Evidence for what?
For your claims:
And if something non-physical is going on when we do logic, how does it “reach in” and influence things in the physical world?
I question that. I’m not aware that there is any quantum characterization of what constitutes an object. As far as I can tell, what constitutes an object is that humans decide to call it an object.
This seems to me to be a backward way of thinking. It comes from asking all of the wrong questions.
Those are not claims. They are assertions.
I take your physicalism to be a deeply held credal religion. That is to say, you appear to be committed to a system of unevidenced beliefs.
I have simply asserted that I am agnostic with respect to that religion.
My agnosticism does not require evidence. I am not seeking your agreement.
I don’t see how that is even relevant.
You crack me up, Neil.
No, you aren’t agnostic. You stated definitively that
I’m asking for evidence to support your claims, er, assertions (heh). Apparently you don’t have any.
Let’s say you do some logic and write down a valid deductive argument. The act of writing down that argument is a physical act, the result of a chain of causation. If “doing logic” is a non-physical process, then how does that non-physical process result in the physical effects that cause you to physically write down the argument?
1. Assertions are claims and claims are assertions.
2. Neil is not any sort of metaphysical realist — he’s an instrumentalist about scientific models and a fictionalist about mathematics. He’s a direct realist about perceptible objects, which makes him (by my lights) the purest empiricist here. So it makes sense that he’d be as skeptical about physicalism as he is about theism.
3. I’ve been struggling to articulate why I disagree so strongly with Keiths about reduction. Here’s one way of putting it: for Keiths (it seems to me) reduction in principle turns on the “is composed of” relation. Let’s call this “the compositionalist account of reduction.” Organisms are composed of cells (no immaterial spirits); cells are composed of molecules (no elan vital); molecules are composed of atoms; atoms are composed of protons, neutrons, and electrons; protons and neutrons are composed of quarks and gluons. Thus, “in principle” all properties that are not describable in terms of fundamental particles supervene on properties that are describable in terms of fundamental particles.
Among well-known philosophers, Alex Rosenberg (The Atheist’s Guide to Reality) has defended this account. As Rosenberg puts it, “the physical facts fix all the facts” — and by “the physical facts” he means the facts about fundamental particles, plus space and time (whatever those are, since space-time might be emergent from more fundamental structures).
Thus we can see that compositionalism takes reduction as a metaphysical thesis.
By contrast, I take reduction to be an epistemological concept that focuses on the concept of explanation: x is reducible to y if and only if we can exhaustively explain x in terms of y. By “exhaustively explain,” I mean that we can account for all of the structures and relations that are described in terms of x by effectively replacing them with structures and relations described in terms of y. We can continue to talk about x because it is simpler and more efficient, or more customary, but we know that we’re not talking about anything real — it’s just a shortcut, and the y-explanation is available should anyone accuse us of inflating our ontology beyond respectability.
But in order for us to be assured that the explanation is exhaustive — which is what I’m stipulating would be required by “reduction” — we need to have actually gone through the tedious labor of showing that we can replace x-talk with y-talk. A good example of this would be the reduction of classical electrodynamics to quantum electrodynamics. We can actually build mathematical and physical models that show us how to take all the equations of optics and translate them into the more cumbersome but ontologically deeper framework of QED.
On this account of reduction — which is common among philosophers of science — actually showing a successful reduction is extremely difficult and rather uncommon. There are many cases in science where reduction (in this sense) can’t be achieved. For example, we cannot reduce Mendelian genetics to molecular genetics. And if we can’t even reduce Mendelian genetics to molecular genetics, there’s little hope of being able to reduce ecology to chemistry. There is even some doubt among philosophers of chemistry about whether chemistry is reducible to physics!
The main reason why I want to take reduction as an epistemological concept rather than as a metaphysical one is because it’s only if we do so that reduction can become anything more than a mere doctrine of metaphysical faith. Many of us here maintain that claims about reality need to be grounded in claims about how we know what is real — this is the core of our objection to supernaturalism, paranormal phenomena, and so forth.
In light of that idea — that metaphysics is answerable to epistemology — I want to say that we’re not entitled to posit in-principle reductions (the compositionalist account of reduction) without some specification of how exactly that would work in empirical inquiry (the exhaustive explanation account of reduction).
It takes serious confusion to think that the assertion “Logic is conceptual, not physical” requires evidence.
Sure, you can disagree with it if you want. But demanding evidence is just weird.
It’s “weird” to demand evidence for a claim, er, assertion about reality?
The evidence supports the claim, er, assertion that “doing logic” is ultimately a physical process. Why would you claim, er, assert the opposite?
The assertion “Logic is conceptual, not physical” isn’t a claim about reality. It’s a claim about meaning, about the ordinary usage of the words involved.
Yes, that seems about right to me.
To get to the specifics of my disagreement with keiths, I have no problem with the view that logic supervenes on the physical. However, “supervenes on the physical” seems to be pretty much useless.
Then you were wrong to disagree with my statement:
Yes. It’s bizarre that Neil could think otherwise.
It’s a strange empiricist who ignores evidence when formulating his
It isn’t just that. One can envision a world in which the “is composed of” relation holds, yet higher-level phenomena aren’t reducible to lower-level phenomena.
For such an epistemological reduction to succeed, the metaphysical reduction must already hold — unless you happen to get lucky and the higher-level behavior matches the lower-level behavior purely by coincidence.
X is still real. You’re just describing the underlying reality at a different level of abstraction.
Not so. My reductionism is no more an article of faith than my physicalism. The evidence supports them both, but I’ll happily abandon either one if new evidence warrants it.
I understand supervenience.
Is that all you mean by “redescribing” and “reduction”? Do you just mean everything supervenes on the entities of our best physics?
Or do you mean something more? If so, what?
Counterfactual causation is what randomized controlled trials are meant to find. RCTs are the gold standard of some sciences, but not others. There are different ways of thinking about cause in different sciences. And explanation is at least partly about making claims about causal networks. So how we think about cause and explanation can depend on the science being done.
Supervenience is one thing. Asserting that explanations and causal relations as expressed in various sciences can be inter-related is something different and much more challenging to justify. Multiple realization, like software on hardware, or money on physical media, or (maybe!) the mental on the physical, just makes that assertion even harder to cash out.
The questions at the end about knowledge, neural patterns, and quantum fields also had a point. Can what makes something knowledge (eg JTB) be redescribed in physics? Or even neural patterns?
ETA: Different sciences have conceptual resources for explanations that are not available in “lower” sciences. Not even in principle.
Further, explanations involving norms take advantage of another type of conceptual resource that is not available in principle to any science. That can be true even with supervenience.
A nice example of what I mean:
Where do morals come from?
Compare the last three paragraphs with the scientific model reviewed by the preceding text.
They are immune to disconfirmation. They are worse than faith.
And both abstractions/metaphors and objects/activities exist, right? Because if abstractions don’t exist, then they cannot “stand in” for things or be usable in any other way. But if abstractions exist, then non-physical things exist.