George Ellis on top-down causation

In a recent OP at Uncommon Descent, Vincent Torley (vjtorley) defends a version of libertarian free will based on the notion of top-down causation. The dominant view among physicists (which I share) is that top-down causation does not exist, so Torley cites an essay by cosmologist George Ellis in defense of the concept.

Vincent is commenting here at TSZ, so I thought this would be a good opportunity to engage him in a discussion of top-down causation, with Ellis’s essay as a starting point. Here’s a key quote from Ellis’s essay to stimulate discussion:

However hardware is only causally effective because of the software which animates it: by itself hardware can do nothing. Both hardware and software are hierarchically structured, with the higher level logic driving the lower level events.

I think that’s wrong, but I’ll save my argument for the comment thread.

540 thoughts on “George Ellis on top-down causation”

  1. This all seems silly to me. Systems that can learn are changed by feedback. At the usual level of abstraction for such systems, the behavior of the system is caused by the feedback.

    Brains are such systems. Life is such a system. Learning and evolution incorporate changes to the system and its behavior that are steered by feedback. In a dynamic, feedback-steered system, it can make sense to say that effects are causes.

    Not causes of the original behavior, but causes of change to the behavior of the system.

  2. In order to process instructions, processors were created. Causation was top-down, not bottom up.

    Even in a simulation such as Avida, the instruction set is imposed from above. The program doesn’t create the instruction set.

    Humans rely on representations. These are created by top-down causation.

    Perhaps it would be helpful if keiths says what he means by top-down causation. It would be nice to know just what it is that he is going to argue against. 🙂

  3. Mung:

    Perhaps it would be helpful if keiths says what he means by top-down causation. It would be nice to know just what it is that he is going to argue against.

    I’ll be arguing against Ellis’s version of top-down causation, at least initially. Didn’t you notice the title of the thread, the link to Ellis’s essay, and the quote from same?

  4. keiths: I’ll be arguing against Ellis’s version of top-down causation, at least initially. Didn’t you notice the title of the thread, the link to Ellis’s essay, and the quote from same?

    D’OH!

  5. keiths: I’ll be arguing against Ellis’s version of top-down causation…

    That remains to be seen, and you don’t help matters by refusing to say just what it is that you’ll be arguing against. No need to be coy. Are you going to argue against this?

    …top-down causation happens wherever boundary conditions and initial conditions determine the results.

    Is it your position then that boundary conditions and initial conditions never determine a result?

  6. Both hardware and software are hierarchically structured, with the higher level logic driving the lower level events.

    I think that’s probably right with respect to a single decision at a particular time. But of course in the brain the higher level responses are very much affected by lower level events as well, and the whole is a set of shifting responses to external conditions. The lower level might very well dictate which higher level response is activated, and over time the lower level shifts and shapes the various higher level responses available for various situations.

    I don’t think that the brain could respond to complex events in a coordinated manner without hierarchies. That doesn’t mean, however, that top-down causation prevails overall. That isn’t true of early brain development, of the learning process, or of decisions in general. That hierarchies in the brain exist to coordinate and direct responses is just a way of keeping track of things. Governments are hierarchically ordered for many of the same reasons.

    Glen Davidson

  7. RB,

    I’m worried about you, Mung.

    Is your bucket list a list of buckets?

    When told that he’s missing a few screws, Mung heads for the hardware store.

    We are blessed to have such an insightful critic.

  8. When I think of top-down mechanisms in mechanical devices, I think of some childhood toys in which a lid opened and an arm reached out and threw a switch that turned the machine off.

    A self contained device is most likely going to be deterministic in the usual and ordinary sense of the word. A device that can be modified by experience is deterministic only if the environment is simple and regular, and if the mechanics of modifying the behavior of the device is deterministic.

  9. I’ll expand on this later, but my position is that there is no inter-level causation, either of the top-down or bottom-up varieties.

    There is just causation, and it can be described differently at different levels of abstraction.

    We can describe a particular causal chain this way:

    The software directed the computer to calculate the standard deviation.

    That’s a high-level description. The same causal chain can be described in lower-level terms thus:

    The computer proceeded deterministically through the state sequence {S0, S1, S2, …, Sn}.

    Same causation, but the descriptions are at different levels of abstraction.

    Note in particular that we don’t need to take the higher level into account in order to predict the system’s behavior. The low-level description is causally complete, and the high-level description adds nothing causal. It’s just a re-description of the lower-level causation.
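
    To make that concrete, here’s a minimal Python sketch (the function names, numbers, and state labels are mine, purely for illustration). The same calculation can be reported as “the software computed the standard deviation” or as a trace of intermediate states, and nothing in the trace consults the higher-level description:

      import math

      def std_dev(xs):
          # High-level description: "compute the standard deviation."
          mean = sum(xs) / len(xs)
          return math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))

      def std_dev_traced(xs):
          # Low-level description: the same computation as a state sequence
          # {S0, S1, S2, ..., Sn}; each "state" is a snapshot of the variables.
          states = [("start", tuple(xs))]
          total = 0.0
          for x in xs:                        # accumulate the sum
              total += x
              states.append(("sum", total))
          mean = total / len(xs)
          states.append(("mean", mean))
          ss = 0.0
          for x in xs:                        # accumulate squared deviations
              ss += (x - mean) ** 2
              states.append(("squares", ss))
          result = math.sqrt(ss / len(xs))
          states.append(("result", result))
          return result, states

      data = [2.0, 4.0, 4.0, 5.0]
      high = std_dev(data)
      low, trace = std_dev_traced(data)
      assert math.isclose(high, low)          # one causal chain, two descriptions
      for state in trace:
          print(state)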

  10. petrushka,

    When I think of top-down mechanisms in mechanical devices, I think of some childhood toys in which a lid opened and an arm reached out and threw a switch that turned the machine off.

    That strikes me as reflexive behavior rather than top-down behavior. The system is operating on itself.

  11. petrushka,

    A device that can be modified by experience is deterministic only if the environment is simple and regular, and if the mechanics of modifying the behavior of the device is deterministic.

    The environment doesn’t have to be simple. No matter how complicated the environment or the system itself, the system is deterministic if its next state is determined by the current state plus the environmental “inputs”.

    Equivalently, if you

    1) run a deterministic system,
    2) observe the final state,
    3) reset the system to the initial state, and
    4) run the system again, with the same inputs from the environment as before,

    …the system will end up in the same final state as before, no matter how complicated the environment is.
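
    Here’s a toy Python sketch of that replay test; the transition rule is made up and just stands in for whatever the real system does:

      def step(state, inp):
          # The next state is a function of the current state plus the input only.
          return (state * 31 + inp) % 1_000_003

      def run(initial_state, inputs):
          state = initial_state
          for inp in inputs:
              state = step(state, inp)
          return state

      inputs = [7, 1, 8, 2, 8, 1, 8]   # the environmental "inputs"
      first = run(42, inputs)          # steps 1 and 2: run, observe the final state
      second = run(42, inputs)         # steps 3 and 4: reset, rerun with same inputs
      assert first == second           # same final state, however complicated the inputs
      print(first, second)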

  12. keiths: That strikes me as reflexive behavior rather than top-down behavior. The system is operating on itself.

    I see it as conceptually no different than your example of a computer doing a calculation.

    I would have to ally myself with the idea that free will is an illusion, but I don’t see any philosophical position that makes any difference in the way we behave or should behave. We are we, regardless of how we work or why. Our sense of being free to move about in the universe of possibilities is a result of being aware of possible consequences. If we could make a computer complex enough to see what we see, in terms of possibilities and consequences, I would provisionally qualify it as human.

    I am fairly comfortable with the legal stance toward free will and responsibility. I do not think the legal system is perfect, but I think its approach is correct, or at least useful. It is operational enough to be applied to self-driving cars.

  13. petrushka,

    I see it as conceptually no different than your example of a computer doing a calculation.

    That’s right. Where we differ is that I see no top-down (or bottom-up) causation in either case, while you see the toy as an example of top-down causation.

    I would have to ally myself with the idea that free will is an illusion…

    Or you could take the compatibilist position, which holds that the question of free will is orthogonal to the question of determinism.

  14. keiths, you appear to be arguing for top-down causation.

    …top-down causation happens wherever boundary conditions and initial conditions determine the results.

  15. For a long time, top-down programming was taught as the Right Way to do it. Eventually, people noticed that real systems have no top. What would be the top-level statement of an operating system? Or of a mind?

    I think Ellis has it backwards. Hardware by itself can do a lot. Software does not “animate” hardware, and can do nothing without the hardware. When you power up your computer, do you seriously believe software is giving that order? Software can’t even begin to execute until the hardware operates properly.

    At the lowest interface, the distinction between hardware and software is blurry. Software at that level is like the roll of punched holes controlling a player piano – simply more hardware and part of an interacting system. It’s like trying to identify that part of an airplane that actually does the flying.

    I’ll agree with Keiths on levels of abstraction, or frames of reference. The solution to the puzzle of whether the victim was killed by the bullet, the gun, or the shooter is a frame of reference question — all answers are correct at the appropriate level of abstraction. There is no top or bottom.

  16. Flint: For a long time, top-down programming was taught as the Right Way to do it. Eventually, people noticed that real systems have no top.

    You have misunderstood what was meant by “top down programming”.

    In any case, I’m unconvinced that “top down” and “bottom up” have clear meanings. What can look top down from one perspective can look to be bottom up from another.

  17. Neil Rickert: You have misunderstood what was meant by “top down programming”.

    Well, since that was my profession, and I read dozens of books and got a CS degree and kept up with the field, allow me to toss that one right back at you. If it’s my misunderstanding, I share it with every programming authority I’m familiar with.

  18. Flint,

    Software at that level is like the roll of punched holes controlling a player piano – simply more hardware and part of an interacting system.

    Right. At one level of description, you could say that Microsoft Word is loaded in a computer’s memory. At a lower level, you could say that a certain pattern of 1’s and 0’s is present in the caches and in RAM. At a still lower level, you could describe the voltages and charge distributions.

    The causal story is complete at each level. Software doesn’t have to “reach down” from a higher level to tell the electrons and holes how to behave within the transistors.
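
    A quick Python illustration of the same point (the string is arbitrary, and the voltage-level description is left to the imagination):

      word = "Microsoft Word"                    # high-level description
      raw = word.encode("ascii")                 # lower level: a pattern of bytes
      bits = " ".join(f"{b:08b}" for b in raw)   # lower still: a pattern of 1's and 0's

      print(word)
      print(list(raw))
      print(bits)
      # Each line describes the same memory contents; none adds an extra cause.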

    It’s disappointing when a respected thinker like Ellis makes such an obvious mistake. It reminds me of another distinguished cosmologist who ventured out of his area of expertise and found himself in a junkyard during a tornado.

  19. Mung:

    keiths, you appear to be arguing for top-down causation.

    …top-down causation happens wherever boundary conditions and initial conditions determine the results.

    Mung,

    You’re assuming the truth of Ellis’s statement, but he is mistaken.

    And anticipating your likely next objection, no, that statement is not a definition of top-down causation. It’s a claim Ellis makes about top-down causation.

  20. Although I essentially see things the same way as Keiths, IMHO I do think several points are worth remembering:

    First, we should bear in mind that there is a world of difference between asserting that different levels of description, in principle, portray the same phenomena, and converting/reducing one level of description to another across conceptual expanses as broad as that between mental/intentional states and the levels of biology and physics.

    As an example, it is perfectly reasonable, and quite efficient, for me to explain my behavior, and for you to predict same, in intentional terms: “I am going to the library because I want to research the role of the Army Air Corps in the Japanese occupation following World War II, and I believe there are original references in their collection.” I might also assert that my wanting and believing those things is physically instantiated and describable, in principle, at the level of physics. But in reality there is little likelihood (zero, perhaps) that we will ever be able to give anything resembling physical descriptions of such states that are sufficiently complete and informative to enable a deduction from physical states to my corresponding intentional states – to discern, at a physical level, which intentional states are present.

    Further, there is no reason to believe that there is identity between my state of “wanting peace in the world” and your state of “wanting peace in the world,” and every reason to believe that, even if there were identity, that the intentional state of “wanting peace in the world” may be realized by many, perhaps countless, physical/computational states – the problem of multiple realizability.

    As a result of all this, while there is no “top down” or “bottom up” causation within the phenomena themselves (but rather a hierarchy of descriptions), and while events at one level may supervene on events at a lower level, there are many occasions for us to prefer, and some in which we have no choice but to utilize, a higher level description over a lower level of description. Perhaps this is especially true about one’s own intentional/mental states, because while we have subjective access to those, we have very little (perhaps no) subjective awareness of the states of our brains, nervous systems and bodies at the fundamental level of physics. Hence our sense that that level of description is somehow primary, and will never be replaced.

    ETA: small edits for less unclarity.

  21. Reciprocating Bill: As an example, it is perfectly reasonable, and quite efficient, for me to explain my behavior, and for you to predict same, in intentional terms: “I am going to the library because I want to research the role of the Army Air Corps in the Japanese occupation following World War II, and I believe there are original references in their collection.”

    I’m waiting for Freud to be resurrected as a prescient thinker on this. I’m kind of surprised that he isn’t mentioned when there’s discussion of decisions being made before we are aware of them.

    When looking at other people, it seems rather common for people to be unaware of their own motivations. It’s a bit more difficult to see this in ourselves.

  22. petrushka: I’m waiting for Freud to be resurrected as a prescient thinker on this. I’m kind of surprised that he isn’t mentioned when there’s discussion of decisions being made before we are aware of them.

    There has been a little of that, e.g. Drew Westen:

    Westen, D. (1998). The scientific legacy of Sigmund Freud: Toward a psychodynamically informed psychological science. Psychological Bulletin, 124(3), 333-371.

    He has argued generally that Freud’s notion of unconscious mental process was prescient in light of current cognitive science, even if his theories of psychosexual development were not.

  23. keiths: Software doesn’t have to “reach down” from a higher level to tell the electrons and holes how to behave within the transistors.

    Transistors are top-down artifacts.

  24. keiths: And anticipating your likely next objection, no, that statement is not a definition of top-down causation. It’s a claim Ellis makes about top-down causation.

    You haven’t anticipated my objection at all, keith, you’ve merely ignored it.

    see here

    So here we are 30 comments into the thread, and we still don’t know just what it is you’re going to argue against. Please share with us the definition of top-down causation that you claim Ellis is using.

  25. The Petrushka reading of Freud and cognitive science would assert that we have free will in the sense that there is a layer of awareness that “decides” how to respond to motives, desires, and such.

    That doesn’t seem like a formulation that would hold up to scrutiny, but it’s the best I’ve got.

    I don’t think we have any control over the wantness of ourselves. But as humans we have the ability to anticipate consequences and attempt to balance conflicts between outcomes. We also have language and culture, which enable us to change the environment in ways that remove or ameliorate undesirable consequences.

    We also seem to be able to engage in therapies that attempt to modify desires or emotions that lead to unpleasant outcomes. That’s kind of interesting.

    Still, there is the ghost in the machine. What is it that wants?

  26. Mung,

    So here we are 30 comments into the thread, and we still don’t know just what it is you’re going to argue against.

    What a bore (and boor) you are, Mung.

    I’m arguing against Ellis’s notion of top-down causation. Try to catch up with everyone else, please.

  27. keiths: I’m arguing against Ellis’s notion of top-down causation.

    You were asked as early as the second post in the thread to define top-down causation. I’m still waiting.

    I posted how I thought Ellis defined top-down causation and you disagreed with me but have offered nothing in return. According to you, how does Ellis define top-down causation? How do you define top-down causation?

    No one but you knows what you’re arguing against because right now what you’re arguing against is floating in a sea of vagueness.

    You now want to say you’re arguing against Ellis’s notion of top-down causation (whatever that may be). But that’s not what you said in the OP.

    Here’s what you wrote in the OP:

    Vincent Torley (vjtorley) defends a version of libertarian free will based on the notion of top-down causation. The dominant view among physicists (which I share) is that top-down causation does not exist… I thought this would be a good opportunity to engage him in a discussion of top-down causation

    If you don’t want to say what you mean by top-down causation that’s certainly your prerogative. Why be coy, though? Why not just tell us?

  28. However hardware is only causally effective because of the software which animates it.

    Do you disagree with Ellis about this?

    I guess computer hardware could be used as a doorstop and thus be causally effective without software.

  29. Mung,

    If you want to understand what Ellis means by “top-down causation”, then read his essay. I did not sign up to spoon-feed special-needs commenters like you and phoodoo.

  30. Mung: No one but you knows what you’re arguing against because right now what you’re arguing against is floating in a sea of vagueness.

    I find it perfectly clear.

  31. RB,

    As an example, it is perfectly reasonable, and quite efficient, for me to explain my behavior, and for you to predict same, in intentional terms: “I am going to the library because I want to research the role of the Army Air Corps in the Japanese occupation following World War II, and I believe there are original references in their collection.” I might also assert that my wanting and believing those things is physically instantiated and describable, in principle, at the level of physics. But in reality there is little likelihood (zero, perhaps) that we will ever be able to give anything resembling physical descriptions of such states that are sufficiently complete and informative to enable a deduction from physical states to my corresponding intentional states – to discern, at a physical level, which intentional states are present.

    Yes, but that stems from our cognitive and scientific limitations. We describe causes at different levels of abstraction for reasons of convenience, efficiency, or necessity. Ellis mistakenly infers that the causes themselves exist at different levels, and that inter-level causation is therefore necessary.

  32. RB,

    Further, there is no reason to believe that there is identity between my state of “wanting peace in the world” and your state of “wanting peace in the world,” and every reason to believe that, even if there were identity, that the intentional state of “wanting peace in the world” may be realized by many, perhaps countless, physical/computational states – the problem of multiple realizability.

    I agree, but I don’t see that as problematic in this context. To me, it’s unsurprising that phenomena having identical descriptions at one level can have wildly differing descriptions at a lower level.

    For example, two computers might be averaging the same column of numbers, but one of them is an X86 machine running Microsoft Excel under Windows while the other is an ARM machine running Google Sheets under Linux.
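
    Here’s a toy Python version of that point, with two differently written functions standing in for the x86/Excel and ARM/Sheets stacks (the function names and numbers are mine):

      from functools import reduce

      def average_running_total(xs):
          # One realization: an explicit accumulator loop.
          total = 0.0
          for x in xs:
              total += x
          return total / len(xs)

      def average_folded(xs):
          # A different realization of the same high-level behavior.
          return reduce(lambda acc, x: acc + x, xs, 0.0) / len(xs)

      column = [12.0, 15.0, 9.0, 18.0]
      assert average_running_total(column) == average_folded(column)
      print(average_running_total(column))

    The low-level descriptions differ wildly; the high-level description, “averaging the column,” is the same.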

  33. I suspect that causation is a fiction, and if there is a level of description that we find consistently useful, it meets the only relevant standard.

    If one accepts determinism, then there is, or could be, a god’s eye view in which the whole shebang is static.

  34. keiths: If you want to understand what Ellis means by “top-down causation”, then read his essay.

    As you must know, Ellis does not get to define the meaning of top-down causation any more than you do. But if you’re going to argue against top-down causation you ought to know what it means. If you can’t say what it means then no one ought to accept anything you say about it, including your claims that Ellis is wrong about top-down causation.

    From where I sit this is simply hilarious. You are going to argue against something [top-down causation], but you won’t say what it [top-down causation] is.

    OK, maybe you can tell us what it is not. Top-down causation is not bottom-up causation. What is bottom-up causation, and how is it to be distinguished from top-down causation?

    Again, you ignored this question:

    How do you define top-down causation?

    Perhaps you don’t say because you can’t say.

  35. keiths: Jesus, Mung.

    I asked you about a quote from your OP and you refer me to a quote by Flint.

    Here it is again:
    However hardware is only causally effective because of the software which animates it.

    Do you agree or disagree with Ellis?

    Given enough hardware, you could use it to anchor a boat. It follows that it is not the case that hardware is only causally effective because of the software which animates it.

  36. So far all that keiths has managed to argue is that human descriptions are subjective, but who ever thought otherwise? Different descriptions can be offered. Who ever thought otherwise?

    Is keiths arguing that it is our choice of description which determines top-down causality? Does it not then follow that it is our choice of description which determines bottom-up causality?

    Subjective keiths.

  37. Interestingly, Ellis comes very close to recognizing a major problem with his thesis:

    6: Room at the bottom

    Given this evidence for top-down causation, the physicist asks, how can there be room at the bottom for top-down causation to take place? Isn’t there over-determination because the lower level physics interactions already determine what will happen from the initial conditions?

    The answer, of course, is that there would be overdetermination if Ellis were correct about top-down causation.

    But he’s not. The low-level causes aren’t distinct from the high-level causes. He’s confusing levels of description with levels of causation.

  38. keiths: The low-level causes aren’t distinct from the high-level causes. He’s confusing levels of description with levels of causation.

    You are confusing levels of description with levels of causation.

  39. In that article, Noble correctly describes some of the practical constraints on biological models:

    The central feature from the viewpoint of biological modelling can be appreciated by noting that the equations for structure and for the way in which elements move and interact in that structure in biology necessarily depend on the resolution at which it is represented. Unless we represent everything at the molecular level which, as argued above, is impossible (and fortunately unnecessary as well), the differential equations should be scale-dependent. As an example, at the level of cells, the equations may represent detailed compartmentalization and non-uniformity of concentrations, and hence include intracellular diffusion equations, or other ways of representing non-uniformity [72–74]. At the level of tissues and organs, we often assume complete mixing (i.e. uniformity) of cellular concentrations. At that level, we also usually lump whole groups of cells into grid points where the equations represent the lumped behaviour at that point.

    This is all true, but it doesn’t reveal anything about actual causality. It simply underscores the limits of our cognitive abilities, our computers, and our models. Like Ellis, Noble is confusing levels of description with levels of causality.

    Noble goes on to flirt with crackpottery:

    These are practical reasons why the equations we use are scale-dependent. The formal theory of scale relativity goes much further since it proposes that it is theoretically necessary that the differential equations should be scale-dependent. It does this by assuming that space–time itself is continuous but generally non-differentiable, therefore fractal, not uniform.

    It isn’t that fractal spacetime itself is an outrageous idea. Some theories of quantum gravity do propose it. What’s bizarre is Noble’s idea that the scale-dependence of differential equations at cellular, tissue, or organ levels is somehow a consequence of the (proposed) fractal nature of spacetime.
