The Ghost in the Machine

Let’s suppose there really is a Ghost in the Machine – a “little man” (“homunculus”) who “looks out” through our eyes, and “listens in” through our ears (interestingly, those are the two senses most usually ascribed to the floating Ghost in NDE accounts).  Or, if you prefer, a Soul.

And let’s further suppose that it is reasonable to posit that the Ghost/Soul is inessential to human day-to-day function, merely to conscious experience and/or “free will”; that it is at least possible hypothetically to imagine a soulless simulacrum of a person who behaved exactly as a person would, but was in fact a mere automaton, without conscious experience – without qualia.

Thirdly, let’s suppose that there are only a handful of these Souls in the world, and the rest of the things that look and behave like human beings are Ghostless automatons – soulless simulacra. But, as in an infernal game of Mafia, none of us know which are the Simulacra, and which are the true Humans – because there is no way of telling from the outside – from an apparent person’s behaviour or social interactions, or cognitive capacities – which is which.

And finally, let’s suppose that souls can migrate at will, from body to body.

Let’s say one of these Souls starts the morning in Lizzie’s body, experiencing being Lizzie, and remembering all Lizzie’s dreams, thinking Lizzie’s thoughts, feeling Lizzie’s need to go pee, imagining all Lizzie’s plans for the day, hearing Lizzie’s alarm clock, seeing Lizzie’s watch, noting that the sky is a normal blue between the clouds through the skylight.

Somewhere an empty simulacrum of Barry Arrington is still asleep (even automatons “sleep” while their brains do what brains have to do to do what brains have to do).  But as the day wears on, the Soul in Lizzie’s body decides to go for a wander.  It leaves Lizzie to get on with stuff, as her body is perfectly capable of doing, she just won’t be “experiencing” what she does (and, conceivably, she might make some choices that she wouldn’t otherwise make, but she’s an extremely well-designed automaton, with broadly altruistic defaults for her decision-trees).

The Soul sees that Barry is about to wake up as the sun rises over Colorado, and so decides to spend a few hours in Barry’s body. And thus experiences being Barry waking up, probably needing a pee as well, making Barry’s plans, checking Barry’s watch, remembering what Barry did yesterday (because even though Barry’s body was entirely empty of soul yesterday, of course Barry’s brain has all the requisite neural settings for the Soul to experience the full Monty of remembering being Barry yesterday, and what Barry planned to do today, even though at the time, Barry experienced none of this). The Soul also notices the sky is its usual colour, which Barry, like Lizzie, calls “blue”.

Aha.  But is the Soul’s experience of Barry’s “blue” the same as the Soul’s experience of Lizzie’s “blue”?  Well, the Soul has no way to tell, because even though the Soul was in Lizzie’s body that very morning, experiencing Lizzie’s “blue”, the Soul cannot remember Lizzie’s “blue” now it is in Barry’s body, because if it could, Barry’s experience would not simply be of “blue” but of “oh, that’s interesting, my blue is different to Lizzie’s blue”. And we know that not only does Barry not know what Lizzie’s blue is like when Barry experiences blue (because “blue” is an ineffable quale, right?), he doesn’t even know whether “blue” sky was even visible from Lizzie’s bedroom when Lizzie woke up that morning.  Indeed, being in 40 watt Nottingham, it often isn’t.

Now the Soul decides to see how Lizzie is getting on. Back east, over the Atlantic it flits, just in time for Lizzie getting on her bike to ride home from work. Immediately the Soul accesses Lizzie’s day, and ponders the problems she has been wrestling with, and which, as so often, get partly solved on the bike ride home. The Soul enjoys this part. But of course it has no way of comparing this pleasure with the pleasure it took in Barry’s American breakfast, which it had also enjoyed, because that experience – those qualia – are not part of Lizzie’s experience. Lizzie has no clue what Barry had for breakfast.

Now the Soul decides to race Lizzie home and take up temporary residence in the body of Patrick, Lizzie’s son, who is becoming an excellent vegetarian cook, and is currently preparing a delicious sweet-potato and peanut butter curry.  The Soul immediately experiences Patrick’s thoughts, his memory of calling Lizzie a short while earlier to check that she is about to arrive home, and indeed, his imagining of what Lizzie is anticipating coming home to, as she pedals along the riverbank in the dusk.  Soul zips back to Lizzie and encounters something really very similar – although it cannot directly compare the experiences – and also experiences Lizzie’s imaginings of Patrick stirring the sweet potato stew, and adjusting the curry powder to the intensity that he prefers (but she does not).

As Baloo said to Mowgli: Am I giving you a clue?

The point I am trying to make is that the uniqueness of subjective experience is defined as much by what we don’t know as by what we do. “Consciousness” is mysterious because it is unique. The fact that we can say things like “I’m lucky I didn’t live in the days before anaesthesia” indicates a powerful intuition that there is an “I” who might have done, and thus an equally powerful sense that there is an “I” who was simply lucky enough to have landed in the body of a post-anaesthesia person. And yet it takes only a very simple thought experiment, I suggest, to realise that this mysterious uniqueness is – or at least could be – a simple artefact of our necessarily limited PoV. And it is a simple step, I suggest, to consider that a ghostless automaton – a soulless simulacrum – is actually an incoherent concept. If my putative Soul, who flits from body to body, is capable of experiencing not only the present of any body in which it is currently resident but also that body’s past and anticipated future, yet is incapable of simultaneously experiencing anything except the present, past, and anticipated future of that body, then it becomes a redundant concept. All we need to do is to postulate that consciousness consists of having access to a body of knowledge available only to that organism, by simple dint of that organism being limited in space and time to a single trajectory. And if that knowledge is available to the automaton – as it clearly is – then we have no need to posit an additional Souly-thing to experience it.

What we do need to posit, however, is some kind of looping neural architecture that enables the organism to model the world as consisting of objects and agents, and to model itself – the modeler – as one of those agents. Once you have done that, consciousness is not only possible for a material organism, but inescapable. And of course looping neural architecture is exactly what we observe.

I suggest that the truth is hiding in plain sight: we are conscious because when we are unconscious we can’t function.  Unless the function we need to perform at the time is to let a surgeon remove some part of us, under which circumstances I’m happy to let an anaesthetist render me unconscious.

367 thoughts on “The Ghost in the Machine”

  1. keiths:

    Can you defend your position, whatever it is, against your own criticisms, William?

    William:

    My criticisms of materialism are rooted in materialist assumptions. I don’t share those assumptions. For example, I don’t believe my will is a computed or caused phenomenon, so criticism based on the view that it is computed or caused cannot apply.

    Your criticism of physically-based reasoning is that it might give wrong answers or be influenced by irrelevancies. But we already know that human reason — however you explain it, whether through materialism, dualism, idealism, or William’s special explanation — can give wrong answers and be influenced by irrelevancies.

    You’re not off the hook, William. Your criticism applies to your own position.

  2. Your criticism of physically-based reasoning is that it might give wrong answers or be influenced by irrelevancies.

    No, it isn’t.

  3. William J. Murray:
    Liz,

    You agreed that everything that occurs is a result of the computation of material commodities (including energy) according to physics. You said that this computation is chaotic and unpredictable. You agreed that if you ran pre-X exactly the same as before, X would invariably occur.

    There. I didn’t even use the words “determined” or “predictable”. Is there something in the above that is incorrect?

    No, but you’ve just passed the buck to “exactly the same as before”.

    Let’s say that event X is my choice between chocolate and strawberry ice cream.

    And let’s say that in Universe A it is preceded by a chain of events A.
    And in Universe B it is preceded by a chain of events B.

    A and B are, as per your premise, identical – there is a one-to-one mapping between A and B. Another way of putting this would be to say that an intelligent agent in possession of the event-train A would be able to predict any given element of the event-train B, by means of a simple look-up, with 100% success.

    So is your question simply: Would or could X be different if it followed event train A as opposed to following event-train B?

    There really isn’t a sensible answer to that question – the answer depends on whether or not X depends on the state of the universe fractionally before X. As far as I know, it doesn’t, but possibly it does. But as we do seem to live in an intrinsically stochastic universe, and as my choice between strawberry and chocolate is extremely finely poised, it is just possible that a stray uncaused virtual particle may, in Universe A, tip a critical ion just near enough to an ion channel to tip a neuron into an action potential that, butterfly-like, tips the balance of decision to strawberry, while in Universe B it tips it to chocolate.

    My point is that that is entirely irrelevant to whether I have free will, because I am not a virtual particle; I am an entire organism who happens to find strawberry and chocolate ice cream equally delectable. Give me a choice between pistachio and chocolate, and I will probably choose chocolate, unless I’m chocolated out, and just might consider pistachio for a change. Now I’m acting like a proper free agent, able to consider many factors when making my final choice.

    In other words, the more informed my choice, the more willed it is, not the less willed. A totally “uncaused” choice would be necessarily unwilled – if the choice might as well be a coin toss, in what case is it volitional?

  4. Liz,

    In the case of your hypothetical outside observer of the two universe run-ups, can you tell me what the run-up for that particular observer is? If not, how can either of us predict what prediction that entity will make about the two parallel universes in question?

  5. petrushka: You are responsible because you are capable of learning from experience.

    I can’t believe you are so stupid as not to notice that responsibility is predicated on knowledge and experience. Children are typically not held to the same standards as adults, and do not face the same level of consequences for infractions of the law. Interestingly (as in the case of child actors), they also do not receive the same level of benefits for “good” behavior. This is not a difficult or complex concept.

    What if, no matter what someone has learned, other factors make him behave badly? Even though he has the information about the consequences, all the other information “makes” him do the bad behavior?

    petrushka:
    Systems that can learn are held responsible.

    A computer is responsible then.

  6. The whole point of libertarian free will, Liz, is that an uncaused, deliberate agency can will a thing, and that “will” is not caused by (in the sufficient sense) what came before or by “virtual” particles or random events.

    All of those contextual and influential things may be necessary, but they are not sufficient, under libertarian free will. All of those contextual influences can say, “do X,” but with libertarian free will, I will not do X, cannot do X, unless I (as my uncaused free will) say so. The computation doesn’t arrive at X (meaning, X is willed) until the user, sitting at the computer, delivers its input.

    It’s like a program that stops at a point and waits for user input to continue; that user input comes from outside of the system, and that particular computation waits for the input. The user can switch programs. The user can program its own program. The user is not tied down to whatever the computer computes. The user is not caused by the computer or the computation.

  7. If a virtual particle intrudes in one run and didn’t in the other, it’s not the same run-up to X.

    If pre-X invariably generates X, where X is my willful decision, then this is the fundamental antithesis of what libertarian free will is: the capacity to deliberately will anything at point X, regardless of pre-X.

  8. You agreed that everything that occurs is a result of the computation of material commodities (including energy) according to physics. You said that this computation is chaotic and unpredictable. You agreed that if you ran pre-X exactly the same as before, X would invariably occur.

    These are YOUR words, William; YOUR words.

    Elizabeth was attempting to interpret your words; the words of someone who does not have even the most rudimentary vocabulary of science.

    As a physicist, I can understand what Elizabeth is saying. When Elizabeth refers to “stochastic resonance,” I know exactly what she is talking about. When I followed up her explanation with a simple mechanical analogy, I was using concepts she and I and most other scientists understand.

    The “chaotic and unpredictable” part of Elizabeth’s explanation refers to the properties of the neural substrate on which our thoughts occur. That “chaotic,” “unpredictable” aspect of that neural substrate comes about because it is non-linear and comprised of many feedback and feed-forward loops.

    None of this means that thoughts cannot take place on such a substrate. The substrate exists in a heat bath within a very narrow temperature range; go below that range, the nerves cannot transmit. Above that range, the system starts coming apart.

    The fact that this neural substrate is maintained in such a state is what makes it sensitive to stimuli both from the sensors responding to the surrounding environment and to stimuli produced by memory events. Such a system is in a “superposition of states;” that is, it is spread out over many possibilities. Small stimuli then select out which of those many states gets enhanced.

    In the case of your hypothetical outside observer of the two universe run-ups, can you tell me what the run-up for that particular observer is? If not, how can either of us predict what prediction that entity will make about the two parallel universes in question?

    As to determinism in such a system, there is no operational method to nail down determinism. We have been all through that; we can’t trace the history of every event that impacts the system, we can’t even do it in principle. Try to imagine what is involved in tracking every photon, phonon, or particle that is about to impact us in the near future. Try to track the history of all the events that led to any particular photon, phonon, or particle being set on a track to impact us. Now read the next paragraph.

    Try to imagine how you would communicate with a Laplacian demon that stood outside spacetime and could presumably see all of history for all photons, phonons, and particles in the universe. What would be the mechanism of communication between us and that demon? If that mechanism involved interactions, how would those interactions not change what was already “predetermined?”

    You are attributing “beliefs” to people without bothering to learn the vocabulary and the concepts of science. Scientists are far better at philosophical and epistemological issues than you know.

    As Elizabeth signaled to Arrington over at UD, scientists fit their concepts to data; not the other way around. You also should write that down.

  9. It’s pretty clear from QM that the ‘base’ of reality – as close as we can get to it – is stochastic. Take a lump of radioactive material. Entirely predictably, a certain proportion will decay in a certain time. The rate is fairly constant, and even after a few billion years (if the half-life is long enough) a few atoms will remain undecayed. An atom’s an atom, yet some decayed instantly, some took yonks. There do not appear to be any ‘hidden variables’ (people have tried and failed to find ’em) that would allow some atoms of the same material to be more robust than others, such that, if you rewound the clock and placed exactly the same lump in exactly the same place, exactly the same alpha particles would fly off from exactly the same atoms at exactly the same time in exactly the same direction. And the same goes for all other interactions at or near the quantum level, and some way above it.
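The point can be made concrete with a toy simulation (a sketch only; the atom count and half-life are invented for illustration, not tied to any real isotope): every atom obeys the same exponential decay law, yet individual lifetimes scatter wildly while the aggregate remains entirely predictable.

```python
import random

def decay_times(n_atoms, half_life, seed=0):
    """Draw a lifetime for each of n_atoms identical atoms.
    Each lifetime comes from the same exponential distribution:
    one law, very different individual fates."""
    rng = random.Random(seed)
    tau = half_life / 0.693147  # mean lifetime = half-life / ln 2
    return [rng.expovariate(1 / tau) for _ in range(n_atoms)]

times = decay_times(100_000, half_life=10.0)
undecayed = sum(t > 10.0 for t in times)
print(round(undecayed / len(times), 2))  # ~0.5: the ensemble is predictable
print(min(times), max(times))            # individual lifetimes are all over the map
```

Rewind the clock (reuse the seed) and the same atoms “decay” at the same moments; change the seed and the individual fates change, but the half-life statistics do not.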

    None of which is directly relevant to the ‘causality’ question, of course, simply the ‘predictability’ one. It can still be the case that every run is unique, but still everything be a chain of causality over which we have no ‘real’ control. This does not stop us making decisions, but does provide endless fuel for spliff-enhanced dorm conversations. And inevitability could still be the case even if we have souls. William has provided nothing but his personal conviction that he has avoided the ‘inevitability’ issue by (partially) rejecting the material. What constrains souls? Nothing? How do you know?

  10. William and Blas want to make some sort of philosophical statement out of the properties of matter and of material causation.

    The problem is that physics doesn’t support that way of thinking.

    It’s not that physicists are evading the question. Indeed, causation is a kind of holy grail in physics. But evidence doesn’t support the idea that causation can be nailed down.

    The bottom line is that nothing relevant to free will or moral responsibility can be laid at the feet of materialism, because matter does not have perfectly definable properties.

    Nor definable limitations.

  11. keiths:

    Your criticism of physically-based reasoning is that it might give wrong answers or be influenced by irrelevancies.

    WJM:

    No, it isn’t.

    William, be a big boy and own up to your own statements:

    While one might, for months, attempt to convince EL of another view via logic and evidence, such a tactic may simply not have a causal pathway to change EL’s mind about something, where pizza and butterfly wind might just be what is needed.

    Under my schema, this is the opposite of free will and how one rationally comes to a conclusion…

    I realize that you wish you hadn’t taken such an easily refuted position, but take it you did. Show some integrity and admit that.

    As I said:

    But we already know that human reason — however you explain it, whether through materialism, dualism, idealism, or William’s special explanation — can give wrong answers and be influenced by irrelevancies.

    You’re not off the hook, William. Your criticism applies to your own position.

  12. William J. Murray:
    Liz,

    In the case of your hypothetical outside observer of the two universe run-ups, can you tell me what the run-up for that particular observer is? If not, how can either of us predict what prediction that entity will make about the two parallel universes in question?

    Well, you specified that they are identical. If the observer hasn’t seen the play before, she won’t know what happens next. If she has seen the play before, and the universes are stochastic, then she still won’t know. If she has seen the play before, and the universes aren’t stochastic, she will.

  13. The wave function solutions to Schrödinger’s equation are “deterministic” in their evolution in an idealized configuration of potential wells and energy levels. However, the square of the absolute value of that wave function is a probability density: the probability per unit volume that the particle or event in question will be found at a specified position at a specified time.
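In symbols (the standard Schrödinger equation and Born rule, stated for concreteness):

```latex
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t} = \hat{H}\,\psi(\mathbf{r},t),
\qquad
\rho(\mathbf{r},t) = \lvert\psi(\mathbf{r},t)\rvert^{2}
```

The amplitude ψ evolves deterministically under the Hamiltonian, while ρ(𝐫, t) d³r gives only the probability of finding the particle in the volume element d³r about 𝐫 at time t: deterministic evolution of the wave, stochastic individual outcomes.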

    There are classical systems that behave like this also. A string on a guitar vibrates in a superposition of states consisting of harmonics and various other frequencies. A slight perturbation in the form of a light touch can suppress one or more of those states and leave only one standing. This is the technique of generating “artificial harmonics;” and the tone that is produced is not necessarily an integer multiple of the fundamental.

    But even more importantly, mesoscopic systems are those that are sensitive to initial and boundary conditions at a level that overlaps quantum mechanics.

    Furthermore, there are macroscopic examples of quantum mechanics such as superconductivity or superfluidity. In these cases, entire macroscopic collections of particles act coherently in a way that can be described by a single wave function.

    The fact that superconductivity has now been observed above temperatures of liquid nitrogen suggests that such coherent states are maintained by a number of different kinds of interactions.

    There is much about the behaviors of the neural networks in living organisms that appears analogous to phenomena like superconductivity; except that, with neural networks, the phenomenon appears within a narrow energy window rather than below an energy threshold. Soft matter systems are on the verge of coming apart, whereas superconducting systems are much more tightly bound.

  14. William J. Murray:
    The whole point of libertarian free will, Liz, is that an uncaused, deliberate agency can will a thing, and that “will” is not caused by (in the sufficient sense) what came before or by “virtual” particles or random events.

    OK. I understand what you are saying. What I am trying to probe is: in what sense can a decision be deliberate yet free from any causal factor, such as the information on which such a decision might sensibly be based?

    Let’s say, for instance, that I am poised between rounding up and rounding down a tip. If I round up, the waitress will be better off, and I will be worse off. If I round down, the waitress will be worse off, and I will be better off. Many factors may affect my decision; therefore my final decision is “caused” by the weighting of those factors. Alternatively, I could wait for an uncaused virtual particle to come along and tip me one way or the other, in which case my decision would be “uncaused”.

    In the first case, my decision would be caused, and deliberate, but not “free” in your sense; in the second, my decision would be uncaused, and thus, in some sense, “free”, but not deliberate.
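The contrast can be caricatured in a few lines of Python (a toy sketch; the factors and weights are invented for illustration):

```python
import random

def tip_deliberate(factors):
    """Caused *and* deliberate: the decision just is the weighted
    sum of the reasons bearing on it (weight, for/against pairs)."""
    score = sum(weight * value for weight, value in factors)
    return "round up" if score > 0 else "round down"

def tip_uncaused(seed=None):
    """'Uncaused': a virtual coin toss, free of any reason --
    and for that very reason, not deliberate."""
    return random.Random(seed).choice(["round up", "round down"])

# Good service (for) outweighs a tight budget (against):
print(tip_deliberate([(0.7, +1), (0.3, -1)]))  # round up
```

The first function is fully determined by its inputs; the second is independent of every reason there is, which is exactly why it is hard to call its output “willed”.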

    Describe to me a scenario under which my decision is both uncaused and deliberate 🙂

    All of those contextual and influential things may be necessary, but they are not sufficient, under libertarian free will. All of those contextual influences can say, “do X,” but with libertarian free will, I will not do X, cannot do X, unless I (as my uncaused free will) say so. The computation doesn’t arrive at X (meaning, X is willed) until the user, sitting at the computer, delivers its input.

    I take it that “I” is the “user”, wielding “libertarian free will”. So what does “I” take into account before delivering the decision?

    It’s like a program that stops at a point and waits for user input to continue; that user input comes from outside of the system, and that particular computation waits for the input. The user can switch programs. The user can program its own program. The user is not tied down to whatever the computer computes. The user is not caused by the computer or the computation.

    So on what basis does the “user” – “I” – decide to release the decision, or abort it, or start another program? It can’t be information, because that would be a cause. And it can’t be a virtual coin-toss, because that would be abnegating responsibility.

    So what is this input-that-is-not-causal?

  15. petrushka:

    the problem is that physics doesn’t support that way of thinking.

    But evidence doesn’t support the idea that causation can be nailed down.

    If what you say is true, tell Lizzie that her post on neuroimaging is a waste of time.

  16. I don’t think one has to be a physicist to notice that when a creationist says “matter” or “materialism” they are not talking about anything recognizable by a physicist.

  17. Lizzie,

    I think what you are referring to as “cause” and what William is describing as “causality” are very different. You are using the term in a way I call “soft causality”. That is to say, the information that is weighted and analyzed – the information ultimately used as input into the decision-making process, the information that tips the scale for a given decision – is an abstract formulation by the decision maker, which is itself a product of material processes. William is referring to causation of those very structured physical material processes.

    In William’s view, it seems, the “cause” of a given decision is the very specific interaction of physical matter. In his view, according to materialism, if we went back to a point prior to the decision process and started the process again, the person would make the exact same decision – and would do so no matter how many times we went back to that point – because the decision has to be a product of the specific physical process and specific material cascade that will always come out the same way based on the physical laws that govern the matter underlying the material world.

    I think that summarizes how William understands the materialist view of the world and universe and why we tend to talk past one another.

  18. petrushka:
    I don’t think one has to be a physicist to notice that when a creationist says “matter” or “materialism” they are not talking about anything recognizable by a physicist.

    Can you explain the difference? Thanks.

  19. In order to make your point clear – that is the reason we are commenting on this blog.

  20. I cannot explain the difference, because there is nothing in the intersection between science and apologetics.

    The terms we wish to use need to have operational definitions. You don’t accept those definitions. One cannot reason using incompatible premises. It looks to each of us as if the other is being perverse or dishonest.

    We do not appear to be interested in the same things or the same aspects of what we see.

    It’s as if we both went to an art gallery, and you looked at the paintings and said they are no good because they are flat, and I said the sculpture is no good because it lacks color. Both observations could be based on true observations, but they talk past each other.

    This is a forum started by and mostly inhabited by people who are interested in science. You might have perfectly true things to say, but they are irrelevant to science.

  21. Robin:
    Lizzie,

    I think what you are referring to as “cause” and what William is describing as “causality” are very different. You are using the term in a way I call “soft causality”. That is to say, the information that is weighted and analyzed – the information ultimately used as input into the decision-making process, the information that tips the scale for a given decision – is an abstract formulation by the decision maker, which is itself a product of material processes. William is referring to causation of those very structured physical material processes.

    In William’s view, it seems, the “cause” of a given decision is the very specific interaction of physical matter. In his view, according to materialism, if we went back to a point prior to the decision process and started the process again, the person would make the exact same decision – and would do so no matter how many times we went back to that point – because the decision has to be a product of the specific physical process and specific material cascade that will always come out the same way based on the physical laws that govern the matter underlying the material world.

    I think that summarizes how William understands the materialist view of the world and universe and why we tend to talk past one another.

    hmm.

    I don’t know, Robin. This is not my picture:

    Robin: That is to say, the information that is weighted and analyzed – the information ultimately used as input into the decision-making process, the information that tips the scale for a given decision – is an abstract formulation by the decision maker, which is itself a product of material processes.

    In fact, it’s closer to your description of what you think William’s view is of mine 🙂

    Robin: the decision has to be a product of the specific physical process and specific material cascade that will always come out the same way based on the physical laws that govern the matter underlying the material world.

    Because we don’t live in a deterministic world, I think this is untrue, but I also think it’s irrelevant, and I’m happy to use it as a simplification.

    I think our decisions are indeed “the product of [a]… specific material cascade”. But that material cascade itself consists of relevant information. For example, in brain imaging we use software that “learns” to distinguish the brain scans of people from one group from those of another; we can then present the software with new data from an unknown person, and it will “decide” which group the person belongs to. The decision tree is highly complex (although not nearly as complex as ours) and includes decisions to sample more information, or information from different sources, in order to come to “its” decision.

    It usually comes up with a slightly different probability estimate each time, because we reset the random-number seed each run; but if we didn’t, it would come up with the same decision, with the same confidence interval, each time.
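That reproducibility point can be sketched with a toy classifier (not the actual neuroimaging software; the feature weights and noise level here are invented for illustration):

```python
import random

def classify(features, seed=None):
    """Toy 'brain-scan' classifier: weigh the evidence for group A
    versus group B, with a small seeded stochastic nudge standing
    in for the software's random component."""
    rng = random.Random(seed)
    score = sum(w * x for w, x in zip((0.6, -0.4, 0.2), features))
    score += rng.gauss(0, 0.05)  # the part the seed controls
    return ("A" if score > 0 else "B", round(score, 4))

scan = [1.0, 0.5, -0.2]
# Same seed -> identical decision and score, run after run:
print(classify(scan, seed=42) == classify(scan, seed=42))  # True
```

With a fixed seed the whole cascade, stochastic nudge included, replays identically; varying the seed varies the estimate slightly without changing what kind of process the “decision” is.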

    And essentially I think that’s how we make decisions, except that some of those decision branches require us to consult the state of “I” in some past circumstance or some simulated future circumstance – which I think is the core of what we call “consciousness” – and our goals, and thus the values with which we weight our decisions, are related to the value we place on various possible states of “I” (as well as various possible states of “you”, and “she”, and “they”).

    William separates the “I” from that process – for him, it seems the “I” is merely the “user” of the software, and when presented with the results of the material cascade, has the option of acting on it, rejecting it, or, presumably, setting it to run a bit longer, or with different inputs.

    What I am saying is that that process is not only part of the cascade, but is meaningless outside the cascade. It just gets us back to the homunculus again – on what basis does the “I” decide to let the decision run longer, reject it, or act on it? It’s got a recommendation from the system – so what else does it need to know in order to decide whether to act? If nothing, how is it causal? If something, where does that information come from? If random, how is it willed?

  22. Because we don’t live in a deterministic world, I think this is untrue, but I also think it’s irrelevant, and I’m happy to use it as a simplification.

    I wholly agree. I was just trying to summarize William’s take. If history is any indication however, my summary is likely incorrect in some way.

  23. What I am saying is that that process is not only part of the cascade, but is meaningless outside the cascade. It just gets us back to the homunculus again – on what basis does the “I” decide to let the decision run longer, reject it, or act on it? It’s got a recommendation from the system – so what else does it need to know in order to decide whether to act? If nothing, how is it causal? If something, where does that information come from? If random, how is it willed?

    I think the key sticking point is that for you – or rather in your understanding (and, I think, that of most folk hereon) – the cascade process allows “us” to view and analyze input and thus come up with a weighted decision based on the various options available. In William’s view, the cascade process forces “us” to select a specific object regardless of the choices, and thus we really have no “choice”.

  24. Lizzie,

    What I am trying to probe is: in what sense can a decision be deliberate yet free from any causal factor, such as the information on which such a decision might sensibly be based?

    This is indeed the crux of the matter and what I’ve been trying to get William to give an example of, in my own way.

    Here’s how I see it.
    A and B cause C to be the most logical decision at time T.
    However, as William is free from that causal relationship forcing that particular outcome (the logical choice) to be C, he is free to choose C1 instead. But on what basis? Where has this “extra” information come from that enables C1 to be the “better” choice, given that only A and B are available?

    Seems to me that William has fooled himself here.
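    OMagain’s A-and-B-cause-C argument can be put in code. This toy function (entirely hypothetical – it stands in for any deterministic decision cascade) shows that with only A and B as inputs, the output cannot deviate to C1 without either a new input or a random element:

```python
def decide(a, b):
    """Hypothetical deterministic decision cascade: with only A and B as
    inputs, the 'most logical' choice C is fully determined.  Returning
    C1 instead would require either extra information (a new input) or a
    random element -- there is no third place for the deviation to come from."""
    return "C" if a + b > 0 else "C1"

# Same inputs, same choice, on every one of 100 runs:
choices = {decide(1, 2) for _ in range(100)}
```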

  25. OMagain:
    Lizzie,

    This is indeed the crux of the matter and what I’ve been trying to get William to give an example of, in my own way.

    Here’s how I see it.
    A and B cause C to be the most logical decision at time T.
    However, as William is free from that causal relationship forcing that particular outcome (the logical choice) to be C, he is free to choose C1 instead. But on what basis? Where has this “extra” information come from that enables C1 to be the “better” choice, given that only A and B are available?

    Seems to me that William has fooled himself here.

    I fooled me the same way for about half a century.

    Easily done.

  26. Variety is the spice of life. Think about it. Think not just about intelligent and rational humans making rational decisions, but also about curiosity and exploration. Every living thing, in one way or another, boldly goes where it hasn’t gone before. Curious cats, mice, rats. Even plants.

    And even bacteria.

  27. Lizzie: I fooled me the same way for about half a century.

    Easily done.

    No, you are both fooling yourselves now. There is no more information. If free will exists, then with the same information we are free to choose C or C1.
    The deterministic will needs new information in the cascade of events to change the result from C to C1, just as your scanner needs a new input to change the classification of the image.
    If we need new information to choose C instead of C1, then we are not free; our answer depends fully and completely on the data we have, and we are no more responsible for our actions than your scanner is for the classification of the image.
    Of course, the possibility of changing C for C1 has no support in the physical laws – it is impossible, unless the choice is by chance. Then again we have no responsibility for our actions.
    That is why Coyne says “free will” and morality are an illusion: if materialism is true, there is no way to explain an informed decision between C and C1 with the same data in the cascade of events.

  28. Every living thing, in one way or another, boldly goes where it hasn’t gone before. Curious cats, mice, rats. Even plants.

    And even bacteria.

    nitpick

    With “tumble and run”, E. coli never know where they are going. They only know whether they are better or worse off than where they have just been.

    /nitpick

    The evolution of plants is often neglected, but the ability of plants to stumble into available niches without sensory systems or taxis (either sense 🙂 ) never ceases to amaze me.

  29. Blas: ..if materialism is true

    I think the consensus here is that “materialism” as pejoratively used by WJM and perhaps Blas is a strawman. In the light of Heisenberg’s uncertainty principle, particle/wave duality and radioactivity, strict determinism is unsustainable.

  30. If determinism is unsustainable, why does Lizzie’s scanner always classify the images in the same way?
    Also, if determinism is unsustainable, what is the alternative? Chance.
    Then we act not because of a deterministic chain of processes but by chance. That does not solve the problem of free will. Nor does a combination of deterministic and stochastic processes solve the problem of free will.
    Lizzie has no choice: she has to admit that if the materialistic model of reality is true, personal responsibility for our actions is an illusion.
    Coyne wins.

  31. Blas:
    If determinism is unsustainable, why does Lizzie’s scanner always classify the images in the same way?

    At a sufficiently fine scale, all raw images are unique. Any filter will sort them into whatever categories the chosen parameters specify. What does that have to do with determinism?

    Also, if determinism is unsustainable, what is the alternative? Chance.

    I don’t currently know. But I’d rather not know than make some stuff up to believe.

    Then we act not because of a deterministic chain of processes but by chance. That does not solve the problem of free will. Nor does a combination of deterministic and stochastic processes solve the problem of free will.

    As I said, there is no strict determinism for our universe, according to current observations. We can only predict statistical rates of radioactive decay, never when a particular atom will decay (nor can we tell one atom of an isotope from another). The “problem” of free will is an imaginary one.

    Lizzie has no choice: she has to admit that if the materialistic model of reality is true, personal responsibility for our actions is an illusion.
    Coyne wins.

    I am sure Lizzie can and will speak for herself and will choose to respond or not.
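    Alan Fox’s point about radioactive decay can be illustrated with a quick Monte Carlo sketch (a simplification, assuming a fixed per-step decay probability): the aggregate rate is predictable to within a narrow band, while nothing in the model says which individual atom will decay.

```python
import random

def simulate_decays(n_atoms, p_decay, seed):
    """Each atom independently decays with probability p_decay in one
    time step.  Which atoms decay is unpredictable in advance; only the
    aggregate rate is."""
    rng = random.Random(seed)
    return [rng.random() < p_decay for _ in range(n_atoms)]

# Ten runs of 10,000 atoms with a 30% per-step decay probability.
counts = [sum(simulate_decays(10_000, 0.3, seed=s)) for s in range(10)]
# Each count clusters near 3,000, yet no run tells us beforehand
# whether any particular atom will be among the decayed.
```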

  32. Alan Fox:

    The “problem” of free will is an imaginary one.

    Thank you for confirming my view. Lizzie will do the same; it is a matter of time.

  33. I agree for the sake of this discussion that libertarian free will (LFW) adds nothing necessary to the process description of how anyone behaves. The computational model, as a theory, is perfectly fine (arguendo), perfectly sufficient, to describe how people act and make decisions from a functional perspective. There is literally, IMO, no experience, including an experience of free will, that cannot be explained (at least hypothetically) as phenomena generated by the computation. As Liz has said, for the sake of debate, it is a difference that makes no difference, and so is an unnecessary added commodity, at the level of describing what we experience and observe.

    But, that is not why one premises libertarian free will. The reason LFW is believed in is not because it makes a describable, quantifiable, experience-able, functional difference in what we experience and observe that the biological computation model cannot hypothetically provide. LFW is presumed because of the difference it would make (if real) in what we are (the nature of being), ontologically speaking, and what existence is. It moves us from being computationally determined products of matter into being something that can do something other than what the computation dictates.

    These are two fundamentally different concepts about the nature of what a human being is, and the nature of what existence is. Yes, it adds a perhaps unnecessary commodity to the physical description, but it is not a scientific premise – it is a metaphysical premise, and one that is necessary to hold up a certain worldview – a certain perspective of what “self” is and means.

    That is what the argument is about – not that LFW is necessary or adds anything to the physical description or to even to one’s experience.

    I do think, however, that it can be argued that the two different schema have a profound impact on the behaviors of those operating under them, and on culture in general.

    Now, what LFW **might** add (even if unprovable) to the choosing process is that it would be able to jump out of the computational program and gather alternate data or instigate new programming venues that were unavailable to the computation. There would be no way to definitively prove that this occurred, because of the non-linear, chaotic nature of the computation.

  34. Now, it might be that LFW is possible to evidence, and I have a few ideas on how that might be possible, but I’m not making that case here.

  35. William J. Murray: There is literally, IMO, no experience, including an experience of free will, that cannot be explained (at least hypothetically) as phenomena generated by the computation. As Liz has said, for the sake of debate, it is a difference that makes no difference, and so is an unnecessary added commodity, at the level of describing what we experience and observe.

    You may be being overgenerous here, William, as some will say that the Hard Problem either has not been, or cannot be, accounted for “by the computation”. We certainly don’t know all the details, although I do think that the remaining problem, while hard, is not Hard.

    William J. Murray: Yes, it adds a perhaps unnecessary commodity to the physical description, but it is not a scientific premise – it is a metaphysical premise, and one that is necessary to hold up a certain worldview – a certain perspective of what “self” is and means.

    Yes, I would agree, and that was my own position until only a few years ago. As a neuroscientist, I knew (or at least considered) that neuroscience provides a potentially complete model. My reason for holding to LFW was essentially theological – I couldn’t see how one could assign moral responsibility in its absence, and so I posited it as a necessary condition of moral responsibility, in which I simply had faith.

    William J. Murray: Now, what LFW **might** add (even if unprovable) to the choosing process is that it would be able to jump out of the computational program and gather alternate data or instigate new programming venues that were unavailable to the computation. There would be no way to definitively prove that this occurred, because of the non-linear, chaotic nature of the computation.

    Well, possibly. But then you are simply positing an additional data-gatherer – a virtual mini-brain for whom the final decision is the result of input. And so “caused”.

    There is still an irreducible incoherence about the idea of an ultimate arbiter that is both informed and uncaused.

    That’s why instead of trying to peel yet more layers off the onion in order to find “I” in the middle, I realised “I am the whole Onion”.

    There are worse things to be 🙂

  36. William J. Murray:
    Now, it might be that LFW is possible to evidence, and I have a few ideas on how that might be possible, but I’m not making that case here.

    Interesting. I hope you will consider presenting those ideas in a post here.

  37. Genetic variation is a form of going where you haven’t gone before. As is dispersing seeds and producing shoots and runners and roots.

    Life does trial and feedback. Doing stuff without knowing the consequences is simply one of those things living things do. As is learning from consequences.

  38. William,

    The reason LFW is believed in is not because it makes a describable, quantifiable, experience-able, functional difference in what we experience and observe that the biological computation model cannot hypothetically provide. LFW is presumed because of the difference it would make (if real) in what we are (the nature of being), ontologically speaking, and what existence is… it is a metaphysical premise, and one that is necessary to hold up a certain worldview – a certain perspective of what “self” is and means.

    In other words, you believe in libertarian free will not because you have evidence for it, but because you want to believe in it. I suspected as much.

    To borrow your phrase, that is “the opposite of how one rationally comes to a conclusion.”

    Now, what LFW **might** add (even if unprovable) to the choosing process is that it would be able to jump out of the computational program and gather alternate data or instigate new programming venues that were unavailable to the computation.

    Presumably you wouldn’t want all “programming venues” to be equally available, because then our thoughts and behavior would become random and irrational.

    You still want constraints, but you want them to come from something other than the laws of physics.

    But constraints are still constraints, whether they come from the laws of physics or elsewhere. They rule out possibilities.

    What is so much better about immaterial constraints?

  39. Hi Lizzie,

    My comment above is stuck in the queue for some strange reason. Could you please fish it out?

    Thanks.

  40. William J. Murray,

    But, that is not why one premises libertarian free will. The reason LFW is believed in is not because it makes a describable, quantifiable, experience-able, functional difference in what we experience and observe that the biological computation model cannot hypothetically provide. LFW is presumed because of the difference it would make (if real) in what we are (the nature of being), ontologically speaking, and what existence is. It moves us from being computationally determined products of matter into being something that can do something other than what the computation dictates.

    This is still a mischaracterization of the current scientific understanding of “mind.” It still implies a deterministic view of conscious behavior; and it explicitly hauls in determinism on the back of “computation.”

    There are hierarchies of sensor responses to the environment going all the way from the sensing of gradients, to the sensing of light, to the responses of heliotropes to light, to automatic movement toward prey, all the way up to thinking combined with memories of the past and projections of the future. Humans aren’t the only ones who have “minds.”

    Some of the more primitive sensing systems, such as those that sense gradients or light, produce automatic responses just as a servo system does.

    Once there are more complex inputs and memory, the responses of the system take on “more intelligent” behavior by making “decisions” depending on which sensors and which memories carry the dominant weight as inputs to the system.

    When we get to the level of memory that can “remember” remembrances and their chronological order, then we are approaching the point where projections of those sequences of memories into an “imagined” future become grist for input to the responding system.

    The sense of time depends on hierarchies of memory, some of which keep track of the order of memories, i.e., distinguish which memories have more and more complete chains of events contained in them.

    There isn’t any “computation” going on here; there are simply accumulations of memories of sensor input that become available for additional input. Memories of events that are painful feed into the system as suppressors of movement toward those experiences. Memories of events that made the system “feel better” are fed in as input to repeat those experiences.

    Eventually, at a sufficiently complex level of a neural network, the organism’s experiences and memory inputs are registered as thought. “Computation” is only apparent if the neural system is sufficiently complex to compute and be conscious of computing; for example, when laying out a navigational route from one part of the planet to another, or when doing actual mathematical operations that symbolize experience.
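    Mike Elzinga’s description of memories feeding back as suppressors and promoters can be sketched as a toy weighting scheme (the action names and the +1/−1 scoring are invented for illustration):

```python
def choose_action(actions, memories):
    """Toy version of the weighting described above (action names and
    the +1/-1 scoring are invented for illustration).  Painful memories
    feed back as suppressors of an action; pleasant ones as promoters."""
    def accumulated_weight(action):
        # Sum of remembered outcomes: +1 'felt better', -1 'was painful'.
        return sum(memories.get(action, []))
    # The 'decision' is simply whichever action carries the dominant weight.
    return max(actions, key=accumulated_weight)

memories = {
    "touch_stove": [-1, -1, -1],   # repeatedly painful: suppressed
    "eat_berry": [+1, +1],         # felt better: promoted
}
picked = choose_action(["touch_stove", "eat_berry"], memories)
```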

  41. Lizzie, I haven’t had any luck talking to Blas and William. Could you please tell me what the operational consequences of “moral responsibility” are?

    We hold people accountable for purely utilitarian reasons. What is added by calling it moral responsibility?

    The law has struggled with this issue for centuries, and I suspect will continue to struggle. The analysis gains nothing from adding layers of homunculi.

  42. petrushka,

    We hold people accountable for purely utilitarian reasons. What is added by calling it moral responsibility?

    The law has struggled with this issue for centuries, and I suspect will continue to struggle. The analysis gains nothing from adding layers of homunculi.

    The evolutionary view of morality says that “moral behavior” is learned from experience among members of a species that has evolved to a point where individuals in a group begin to understand that what makes them unhappy is what also makes others unhappy. One can even see this development take place in children as they mature.

    The reason the “law” struggles is because the human population has been continually increasing and encountering new experiences and interactions with others; and new experiences require reassessments of collective behavior.

    It is easy to observe that “moral” behavior is not handed down from deities through people who proclaim themselves to be spokesmen for those deities. Sectarian notions of “morality” have yielded to secular experiences.

    Sectarian blood wars are themselves clear evidence that even an alleged single deity doesn’t provide a “universal moral compass,” as many sectarians seem to believe.

  43. We hold people accountable for purely utilitarian reasons.

    That’s how it should be, in my opinion, but the idea of retributive justice is very much alive in the US and elsewhere.

  44. Mike Elzinga:

    There isn’t any “computation” going on here; there are simply accumulations of memories of sensor input that become available for additional input. Memories of events that are painful feed into the system as suppressors of movement toward those experiences. Memories of events that made the system “feel better” are fed in as input to repeat those experiences.

    And that seems like a perfectly acceptable starting point for the evolutionary source of animals being able to acquire ‘qualia’.

  45. Well, possibly. But then you are simply positing an additional data-gatherer – a virtual mini-brain for whom the final decision is the result of input. And so “caused”.

    First, a fundamental distinction: you’re mistaking one of the things that I propose LFW does (if true) for what LFW is.

    Second, data collection unlimited by computational parameters is fundamentally different from computationally limited data collection. Computationally limited data-gathering and data interpretation are, IMO, not only functionally prone to bias; they necessarily entail an intrinsic, systemic bias that cannot be surmounted.

    Third, LFW would be able to switch perspectives (interpretive programs) and invent new ones in ways not available to computed systems.

    Fourth, LFW could create new data, given the metaphysical system I’ve proposed for it. It would create new things –

    Now, please keep in mind that all of the above would be describable under the materialist system as stuff the computation generates, but if LFW is true, LFW grants one data-gathering, interpreting and creative abilities far beyond what a local (individual) computational system could possibly provide. However, none of that matters when it comes to the meat of LFW: providing a metaphysical, essential freedom and responsibility that a computation simply cannot provide – at the existential level.

    And that takes us to where I think there might be evidence found – if not now, some day – to back up the LFW (and non-physical consciousness) perspective; perhaps we will find that humans generate, store and use a quality and quantity of information beyond the physical capacity of any enclosed physical system the size of a human body.

    Or, we may find that conscious observation (or a proxy thereof) is necessary before quantum states are locally real, which would mean that the computational result (decision) cannot precede the willful intent of the observer. IOW, if the computational results of the quantum states are indeterminate until an observer decides how to look at them, the physical computation cannot be the thing that is “deciding” the result, because it would be indeterminate until the observer (1) decides how to look at it, and (2) looks. This may be true of brain states (and has been proposed to be true of brain states).
