The Ghost in the Machine

Let’s suppose there really is a Ghost in the Machine – a “little man” (“homunculus”) who “looks out” through our eyes, and “listens in” through our ears (interestingly, those are the two senses most usually ascribed to the floating Ghost in NDE accounts).  Or, if you prefer, a Soul.

And let’s further suppose that it is reasonable to posit that the Ghost/Soul is inessential to human day-to-day function, and necessary merely for conscious experience and/or “free will”; that it is at least hypothetically possible to imagine a soulless simulacrum of a person who behaved exactly as a person would, but was in fact a mere automaton, without conscious experience – without qualia.

Thirdly, let’s suppose that there are only a handful of these Souls in the world, and the rest of the things that look and behave like human beings are Ghostless automatons – soulless simulacra. But, as in an infernal game of Mafia, none of us know which are the Simulacra, and which are the true Humans – because there is no way of telling from the outside – from an apparent person’s behaviour or social interactions, or cognitive capacities – which is which.

And finally, let’s suppose that souls can migrate at will, from body to body.

Let’s say one of these Souls starts the morning in Lizzie’s body, experiencing being Lizzie, and remembering all Lizzie’s dreams, thinking Lizzie’s thoughts, feeling Lizzie’s need to go pee, imagining all Lizzie’s plans for the day, hearing Lizzie’s alarm clock, seeing Lizzie’s watch, noting that the sky is a normal blue between the clouds through the skylight.

Somewhere an empty simulacrum of Barry Arrington is still asleep (even automatons “sleep” while their brains do what brains have to do).  But as the day wears on, the Soul in Lizzie’s body decides to go for a wander.  It leaves Lizzie to get on with stuff, as her body is perfectly capable of doing; she just won’t be “experiencing” what she does (and, conceivably, she might make some choices that she wouldn’t otherwise make, but she’s an extremely well-designed automaton, with broadly altruistic defaults for her decision-trees).

The Soul sees that Barry is about to wake up as the sun rises over Colorado, and so decides to spend a few hours in Barry’s body.  And thus experiences being Barry waking up, probably needing a pee as well, making Barry’s plans, checking Barry’s watch, remembering what Barry did yesterday (because even though Barry’s body was entirely empty of soul yesterday, of course Barry’s brain has all the requisite neural settings for the Soul to experience the full Monty of remembering being Barry yesterday, and what Barry planned to do today, even though at the time, Barry experienced none of this).  The Soul also notices the sky is its usual colour, which Barry, like Lizzie, calls “blue”.

Aha.  But is the Soul’s experience of Barry’s “blue” the same as the Soul’s experience of Lizzie’s “blue”?  Well, the Soul has no way to tell, because even though the Soul was in Lizzie’s body that very morning, experiencing Lizzie’s “blue”, the Soul cannot remember Lizzie’s “blue” now it is in Barry’s body, because if it could, Barry’s experience would not simply be of “blue” but of “oh, that’s interesting, my blue is different to Lizzie’s blue”. And we know that not only does Barry not know what Lizzie’s blue is like when Barry experiences blue (because “blue” is an ineffable quale, right?), he doesn’t even know whether “blue” sky was even visible from Lizzie’s bedroom when Lizzie woke up that morning.  Indeed, being in 40 watt Nottingham, it often isn’t.

Now the Soul decides to see how Lizzie is getting on.  Back east, over the Atlantic it flits, just in time for Lizzie getting on her bike home from work.  Immediately the Soul accesses Lizzie’s day, and ponders the problems she has been wrestling with, and which, as so often, get partly solved on the bike ride home.  The Soul enjoys this part.  But of course it has no way of comparing this pleasure with the pleasure it took in Barry’s American breakfast which it had also enjoyed, because that experience – those qualia – are not part of Lizzie’s experience.  Lizzie has no clue what Barry had for breakfast.

Now the Soul decides to race Lizzie home and take up temporary residence in the body of Patrick, Lizzie’s son, who is becoming an excellent vegetarian cook, and is currently preparing a delicious sweet-potato and peanut butter curry.  The Soul immediately experiences Patrick’s thoughts, his memory of calling Lizzie a short while earlier to check that she is about to arrive home, and indeed, his imagining of what Lizzie is anticipating coming home to, as she pedals along the riverbank in the dusk.  Soul zips back to Lizzie and encounters something really very similar – although it cannot directly compare the experiences – and also experiences Lizzie’s imaginings of Patrick stirring the sweet potato stew, and adjusting the curry powder to the intensity that he prefers (but she does not).

As Baloo said to Mowgli: Am I giving you a clue?

The point I am trying to make is that the uniqueness of subjective experience is defined as much by what we don’t know as by what we do.  “Consciousness” is mysterious because it is unique.  The fact that we can say things like “I’m lucky I didn’t live in the days before anaesthesia” indicates a powerful intuition that there is an “I” who might have done, and thus an equally powerful sense that there is an “I” who was simply lucky enough to have landed in the body of a post-anaesthesia person.  And yet it takes only a very simple thought experiment, I suggest, to realise that this mysterious uniqueness is – or at least could be – a simple artefact of our necessarily limited PoV.  And it is a simple step, I suggest, to consider that a ghostless automaton – a soulless simulacrum – is actually an incoherent concept.  If my putative Soul, who flits from body to body, is capable not only of experiencing the present of any body in which it is currently resident, but also that body’s past and anticipated future, yet incapable of simultaneously experiencing anything except the present, past, and anticipated future of that body, then it becomes a redundant concept.  All we need to do is to postulate that consciousness consists of having accessible a body of knowledge only accessible to that organism by simple dint of that organism being limited in space and time to a single trajectory.  And if that knowledge is available to the automaton – as it clearly is – then we have no need to posit an additional Souly-thing to experience it.

What we do need to posit, however, is some kind of looping neural architecture that enables the organism to model the world as consisting of objects and agents, and to model itself – the modeler – as one of those agents.  Once you have done that, consciousness is not only possible to a material organism, but inescapable. And of course looping neural architecture is exactly what we observe.
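The “model the modeler” loop described above can be illustrated with a toy sketch. This is not a claim about real neural architecture – the `Agent` class, its `world_model` dictionary, and the `reflect` method are all hypothetical illustrative devices – it only shows what it means, structurally, for a system’s model of the world to contain an entry for the system itself:

```python
# Toy sketch (illustrative only, not a model of actual neural machinery):
# an agent keeps a world-model of the agents it observes, and the
# "loop" consists of inserting a model of itself -- the modeler --
# into that same world-model, alongside the other agents.

class Agent:
    def __init__(self, name):
        self.name = name
        self.world_model = {}  # maps agent name -> beliefs about that agent

    def observe(self, other_name, observation):
        # Ordinary modelling of external agents.
        self.world_model[other_name] = observation

    def reflect(self):
        # The self-referential step: the agent adds itself to its own
        # world-model, recording that it is the thing doing the modelling.
        self.world_model[self.name] = {
            "is_modeler": True,
            "models": sorted(self.world_model),  # who it models so far
        }

lizzie = Agent("Lizzie")
lizzie.observe("Patrick", {"cooking": "sweet-potato curry"})
lizzie.reflect()

print(lizzie.world_model["Lizzie"]["is_modeler"])  # True
print(lizzie.world_model["Lizzie"]["models"])      # ['Patrick']
```

After `reflect`, Lizzie’s model of the world includes Lizzie-the-modeler as one of the agents in it – the minimal structural loop the paragraph above points to.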

I suggest that the truth is hiding in plain sight: we are conscious because when we are unconscious we can’t function.  Unless the function we need to perform at the time is to let a surgeon remove some part of us, under which circumstances I’m happy to let an anaesthetist render me unconscious.

367 thoughts on “The Ghost in the Machine”

  1. Well, you specified that they are identical. If the observer hasn’t seen the play before, she won’t know what happens next. If she has seen the play before, and the universes are stochastic, then she still won’t know. If she has seen the play before, and the universes aren’t stochastic, she will.

So, you’re saying you can predict what a hypothetical observer will know, and what it will say (predict), based on what it observed? If the system of a sentient entity is chaotic, unpredictable and non-linear, then you have no idea what the observer will say will happen next. Correct?

This is what, to me, belies your entire position. Your arguments are constructed and comprised of epistemological inferences that only follow from an ontological position you claim is false. Your characterization of the “outside observer” is one that is not simply saying whatever its particular computation happens to output in its particular case of observing two identical universes running along their course (your claimed ontological position); your characterization of that observer only follows from an ontological position that the observer is capable of shrugging off whatever their own computation commands and understanding what is occurring, and making a prediction that is free from its own computational limitations and interpretive biases.

  2. William J. Murray,

    Fourth, LFW could create new data, given the metaphysical system I’ve proposed for it. It would create new things –

    How does this “libertarian free will” keep from violating the laws of physics; in particular, conservation of energy?

    We can easily measure the energies that are required to move atoms and molecules around in living systems; and we don’t see any violations of the laws of physics anywhere. In fact, the phenomena of hypothermia and hyperthermia set some very fine limits on the energy window in which the nervous system operates.

    How do you explain a non-material entity that can interact with a material neural network and not show itself as a violation of conservation of energy?

    Give us some magnitudes of energy that your homunculus – or whatever your “LFW” is – must add to the physical nervous system in order to push atoms and molecules around.

  3. William J. Murray,

    Second, data collection unlimited by computational parameters is fundamentally different from computationally limited data collection. Computationally limited data-gathering and data interpretation is, IMO, functionally not only prone to bias, it necessarily entails an intrinsic, systemic bias that cannot be surmounted.

    Unlimited computation (via data gathering) is still computation. You’re caught in a trap. You believe you’ve escaped some important barrier by going “metaphysical”, but you are no more “free of cause” and “free of constraints” in a superstitious mode than you are in a materialist mode.

If we stipulate that your “supernatural will” has “unlimited data collection” capacities (whatever that may mean), you are still computing. You are processing input (the data) toward a directed output. Escaping the flesh doesn’t buy you anything towards reifying “free will”. For any given act, you are either constrained by internal/external factors (determined action), or you are not (random output), or some combination of the two (deterministic narrowing of choices to a random “coin flip”, for example).

    “Determined” and “random” exhaust the phase space for you. There are no more categories, and supposing that your supernatural mind is way more sophisticated (or — this is odd: unbiased?) doesn’t provide any relief from your actions as wholly comprised of deterministic and non-deterministic outcomes.

    And that takes us to where I think there might be evidence found – if not now, some day – to back up the LFW (and non-physical consciousness) perspective; perhaps we will find that humans generate, store and use a quality and quantity of information beyond the physical capacity of any enclosed physical system the size of a human body.

    This is just so much more computing, and via your own words. Processing “non-physical information” [sic] just makes you a “dualist computer”; your free will is still the product of internal and external states and trajectories, with some stochastic mixed in if you like. The “free” in your free will in this model rises no higher than a Churchland-style eliminative materialist view of free will, in that any “freedom” is random, indeterminate, beyond your choosing.

    Any self-control you assert metaphysically leaves you right where you were when you were thinking about this as a (hypothetical) materialist. If you choose A over B on the basis of X, A happens to you because of X. If you have A chosen for you over B by some stochastic process, A just happened to you all the same.

    The theist impulse is commonly driven by the safety afforded by pushing the important issues into the “spiritual realm”, where magical thinking avails much comfort, and avoids critical analysis. But this is one area where the concepts underneath this issue (“caused by internal/external states” and “random drivers of output”) provide no safe harbor in supernatural appeals.

  4. eigenstate,

Escaping the flesh doesn’t buy you anything towards reifying “free will”. For any given act, you are either constrained by internal/external factors (determined action), or you are not (random output), or some combination of the two (deterministic narrowing of choices to a random “coin flip”, for example).

    Exactly, which is why I would like to hear William’s response to my earlier question:

    Presumably you wouldn’t want all “programming venues” to be equally available, because then our thoughts and behavior would become random and irrational.

    You still want constraints, but you want them to come from something other than the laws of physics.

    But constraints are still constraints, whether they come from the laws of physics or elsewhere. They rule out possibilities.

    What is so much better about immaterial constraints?

    It’s worth stressing that although many of us who deny libertarian free will are also materialists, our denial of LFW doesn’t depend on our materialism.

    Libertarian free will is just as incoherent in a dualist or idealist framework as it is in a materialist one.

  5. keiths:
    Libertarian free will is just as incoherent in a dualist or idealist framework as it is in a materialist one.

    I would be happy to ask WJM to respond to your earlier challenge, rather than (or before) mine.

    This is a crucial point, and I’m interested to see whether WJM can get hold of the real problem, here.

    LFW is not just “problematic under materialism”. It’s “invincibly incoherent” as a concept, and can’t even be helped, let alone saved by notions of supernatural dimensions and capacities.

  6. WJM’s Body: I’m hungry. Make me a sandwich.
    WJM’s Mind: Shove off. You’re not the boss of me.

    [Time passes…]:

    WJM’s Body: Look, I’m starving: Make me a fucking sandwich!
    WJM’s Mind: Oh, just to shut you up! But you’ll have a burger and be happy with it. Because I make the choices, not you. You are but a temporary vessel. Relish?
WJM’s Body: Jeez, do we have to go through this every sodding time?

  7. petrushka:
    We hold people accountable for purely utilitarian reasons.

Then the winners are always right and the losers wrong. The law is the right of the man who has the power to rule. Stalin and Gandhi are the same.

  8. “Determined” and “random” exhaust the phase space for you. There are no more categories, and supposing that your supernatural mind is way more sophisticated (or — this is odd: unbiased?) doesn’t provide any relief from your actions as wholly comprised of deterministic and non-deterministic outcomes.

Quite true, Eigenstate. However, I suspect there are those of a theological bent who “feel within their heart” that choices that are “determined” through the act of the “spirit” or “soul” freely weighing the options independent of material constraints – guided and constrained, if you will, by “divine wisdom and insight” – is far more free and self-determining than having specific options forced upon us by the laws of nature. Clearly I do not subscribe to this thinking as it relies upon an erroneous, strawman understanding of material systems, but folks like William really do appear to see the distinctions this way.

  9. Robin: Quite true Eigenstate. However, I suspect there are those of a theological bent who “feel within their heart” that choices that are “determined” through the act of the “spirit” or “soul” freely weighing the options independent of material constraints – guided and constrained, if you will, by “divine wisdom and insight” – is far more free and self-determining than having specific options forced upon us by the laws of nature. Clearly I do not subscribe to this thinking as it relies upon an erroneous, strawman understanding of material systems, but folks like William really do appear to see the distinctions this way.

Of course you are right, Robin, and as one who was a Christian for a large part of my life, I can well understand the view that a disembodied spiritual mind is somehow more noble and pure, unburdened as it is from the corruptions of the flesh, etc.

    Let’s stipulate the supernaturalist intuition for the moment, and assume the dualist stance; our “super-intellect” is an immaterial executive that operates free of carnal and all physical parameters.

Now, my “mind” in this model is “free of the flesh”, and thus not enslaved to its demands (although I think here Allan Miller’s point immediately above about the body needing food, etc. is a powerful one, but we’re sticking with the dualist thing for the moment), but has being “free from the flesh” earned this mind any more freedom, in the sense of the word that WJM and other theists who embrace LFW intend?

    No.

    If this spiritual mind is going to act, to decide anything at all, it still must draw from deterministic and random inputs for its judgments. Do I want to make myself a sandwich? Being “disembodied” and somehow unchained by my hunger doesn’t give me LFW-style agency. I still either let some factors determine my choice, or I let no factors determine my choice.

The “race to the bottom” on this idea ends up considering God as an agent. Can God choose A over B? Well, if God relies on some reasons X and Y to choose A, then X & Y are determining for God what to choose. But wait, you say, God is free to determine whether X & Y are sufficient to choose A!

That just pushes the problem down a level. Are X & Y sufficient to choose A for God? Well, yes, but why? Well, perhaps it’s for reason Z, in which case God is now bound to choose A, based on X & Y, which are preferable because of Z. Else, there is no reason for making X & Y dispositive here in choosing A, and it’s a random basis for the decision. Something that “happens to God”, just as much as having Z drive the primacy of X & Y as the driver of choice A.

    Once a random choice is identified, a “coin flip”, it’s the end of the line, and no further regress is needed. But that’s “choices happening to God without his choosing”. If we identify Z as the driver for X & Y in God’s case, we are just one more frame down the stack. Wash, rinse, repeat, as far toward infinite regress as you (or God) would like to go.

There is no escape, even (especially) for an omniscient, all-spirit (see 2 Timothy, for example) God. For any putative act of will, God is either controlled by dispositive factors, or draws on coin flips. Either way, God’s choices happen to him in ways that have him just as much a spectator to the process as you or I. God does not and cannot have any more LFW than you and me. God’s in a worse position if we understand that “coin flips” are even more problematic for an omni-god than they are for a human, never mind that an omniscient God understands all this perfectly beforehand and so is not even able to have his choices revealed to him as time goes by, but can see the puppet strings and coin flips from and to time eternal…

All this really reveals is the nature of the conceits humans commonly maintain about free will and agency. It’s not a load-bearing concept. The real error is the superstition that a logical contradiction obtains, that our choices are caused, yet uncaused, ours yet not of us, somehow.

  10. Let’s not get too bogged down, here William – eigenstate and keiths have expressed what I was trying to press you to much better than I have done. But for what it’s worth:

William J. Murray: So, you’re saying you can predict what a hypothetical observer will know, and what it will say (predict), based on what it observed? If the system of a sentient entity is chaotic, unpredictable and non-linear, then you have no idea what the observer will say will happen next. Correct?

I’m hypothesising an observer outside the system. You can make it a deterministic observer if you wish. But this raises precisely the reductio ad absurdum I was getting at – asking whether you would get the same result if you “replayed” the system begs the question about what this would mean in the absence of any external observer.

This is what, to me, belies your entire position. Your arguments are constructed and comprised of epistemological inferences that only follow from an ontological position you claim is false. Your characterization of the “outside observer” is one that is not simply saying whatever its particular computation happens to output in its particular case of observing two identical universes running along their course (your claimed ontological position); your characterization of that observer only follows from an ontological position that the observer is capable of shrugging off whatever their own computation commands and understanding what is occurring, and making a prediction that is free from its own computational limitations and interpretive biases.

In that case let’s drop it. I was merely pointing out that your question is, IMO, incoherent – what would it mean to “replay” the system and get “the same” answer? I’m suggesting that there is no answer to the question absent an external observer, and if there is an external observer, then the answer will depend on the knowledge possessed by that observer.

    You seem to want an answer that does not intrinsically assume an external observer. I’m saying there isn’t one.

I’m suggesting that there is no answer to the question absent an external observer, and if there is an external observer, then the answer will depend on the knowledge possessed by that observer.

    Furthermore, if there is an external observer, how does it “receive” and “communicate” that “knowledge?”

    The existence of a disembodied “intelligence” not only has all the problems of making decisions based on input from a set of sensors in contact with a material world full of contingency and distorted and incomplete data, it has to do it across a material/nonmaterial interface. What is the mechanism involved at this interface; and how does it NOT violate the laws of physics?

    Is there a purifying filter at that interface that extracts “TRUTH” from the messy signal of reality coming in through those materialist sensors and across the boundary to whatever the nonmaterial being is?

    How does this “TRUTH” get back across that boundary into the messy world of the material sensors and not be corrupted again when “pressed into action” by the manipulation of those material neurons that control the muscles?

  12. I’ve started a new thread here if people would like a change of scene.

    But feel free to respond to specific comments made on this thread here, if you prefer.

  13. eigenstate,

    God does not and cannot have any more LFW than you and me.

    Yes, and very few theists realize this.

    Ask theists “Is God free to sin?” and watch the tap-dancing begin.

  14. keiths: That’s how it should be, in my opinion, but the idea of retributive justice is very much alive in the US and elsewhere.

Absolutely. Although it’s not absent from popular feeling in the UK, it’s on the whole a much less prominent part of sentencing. People tend to be much more concerned that perpetrators of terrible crimes should be prevented from ever repeating them than that they should be punished for them.

    It always strikes me as slightly odd that more religious societies seem to be stronger on retribution, given that if some Deity is going to give everyone their Just Desserts in the next life, then why worry about making sure they also get it in this one? But I guess it just goes with the territory – if you think in retributive terms at all, it probably transfers.

    Unless my correlation is screwy anyway (which it may be).

Mike Elzinga: There isn’t any “computation” going on here; there are simply accumulations of memories of sensor input that become available for additional input. Memories of events that are painful feed into the system as suppressors of movement toward those experiences. Memories of events that made the system “feel better” are fed in as input to repeat those experiences.

    Aardvark: And that seems like a perfectly acceptable starting point for the evolutionary source of animals being able to admire ‘qualia’.

    So the origin of memories is deftly left unexplored, unexplained in favor of getting that much faster to an ‘evolutionary starting point’.

    Yep. Critical thinking at its best.

  16. Lizzie,

    It always strikes me as slightly odd that more religious societies seem to be stronger on retribution, given that if some Deity is going to give everyone their Just Desserts in the next life, then why worry about making sure they also get it in this one?

    Yes, although Christian criminals (Protestants, anyway) are off the hook if they die having repented and affirmed their belief in Christ as savior. Perhaps many of their fellow Christians are determined to see them punished in this life since they won’t be punished in the next.

    ‘WWJD?’ is not a question that occurs to many Christians when they are meting out punishment.

    Blasphemy is another interesting case. Many believers think that blasphemy should be punished, but why? If it bothers God, he is free to turn blasphemers into toads or smite them on the spot. He certainly doesn’t need puny humans leaping to his defense.

    And if the punishment is for the sake of believers who are “offended” by the blasphemy, then why are they offended? If their faith is strong, they won’t grant the blasphemy any credence. If their faith is weak and the blasphemy sows the seeds of doubt, then that must be okay with their omnipotent God because otherwise he would have prevented it.

  17. Steve: So the origin of memories is deftly left unexplored, unexplained in favor of getting that much faster to an ‘evolutionary starting point’.

Yep. Critical thinking at its best.

    Not every post has to give a Complete Explanation of Everything, Steve 🙂

Memory is actually very well understood, but is better understood, IMO, as the capacity to recreate an experience than as a “store”. But “store” is a reasonable working model nonetheless.
