Inside looking out?

Barry has a post up at UD on the same topic as one I’ve had half-written for a while now, but I thought I’d jump the gun and comment on Barry’s here, as it raises an important point, nicely and simply made, as Barry’s post-title puts it:

“If my eyes are a window, is there anyone looking out?”

Barry writes:

As we were winding our way through Custer State Park I became aware of myself looking through my eyes as if they were a window. I had a keenly felt sensation of what theorists of mind call the “subject-object” phenomenon. I perceived myself as a “subject” contemplating and having a reaction to an “object” (the beautiful scenery of the park).

Given their premises, materialists must believe the brain is a sort of organic computer, in principle very much like the computer on which I am writing this post. The subject-object problem is a seemingly insurmountable obstacle to this theory. Closely related to this issue is the idea of “qualia,” the subjective perception of experience (the cool blueness of the sky, the sadness of depression, the warmth of a fine sunset, the tangy-ness of a dill pickle).

Consider a computer to which someone has attached a camera and a spectrometer (an instrument that measures the properties of light). They point the camera at the western horizon and write a program that instructs the computer as follows: “when light conditions are X print out this statement: ‘Oh, what a beautiful sunset.’” Suppose I say “Oh, what a beautiful sunset” at the precise moment the computer is printing out the same statement according to the program. Have the computer and I had the same experience of the sunset? Obviously not. The computer has had no “experience” of the sunset at all. It has no concept of beauty. It cannot experience qualia. It is precisely this subjective experience of the sunset that cannot be accounted for on materialist principles. It follows that if materialist premises exclude an obviously true conclusion – i.e., that there is someone “in there” looking out of the window of my eyes – then materialist premises must be false.

The question in the title of this post is: “If my eyes are a window, is there anyone looking out?” The materialist must answer this question “no.” That the materialist must give an obviously false answer to this question is a devastating rebuke to materialism.

So, first of all:

Given their premises, materialists must believe the brain is a sort of organic computer, in principle very much like the computer on which I am writing this post. The subject-object problem is a seemingly insurmountable obstacle to this theory.

It is true, in a sense, that I think (“believe” is not a word I find very useful – “posit” would be better) that “the brain is a sort of organic computer”.  It is certainly organic, and it certainly computes things (in my case, not very well, which is why I use a computer!).  But I do not posit that my brain is “in principle very much like the computer on which I am writing this post”.  If it were, then the “subject-object problem” would indeed be “a seemingly insurmountable obstacle to this theory”.

For a start, the computer on which I am writing this post receives all its input from human sources.  My brain, in contrast, receives its inputs from all manner of external sources, and, what is more, depending on those inputs, “computes” a motor response which it sends to my body (my eyes, my neck, my torso, my legs) that changes the input. In other words, my brain is not (or not simply) a tool of some other intelligent agent, my brain is the “tool” of the organism that I call “me”, and which incorporates (literally incorporates) not only my brain, but my entire body, motor and sensory apparatus, digestive, circulatory and endocrine system and all.  So the brain is not simply an information-processor, like the computer on my desk, but part of an information-gathering system – moreover, one in which the information to-be-gathered is itself an output of the system.

Secondly, as implied above, this makes brains a subsystem of a whole system that is most strongly characterized by re-entrant feedback loops, in which not only is information processed, but in which the output of that processing is re-entered as input into the decision-making process as to what further information to seek. So if we want a materialist analog to the brain, we need to look at robots, not computers – i.e. things that can move their sensory apparatus as a function of what information they need.

Ah, need.  That’s another thing – organisms have needs (at its simplest, to survive, but with all kinds of sub-needs, and epiphenomenal needs supporting that basic need – we should probably leave the origin of those needs to one side for now…).  Organisms have needs, therefore they potentially have goals – outcomes that they seek, which we can also express as “desire, and take action to fulfill”.  And those goals themselves are part of what the brain sets, and changes, in the light of new information.

So no, Barry.  I, as a materialist (and, it should be said, a cognitive neuroscientist!) do not “believe” that the brain is merely an “organic computer” that is “similar in principle” to the one on your desk.  I think it is radically different to the computer on your desk, not least because it does a heck of a lot more than “compute”.  It is part of the system of tools that enables me, as organism, to survive, by not only computing what I have to do to reach my goals, but by computing those goals themselves, in the light of current information, seeking further information that may result in further adjustment of those goals, and selecting actions that will enable me to fulfill them.
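The loop I’ve described – sense, act, revise the goal, sense again – can be sketched in a few lines. Everything below is a toy of my own devising (the one-dimensional “world”, the “interest” function); it illustrates only the shape of a re-entrant feedback system, not any actual neural mechanism:

```python
def sense(position):
    # Stand-in for sensory input: how "interesting" the world looks from here.
    # (Arbitrary toy function; peak interest happens to be at position 7.)
    return -abs(position - 7)

def active_agent(position=0, steps=20):
    """An agent whose actions change its own future input."""
    goal = position
    for _ in range(steps):
        # Move the "sensor": sample the neighbourhood of the current position.
        nearby = {p: sense(p) for p in (position - 1, position, position + 1)}
        # Re-entrant step: the output of processing becomes input to the
        # decision about what to do next -- and revises the goal itself.
        goal = max(nearby, key=nearby.get)
        # Act toward the current goal, thereby changing what is sensed next.
        position += (goal > position) - (goal < position)
    return position

print(active_agent())  # → 7: the agent settles where its input is richest
```

The point of the sketch is that there is no separate “user” feeding the system its input: the agent’s own motor output determines what it senses next, which is exactly what the computer on your desk does not do.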

So….

Closely related to this issue is the idea of “qualia,” the subjective perception of experience (the cool blueness of the sky, the sadness of depression, the warmth of a fine sunset, the tangy-ness of a dill pickle). Consider a computer to which someone has attached a camera and a spectrometer (an instrument that measures the properties of light). They point the camera at the western horizon and write a program that instructs the computer as follows: “when light conditions are X print out this statement: ‘Oh, what a beautiful sunset.’” Suppose I say “Oh, what a beautiful sunset” at the precise moment the computer is printing out the same statement according to the program. Have the computer and I had the same experience of the sunset? Obviously not. The computer has had no “experience” of the sunset at all. It has no concept of beauty. It cannot experience qualia.

Indeed, your computer cannot.  That is because your computer is not an autonomous interactor with its environment, one that controls and adapts its own goals according to its needs – indeed, it has no needs.  We may need computers; the computer does not need itself – it does not need to survive.

It is precisely this subjective experience of the sunset that cannot be accounted for on materialist principles.

And so we have the non-sequitur:

  • P1 Materialists think brains are computers
  • P2 Computers cannot have experience
  • C: Materialist principles cannot account for experience.

Not only is the first premise wrong (see above), but C doesn’t follow anyway, because experience is not simply a function of brains but of entire organisms.

So this is wrong:

The question in the title of this post is: “If my eyes are a window, is there anyone looking out?” The materialist must answer this question “no.” That the materialist must give an obviously false answer to this question is a devastating rebuke to materialism.

My answer is no, not because I think there is no-one “looking out” but because I don’t accept the premise that “my eyes are a window”.  My eyes are not a window, they are simply the things I (qua organism) use for looking with, and I don’t “look out” of my eyes – I just look.

So there’s certainly someone looking.

Who it is will (probably) be the subject of my next post 🙂

183 thoughts on “Inside looking out?”

  1. It’s actually rather difficult to explain the perception of blue. It can be induced by moving patterns of light and dark, independently of wavelength. It can be induced by monochromatic yellow light.

    This would seem to be because the signal from the light receptors is converted to a digital code before arriving at the brain.

  2. You are equivocating on the word “evolution”, because the word is used in two senses. It’s used as a descriptive term, to describe a body of related observations (the set of facts). In this sense, evolution IS a fact, because the observations are real.

    But it’s also used to refer to a theory, a proposed set of mechanisms which together result in the observations we make. And as a proposed explanation, it becomes increasingly accurate. But the theory isn’t a fact. Evolution-the-fact is the set of observations; evolution-the-theory tries to explain them.

  3. As I read them, their view of reality has teleology woven inextricably through it, down to the finest detail. “Final cause” must exist for everything, and for everything that everything is composed of. It’s teleology all the way down.

    And I think that’s human nature. When a child asks why the sky is blue, he’s not really asking about Rayleigh scattering, he is asking for the purpose of the blueness.

    So when they try to construct a non-teleological model, they are defeated before they start. Such a thing is simply incomprehensible. And so their efforts are ludicrous, but I don’t think they are dishonest. Their reality consists of implemented purposes. Nothing else.

  4. Blas: It would be nice that evolutionists talk in that way instead of saying “evolution is a fact” or “it would be perverse to withhold provisional assent”. I would agree that ToE is an increasingly accurate model.

    It would be perverse to deny evolution, just as it would be perverse to maintain the earth is flat. The accuracy of the evolution model will proceed without altering the fact that it occurred.

  5. Joe Felsenstein:
    It seems strange for people like Denyse and Barry to maintain that although your brain is in your head, your “mind” is Somewhere Else. (Am I wrong about their position on this? It seems hard to imagine that they actually believe that, so perhaps I misunderstand them.)

    If you are walking through your kitchen one day and happen to hit your head, hard, on an open cabinet door, and you fall down unconscious, is your brain knocked out, while your mind keeps on working, as it’s Somewhere Else?

    Yes. It’s called dreaming. The soul keeps on thinking.
    Even if there’s no dreaming there is no reason to presume the mind is not still thinking. It’s just, perhaps, the memory that is not recording.
    The brain is not knocked out either, I think.
    Anyways the bible teaches we have a soul and this is made in God’s image. That’s why there is no difference in our thinking ability upon going to the afterlife.
    Our brain is unrelated to our thinking or intelligence.

  6. At my age my soul mostly needs to pee in the morning, but all the toilets in heaven are out of order.

  7. Cooling the blood?

    I do accept Robert’s contention at face value that his brain is not used for thinking.

  8. And, equally, the statistical inevitability of evolutionary change in a finite population of variant replicators can be predicted. What cannot be predicted – in any stochastic system – is the ‘next’ outcome.

  9. Colour is an interesting one. We perceive (approx.) 7 different colours in the rainbow. These can be reduced to discrete interactions – we have 3 different cone receptors, responding to Short, Medium and Long wavelengths (from that fragment of the em spectrum we call ‘visible’). But because each has a range, in a curve of sensitivity, the tails intersect, and so we have the possibility of distinguishing both peak and intermediate effects, giving us 7 bands that we tend to name, though we can distinguish up to a million shades. But it all boils down to the ability of protein to respond to specific photon wavelengths, most of which, for any given protein, shoot straight past, invisibly.

    The perception of the rainbow depends entirely on perspective. One has to be in one place, gathering the light internally-reflected in a particular arc of water droplets, and picking out those photons that interact most strongly with the 3 spectral proteins in our retinas (by some miracle, we perceive most strongly the part of the solar spectrum that suffers the least atmospheric filtration …!).

    Unless he has eyeballs, and isn’t everywhere at once, God can’t see rainbows.
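The overlapping-sensitivity-curve point in the comment above can be made concrete with a toy model. The peak wavelengths and Gaussian curve shapes below are rough, illustrative values, not real photopigment data; the point is only that a colour percept corresponds to the pattern of three responses, not to a dedicated receptor per colour:

```python
import math

# Approximate peak sensitivities (nm) for the three cone types; illustrative
# values, not measured photopigment data.
PEAKS = {"S": 420.0, "M": 530.0, "L": 560.0}

def cone_responses(wavelength_nm, width=60.0):
    # Bell-shaped sensitivity: strong response near the peak, tailing off.
    # Because the three curves overlap, intermediate wavelengths are encoded
    # by the *ratios* across cones rather than by any single receptor.
    return {cone: math.exp(-((wavelength_nm - peak) / width) ** 2)
            for cone, peak in PEAKS.items()}

for wl in (450, 500, 545, 600):
    r = cone_responses(wl)
    print(wl, {cone: round(v, 2) for cone, v in r.items()})
```

Each wavelength yields a distinct triple of responses, which is all the downstream system ever receives – consistent with the observation above that “blue” can be induced by stimuli other than short-wavelength light.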

  10. BA

    The existence of a universal trait that cannot be accounted for on the premise that it conferred a selective advantage to our ancestors is a devastating blow to the materialist creation myth (Darwinism).

    It is a devastating blow only to a strawman version of evolution. Natural Selection is but a special case of the more general principle that finite populations will inexorably lose their variation over the generations, due to sampling effects. This inevitably means that what is left is a ‘universal trait’.

    When NS is in operation, the variation lost tends to be the relatively detrimental variation, leading to adaptation. But even when NS is not in operation, the population changes, and concentrates some variants at the expense of others.

    See also: spandrels.

  11. Barry Arrington: Given their premises, materialists must believe the brain is a sort of organic computer, in principle very much like the computer on which I am writing this post.

    Barry Arrington is confusing the analogy with the thing itself. It used to be gears and levers, a clockwork universe. Now it’s computers.

  12. Lizzie: Some people call consciousness an “illusion”.

    A more apt description is that consciousness is a sensation.

  13. Lizzie: I’d say consciousness is a model.

    Heh. It’s a sensation (“fact”) and a model (“theory”).

  14. Joe Felsenstein,

    I think the issue isn’t so much that Barry and Denyse think the mind is somewhere else from the brain per se. I think they assume that the sensation of having a perspective on the world through their eyes means there is some essence behind their eyes that is actually separate from their eyes. They may well think their mind is that thing behind their eyes, but the point is they don’t seem to be able to understand that their mind/brain/eyes is an interactive system.

  15. Thanks, I take your point. I was perhaps off-topic in my comment: I was actually reacting to multiple previous UD posts in which Denyse and Barry were upset about the notion that the “mind” was embodied in the brain. As far as I am concerned, the mind is what the brain does. But they see things very differently and seem to think that the “mind” exists not in the brain but Somewhere Else. Again, I was probably OT.

  16. Back in the bronze age when I took physiological psychology, I was taught that eyes are part of the brain. Or vice versa.

    Barry and his tribe are cartographers printing “here be dragons” on every available uncharted territory.

  17. petrushka: It would be perverse to deny evolution, just as it would be perverse to maintain the earth is flat. The accuracy of the evolution model will proceed without altering the fact that it occurred.

    Then evolution is not an educated guess, it is a truth. If it were not a truth you could not be sure that “the evolution model will proceed without altering the fact that it occurred.” So do you maintain your

    “science is an iterative series of educated guesses and tests of their validity. Science does not produce TRVTH. It produces increasingly accurate models.”

    or are you going to make an exception for ToE?

  18. Blas: Then evolution is not an educated guess, it is a truth. If it were not a truth you could not be sure that “the evolution model will proceed without altering the fact that it occurred.” So do you maintain your

    “science is an iterative series of educated guesses and tests of their validity. Science does not produce TRVTH. It produces increasingly accurate models.”

    or are you going to make an exception for ToE?

    Blas, why are you playing this stupid “gotcha” game? What do you hope to get from this?

    What do you think you’re going to win, if you win?

  19. Lizzie:

    Ah, no – My “perception” of the sky is that it is blue. Actually, grey, right now. But someone with slightly different perceptual apparatus (a colour-blind person, for instance) might have a different “perception”.

    Well you do not answer what a perception is. But I will take two of your sentences to reply to:

    “by re-entrant feedback loops, in which not only is information processed, but in which the output of that processing is re-entered as input into the decision-making process as to what further information to seek.”

    How can we “seek” information? Can a robot “seek” information if it is not programmed to do it?

    “Organisms have needs, therefore they potentially have goals – outcomes that they seek, which we can also express as “desire, and take action to fulfill”.  And those goals themselves are part of what the brain sets, and changes, in the light of new information.”

    How can a robot have needs? Does an amoeba have needs? If all life evolved from a replicator that only replicates itself because the energy and entropy balance makes it happen, how do needs, will, and purpose appear? Am I writing on this computer because the energy and entropy balance, given my chemical composition, makes me do it?

  20. Blas: Then evolution is not an educated guess, it is a truth. If it were not a truth you could not be sure that “the evolution model will proceed without altering the fact that it occurred.” So do you maintain your

    “science is an iterative series of educated guesses and tests of their validity. Science does not produce TRVTH. It produces increasingly accurate models.”

    or are you going to make an exception for ToE?

    Facts are not the same as truth. Facts can be refined, even disproved. We send people to prison, even execute them, based on facts. Facts can be adjudicated.

    You may spend your life in the hope that evolution or the solar system or the spherical earth may be disproved. Hope does not alter their status as fact.

  21. petrushka
    You may spend your life in the hope that evolution or the solar system or the spherical earth may be disproved. Hope does not alter their status as fact.

    It is not me who needs to hope that the solar system and spherical earth or change in the genome of the species be disproved. I believe that we can know the truth. You are the one who has to believe that all those things are only educated guesses.

  22. hotshoe:

    What do you think you’re going to win, if you win?

    It is a pleasure for me to discuss with Lizzie, who reminds me of Lisa Simpson.

  23. Blas: It is not me who needs to hope that the solar system and spherical earth or change in the genome of the species be disproved. I believe that we can know the truth. You are the one who has to believe that all those things are only educated guesses.

    You can believe whatever you wish, but it doesn’t change anything. Science converges on reliable knowledge. It refines and expands knowledge, but it doesn’t obtain TRUTH.

    As others have pointed out, science cannot disprove intelligent design. It cannot disprove the possibility that many features of the world are the direct result of divine intervention. Science does not deal in truth.

    What science seeks is regularity and consilience. The convergence of theory and evidence.

  24. petrushka,

    petrushka: You can believe whatever you wish, but it doesn’t change anything. Science converges on reliable knowledge. It refines and expands knowledge, but it doesn’t obtain TRUTH.

    Agree with that. But for you that knowledge is only educated guesses; for me it is Truth limited in space and time.

    petrushka:
    As others have pointed out, science cannot disprove intelligent design. It cannot disprove the possibility that many features of the world are the direct result of divine intervention. Science does not deal in truth.

    Agree; the only difference with my view is that it deals with a part of the Truth.

    petrushka:
    What science seeks is regularity and consilience. The convergence of theory and evidence.

    Agree too, assuming true human logic, the deterministic behavior of the physical world and the constancy of what we call physical laws, at least in part of space and time.
    Probably we will disagree in what we call evidence, but it is a small detail.

  25. It is not “only” guessing.

    It’s an iterative process of hypothesizing and testing. Science converges on more reliable and detailed facts and theories.

    Large outlines are almost never falsified. We still use Newton’s equations for much of our interplanetary travel. What we have that Newton didn’t have is more precise equations that better fit extreme velocities.

    We could say something equivalent about Darwin and his ideas on evolution. His description of natural history is still valid in outline, but we have more accurate dates and much more detail.

    The general principle of common descent is not going to be overturned.

  26. Over at UD, “JDH” offers comment in response to “billmaz” who said:

    “..you know that at some point that will happen.”

    where “that” is the construction of robots with the “exact neuronal interconnections of the brain”. JDH lists his reasons for rejecting this idea:

    NO. I know for a fact that will not happen. I can list a few easy to understand reasons.

    1. We know the brain does not stay in one single wiring state, but has a great amount of “plasticity”. So arranging a complex circuit to be exactly wired as some wiring diagram reflecting the current state of the brain, is not the living brain.

    In any such robotic implementation, the synaptic connections would be virtualized, meaning they are more fully plastic and re-configurable than the neural pathways in the human brain. Any “physical rewiring” the human brain might effect in its synaptic connections is replicable via a remapping at the “synaptic connection” layer of the robot’s operating system. This is not a substantial objection or barrier to the prospect of “that” happening.

    2. Even if we could duplicate the wiring diagram of a current state of the brain, the outputs of a complex circuit is not solely due to its wiring diagram. Undoubtedly, any circuit with same complexity as the brain, would be extremely non-linear and would be dependent on a very large set of inputs. We know from the small set of simple non-linear equations that are actually tractable, that the solution to non-linear equations varies greatly not only on the relationships of the variables, but also according to the initial conditions. Even if the wiring diagram exactly duplicated the brains circuitry, there is just no way to figure out all the complex initial conditions.

    The initial conditions were not specified as a condition for billmaz’s example. A “synthetic brain” of the type conjectured would do what a “real brain” would do given some initially bootstrapping conditions.

    3. We know that no person can invent a program that disobeys its own programming. It is beyond our abilities. The best we can do is pseudo decision making or pseudo randomness. That is not will.

    Real randomness is available – see the high-speed oscillator-based cards that are used in cryptography, ecommerce, military applications, etc. to harness truly random inputs. That’s not a problem. As for “disobeying its own programming”, that’s a tautology; if the machine does it as a result of the program, then by definition, it was “programmed to do it”. If I call some hypothetical function:

    uint result = chooseSubversiveGoal(randomSeed);

    Have I programmed the app to “disobey my orders”? No, because no matter what other goals are ostensibly authoritative, here, such a function would instantiate a command to disobey other commands. It’s still just doing my bidding.

    But this is not any different than humans. Any “disobeying orders” is the result of a supervening command structure we live under; we are ‘programmed’ to disobey, in such a case, “obeying the command to disobey another command”. It’s just as deterministic (or not, depending on how you view the role of stochastic inputs in the process) for humans as it is for my laptop.

    4. There is absolutely no evidence that consciousness comes about by the chaining together of necessary events and random events. Unfortunately those are the only events available to materialism.

    “Necessary events” and “random events” are a poor choice of labels for the events available, but taking “necessary events” to be “deterministic results” and “random events” to be “non-deterministic events”, those two categories exhaust the possibilities. Between the “deterministic events” and “non-deterministic events”, that’s all the events we’ve got available, materialist views or no. What would be an example of an event that is not [ deterministic | non-deterministic ]?

    This strikes me as an appeal to the magico-incoherent notion of “free will”, a “force” of some kind that is somehow both not random and not deterministic.

    Materialists are impossible to argue with because they actually believe that the above argument proposed by billmaz is an argument. What billmaz presented is a non-sensical hope we could do this some day.

    AND… to top it off with an ironic twist, even if it was possible, and a man CREATED a computer that has “…exact neuronal interconnections…” and figured out how to put in the inputs, you would just end up proving that a fantastic intelligence can construct another intelligence. This is an argument for Intelligent Design, not materialism. It would not prove anything about materialism at all because the behavior of the created brain would still need active control of the inputs.

    Even and especially the most ardent critics of ID embrace and acknowledge the capabilities of humans as designers. This is not a point in favor of ID, and the most hard-core eliminative materialist will cheerfully acknowledge the design skills of humans. What a “synthetic human” would show, however, is the impotent and extraneous nature of Arrington’s commitment to a homunculus, the ghost inside his machine. The Cartesian Dualist intuition, which I know from experience can be quite compelling and visceral as an intuition, would be overthrown, rendered inert, at least for those who are open to reasoning against their intuitions.
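The “disobeying its own programming” point in the comment above can be shown in miniature. The function names below are hypothetical, invented for illustration; the point is that a refusal branch, even one driven by a (possibly hardware-derived) random draw, is still fully specified by the program:

```python
import random

def obey(command):
    return "doing: " + command

def sometimes_subversive(command, rng, refusal_rate=0.3):
    # The draw below could come from a true hardware entropy source in a
    # real system; either way, consulting it is an instruction we wrote.
    if rng.random() < refusal_rate:
        # The "disobedience" branch -- itself part of the program.
        return "refusing: " + command
    return obey(command)

rng = random.Random(0)  # seeded so the sketch is reproducible
results = [sometimes_subversive("tidy up", rng) for _ in range(5)]
print(results)
```

The program will sometimes “refuse”, but it never escapes its programming: the refusal is just the other branch we specified, exactly the tautology noted above.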

  27. Anything close to AI, even soft AI, would have to be a learning device that seeks to “predict” or anticipate the future, and which would modify itself based on the outcomes of its predictions.

    By definition it would program itself.

    I think building such a device is approximately as difficult as reinventing life by building a replicator. The specs look straightforward, but implementation is going to be difficult.

  28. petrushka:
    Anything close to AI, even soft AI, would have to be a learning device that seeks to “predict” or anticipate the future, and which would modify itself based on the outcomes of its predictions.

    By definition it would program itself.

    I think building such a device is approximately as difficult as reinventing life by building a replicator. The specs look straightforward, but implementation is going to be difficult.

    I think such a replicator would be much more difficult, but that’s neither here nor there, I suppose. I’m a proponent of strong AI, but as a long time developer, and someone who’s worked in AI, or at least more pragmatic foundations of AI (machine learning, adaptive neural nets, etc.) I have a very healthy respect for the “size of the mountain” that must be climbed to achieve such an implementation. The human brain is so massively parallel compared to anything we’ve yet built, etc. But while a 30,000ft mountain is a hell of a challenge to scale, it’s just “more scaling”, if there is a route of ascent.

    The ID notion of the person I quoted, and many others like him/her, is “can’t get there from here, no way, no how.” I thought his post was illustrative in terms of the strength of the assertion juxtaposed with the conspicuous weakness of the support for his claims.

  29. Lizzie,

    I think this response to Barry is wrong-headed:

    So no, Barry. I, as a materialist (and, it should be said, a cognitive neuroscientist!) do not “believe” that the brain is merely an “organic computer” that is “similar in principle” to the one on your desk. I think it is radically different to the computer on your desk, not least because it does a heck of a lot more than “compute”. It is part of the system of tools that enables me, as organism, to survive, by not only computing what I have to do to reach my goals, but by computing those goals themselves, in the light of current information, seeking further information that may result in further adjustment of those goals, and selecting actions that will enable me to fulfill them.

    I understand that perhaps you put “believe” in quotes as a nod to the idea that your view is not just a random suspicion, but is, rather, informed by the evidence available, but after reading it a couple of times, I take this at face value: you do not believe the brain is an organic computer.

    The soundness of this statement of course hinges on what you mean, precisely, by “computer”. If by computer, you mean “something just like this MacBook Pro” — CPUs, RAM, hard drive, ethernet card, USB controller, etc., well, I can’t argue that the human brain is not structured like that, and won’t. But that’s a trivial form of distinction here, if so. The brain is a computing system just like your MacBook Pro, albeit one with a) a very different hardware architecture, and b) different bootstrapping instructions.

    The core of the problem I have with your paragraph here is the idea that our brains are “not only computing what I have to do to reach those goals, but to compute those goals themselves,…”. I don’t think the “but” in there changes anything. Computing goals in some real-time, adaptational way is interesting, a form of meta-computing, but it’s still computation qua computation. It’s computing about what to compute. That brings the brain toward an architecture that Hofstadter would call “strange loops”, self-referential interactions that produce all kinds of counter-intuitive effects, etc., but meta-computing, or meta-meta-computing, or meta-meta-meta-computing (….) is still pure computation.

    Perhaps you are concentrating on the distributed nature of thinking and cognition, with “thinking as the act of a body”, rather than just the brain? If so, I emphatically support pushing the understanding that a brain in isolation from its body, senses, and an ongoing feedback loop from the external environment can’t be said to think in a human sense. A “brain in a vat” can’t think as a human, because human cognition is a function of the brain interacting with the rest of the body and its inputs/outputs.

    But this is not a break from the computing model. Your MacBook Pro doesn’t have neurotransmitters in its gut to assist in and influence its operations, of course. But that is an intrinsic feature of human thinking, right – and yes, I know full well this is your area of expertise, and not mine. I don’t think that point is controversial at all to you, but if we view human cognition as a “human computer”, rather than just the operations of the brain, I don’t think there’s any other view supported by the evidence than that the human as a system is an “organic computer”. The human has a distinct meta-computing form of computation that serves very high-level imperatives (survive, eat, have sex, etc.) that would be problematic to say the least as the driving goals for a MacBook Pro on my desk, but it at least appears from reading you that meta-computing gets a measure of magic ascribed to it that is not warranted. It is “meta-”, but it is computational, right?
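The “computing about what to compute” idea in the comment above can be sketched directly. The goals and urgency scores below are invented for illustration: a top-level routine scores candidate goals and dispatches to a sub-computation, and nothing non-computational enters at the meta level:

```python
def eat(state):
    state["hunger"] = max(0, state["hunger"] - 3)

def rest(state):
    state["fatigue"] = max(0, state["fatigue"] - 2)

def urgency(state):
    # Meta-computation: rank the available goals given the current state.
    return {"eat": state["hunger"], "rest": state["fatigue"]}

def run(state, steps=6):
    actions = {"eat": eat, "rest": rest}
    chosen = []
    for _ in range(steps):
        scores = urgency(state)
        goal = max(scores, key=scores.get)  # compute *which* goal to pursue
        chosen.append(goal)
        actions[goal](state)                # then compute in pursuit of it
    return chosen

print(run({"hunger": 7, "fatigue": 5}))
# → ['eat', 'rest', 'eat', 'rest', 'eat', 'rest']
```

The goal-selection step is ordinary computation over the same state as everything else, which is the point at issue: “meta-” does not smuggle in anything beyond computation.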

  30. petrushka:
    Anything close to AI. even soft AI, would have to be a learning device that seeks to “predict” or anticipate the future, and which would modify itself based on the outcomes of its predictions.

    By definition it would program itself.

    I think building such a device is approximately as difficult as reinventing life by building a replicator. The specs look straightforward, but implementation is going to be difficult.

    Exactly. I think we will achieve AI, but only when we start incorporating Darwinian algorithms to do it. We aren’t Intelligent enough to Design it otherwise. Moreover, any artificially intelligent entity is going to itself have to employ Darwinian algorithms, because that’s how human intelligence works.

    And my prediction is that when we do produce an AI robot, it will be as difficult to figure out how it does what it does as it is to figure out how we do what we do. And it will take physicians, rather than IT guys, to fix them when they go wrong. Possibly even psychiatrists.

    That’s almost true of ordinary computers now.
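    A minimal sketch of what a “Darwinian algorithm” could mean here (my own toy, not a claim about how real AI will be built): a (1+1) evolutionary loop in which a candidate solution replicates with mutation and selection keeps the variant only if it is at least as fit. The fitness function, counting 1-bits, is purely illustrative.

```python
import random

# (1+1) evolutionary algorithm: replicate with mutation, then select.
# Fitness = number of 1-bits; each bit mutates with probability 1/length.

def evolve(length=20, generations=200, seed=0):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        # Replication with per-bit mutation (b ^ True flips the bit).
        child = [b ^ (rng.random() < 1.0 / length) for b in parent]
        # Selection: keep the child only if it is at least as fit.
        if sum(child) >= sum(parent):
            parent = child
    return sum(parent)  # fitness of the surviving genotype
```

    Nobody writes the winning bitstring down in advance; it is found by the replicate–mutate–select loop, which is petrushka’s point about a device that modifies itself based on the outcomes of its predictions.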

  31. Lizzie: Exactly. I think we will achieve AI, but only when we start incorporating Darwinian algorithms to do it. We aren’t Intelligent enough to Design it otherwise. Moreover, any artificially intelligent entity is going to itself have to employ Darwinian algorithms, because that’s how human intelligence works.

    An algorithm that replicates itself? And what will act as natural selection?

    Lizzie
    And my prediction is that when we do produce an AI robot, it will be as difficult to figure out how it does what it does as it is to figure out how we do what we do. And it will take physicians, rather than IT guys, to fix them when they go wrong. Possibly even psychiatrists.

    That’s almost true of ordinary computers now.

    Having the blueprints and the programs and not knowing what they are going to do? I do not believe it.

  32. Neil Rickert: I doubt that we will ever achieve AI.

    Glad to see someone share my skepticism. I think it will be achieved, but not any time soon.

  33. There’s an old saying among programmers, that computers are a lot dumber than people but a lot smarter than programmers. Many times I have spent weeks trying to figure out WTF could possibly be going on, and I had both the schematics and the source code.

    It’s often said that there is no such thing as a useful, non-trivial program without bugs of some kind. Generally, the symptoms of the error give little or no clue to the nature of the error, don’t show up anywhere near the error in the code, and are frequently “cascade” type failures where a nearly-harmless bug plants a little bomb, some other code goes a little wrong as a result, and yet other code goes WAY wrong as an utterly unintended side-effect of how the previous code messed up.

    And a lot of these bugs are data-dependent – certain specific values (that rarely occur) must show up in specific undesired memory locations at just the right time.

    Anyone who takes not-entirely-reliable hardware, runs million-line programs on it, and thinks they know what it’s going to do has never been there.
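    A deliberately tiny illustration of the data-dependent, cascading kind of failure described above (my own toy, not code from any real system): a guard that silently drops zero readings looks nearly harmless, gives correct-seeming answers on ordinary data, and only misbehaves when rare values line up.

```python
# A "nearly harmless" bug: zeros are treated as missing data and
# skipped. On ordinary inputs the average still looks right; on rare
# inputs the skipped samples cascade into a wildly wrong result (and
# an all-zero input cascades further, into a ZeroDivisionError).

def running_average(readings):
    total, count = 0.0, 0
    for r in readings:
        if r == 0:       # buggy guard: real zeros silently discarded
            continue
        total += r
        count += 1
    return total / count
```

    Here running_average([2, 4, 6]) returns 4.0, as expected; running_average([0, 0, 4]) also returns 4.0 when the true mean is about 1.33, and nothing about the symptom points back at the guard clause.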

  34. I think we’ll get arbitrarily close, so long as we understand that while it will definitely be A, it won’t be I as we know it. As Lizzie keeps pointing out, our concept of intelligence implies a whole-body experience. The AI’s “body” may make its intelligence too alien even to categorize as intelligent.

  35. I think there are a zillion practical applications for artificial intelligence that doesn’t mimic the human kind. I think we will achieve some of those applications within 30 years.

    Unfortunately, the most likely is a Red Queen arms race in security applications.

  36. davehooke:
    What is the brain for then, Byers?

    The brain is a middleman between the soul and the body. It includes in the body the memory. The memory is common to creatures.
    It is only the memory that can break down in human thought.
    The soul, being spiritual, means our thinking is always unaffected by anything that is wrong with our bodies.

  37. Blas,

    “Having the blueprints and the programs and not knowing what they are going to do? I do not believe it”

    It is not just bugs. I very much doubt the programmers of Deep Blue or other top chess-playing programmes could predict their moves in complex situations.

  38. The specs look straightforward, but implementation is going to be difficult.

    And all we’d be doing is generating some cheap knock-off. Plagiarism is easy; coming up with the original draft is the tough part. By generating a copy of a living system, or an electronic brain, we’d have proven absolutely nothing vis-à-vis Design’s capacity to cook up these things from scratch.

  39. petrushka,

    In moments of idle speculation I wonder if it is a consequence of Gödel’s theorem that the human brain is incapable of designing something which can do everything it is capable of. In which case, as Lizzie says, any machine that can duplicate human capabilities would have to evolve rather than be designed (or be designed by something with greater capabilities).

  40. Point taken, although in my defence I will note that what I actually said was:

    I, as a materialist (and, it should be said, a cognitive neuroscientist!) do not “believe” that the brain is merely an “organic computer” that is “similar in principle”, to the one on your desk.

    I do think action is important, though. I think it’s no coincidence organisms that move tend to evolve brains, whereas ones with roots tend not to.

    I see no theoretical bar to AI myself. But I think it will involve things that move around in the world.

  41. AI will have to evolve. Which is why I compare it to OOL in difficulty. Brains are not just a bunch of parallel processors. They are a single regulatory network whose structure has evolved. I think brains are effectively analog computers.

  42. Mark Frank:
    Blas,

    “Having the blueprints and the programs and not knowing what they are going to do? I do not believe it”

    It is not just bugs. I very much doubt the programmers of Deep Blue or other top chess-playing programmes could predict their moves in complex situations.

    They will. It will take a lot of time, but you can follow the options Deep Blue will explore and the weight it will give to each one; then you know which “decision” it will take, because you know the algorithm it is following.
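    The disputed claim is traceability in principle. A bare minimax skeleton (my sketch; real engines like Deep Blue add enormous machinery on top of this) shows what is being appealed to: with a fixed evaluation function and search depth, the search is deterministic, so every choice can in principle be replayed. The practical objection is the size of the tree, not any indeterminacy.

```python
# Plain minimax: deterministic given the same moves() generator,
# evaluate() function, and depth -- so each "decision" is traceable
# in principle, however impractical the trace may be at scale.

def minimax(state, depth, maximizing, moves, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for m in options:
        score, _ = minimax(m, depth - 1, not maximizing, moves, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move
```

    A toy game where a “move” from n is n+1 or n+3 and the score is just n makes the determinism visible: the same call always returns the same (score, move) pair.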

  43. Flint:
    I think we’ll get arbitrarily close, so long as we understand that while it will definitely be A, it won’t be I as we know it. As Lizzie keeps pointing out, our concept of intelligence implies a whole-body experience. The AI’s “body” may make its intelligence too alien even to categorize as intelligent.

    That is not true. We know that humans who need machines to keep them alive and have only their heads working are intelligent. I do not know how much of his body Hawking can feel, but it didn’t affect his intelligence. Our consciousness resides in our brain. The “holistic” approach is a way for Lizzie to avoid the real problems materialists have in explaining intelligence. Those problems are: first, intelligence needs a “will” that is not an answer to “needs” but something that is able to “choose” between “needs”; second, you need the capacity for abstraction and the definition of new concepts. Humans never saw a circle in nature, but some of them could define a circle as the set of points that are at the same distance from one point we call the center.
    There is no machine that can do both things unless programmed by a human.
