AI Skepticism

In another thread, Patrick asked:

If it’s on topic for this blog, I’d be interested in an OP from you discussing why you think strong AI is unlikely.

I’ve now written a post on that at my own blog.  Here I will summarize, and perhaps expand a little on, what I see as the main issues.

As you will see from the post at my blog, I don’t have a problem with the idea that we could create an artificial person.  I see that as possible, at least in principle, although it will likely turn out to be very difficult.  My skepticism about AI comes from seeing computation as too limited.

I see two problems for AI.  The first is a problem of directionality or motivation or purpose, while the second is a problem with data.

Directionality

Interestingly, Patrick’s message, where he asked for this thread, contained a picture of Spock from Star Trek.  As a Star Trek character, Spock was known to be very logical and not at all emotional.  That’s what I think you get with computation.  However, as I see it, something like emotions are actually needed.  They are what would give an artificial person some sense of direction.

To illustrate, consider the problem of learning.  One method that works quite well is what we call “trial and error”.  In machine learning systems, this is called “reinforcement learning”.  And it typically involves having some sort of reward system that can be used to decide whether a trial-and-error step is moving in the right direction.
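
To make the reward idea concrete, here is a minimal trial-and-error learner of the textbook reinforcement-learning kind.  The three-armed bandit task, the step size, and the exploration rate are all illustrative choices of mine, not claims about how brains or any particular AI system work.

```python
import random

# A toy "trial and error" learner: a three-armed bandit with an explicit
# reward signal.  The payoffs, step size, and exploration rate are
# illustrative choices only.
true_payoffs = [0.2, 0.5, 0.8]   # hidden quality of each action
estimates = [0.0, 0.0, 0.0]      # the learner's running estimates
step_size = 0.1
epsilon = 0.1                    # how often to try something at random

for trial in range(1000):
    # Trial: usually pick the action that currently looks best;
    # occasionally explore ("error" is how the learner finds out more).
    if random.random() < epsilon:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])

    # Reward: 1 or 0, drawn according to the hidden payoff probability.
    # This is the signal that says whether the step moved in the right direction.
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0

    # Nudge the estimate toward the observed reward.
    estimates[action] += step_size * (reward - estimates[action])

print(estimates)   # the estimate for action 2 should end up highest
```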

Looking at us humans, we have a number of such systems.  We have pain avoidance, food seeking, pleasure seeking, curiosity, and emotions.  In the machine learning lab, special purpose reward systems can be set up for particular learning tasks.  But an artificial person would need something more general in order to support a general learning ability.  And I doubt that can be done with computation alone.

Here’s a question that I wonder about.  Is a simple motivational system (or reward system) sufficient?  Or do we need a multi-dimensional reward system if the artificial person is to have a multi-dimensional learning ability?  I am inclined to think that we need a multi-dimensional reward system, but that’s mostly a guess.

The data problem

A computer works with data.  But the more I study the problems of learning and knowledge, the more that I become persuaded that there isn’t any data to compute with.  AI folk expect input sensors to supply data.  But it looks to me as if that would be pretty much meaningless noise.  In order to have meaningful data, the artificial person would have to find (perhaps invent) ways to get its own data (using those input sensors).

If I am right about that, then computation isn’t that important.  The problem is in getting useful data in the first place, rather than in doing computations on data received passively.

My conclusion

A system built of logic gates does not seem to be what is needed.  Instead, I have concluded that we need a system built out of homeostatic processes.

165 thoughts on “AI Skepticism”

  1. Neil Rickert: The learning modes that you are contemplating are not the learning modes that I am contemplating.

    OK. But from what you say below, here, I’d agree that strong AI is impossible based on the rudimentary forms of learning heuristics you are limiting our “artificial human” to.

    The traditional AI view is that learning is pattern discovery, or something of the kind. This is vaguely like classical conditioning from psychology, though with a different vocabulary.

    Here’s the problem, then. In AI, machine learning certainly does rely heavily on pattern discovery and recognition, but these are low level subsystems that drive higher level heuristics, just like (surprise!) happens in human brains. Humans rely heavily on pattern recognition infrastructure, and human learning depends on the products of those processes, but that isn’t the sum total of human learning, not nearly, right?

    My view of learning is more like the “perceptual learning” from Eleanor Gibson in psychology, which plays a role in J.J. Gibson’s theory of direct perception.

    Here’s a comment that sounds pretty dismissive, but it’s worth considering: until you/I can build a model that performs against tests, one that you can implement, “your view” or “my view” isn’t worth very much epistemically. That is not to say that J.J. Gibson may not be right on, but if you are convinced by some means other than seeing a neuronal model actually perform that reifies his ideas, then we’re busted down to the level of theology and raw intuition.

    Note that these models (Gibson’s, anyone’s) don’t have to be “humanesque”, by which I mean they don’t have to reverse engineer how human pattern recognition or higher-level learning works. They just have to translate input stimuli into classified internal states and outputs that show what we call “learning” (e.g. circle-ish images A and B are grouped apart from square-ish images C and D).

    Is that something that the thinkers you defer to on this provide? If not, I guess I’d politely suggest that such thinking has very little equity in this kind of inquiry.

    As for the rest of your comment – I suspect some miscommunication. I took your “goal vectors” to be reward systems for learning particular goals, and was asking how you could break out of those goals. But I now suspect that I might have misunderstood your original point.

    There has to be some bootstrapping somewhere, of course. Humans, as animals, are born with innate core goals — feed, fuck, fool around, we might say, coarsely. Any artificial brain/mind must have “prime directives” that are fundamental. How those top level goals are pursued is wide open, and evolution provides a compelling lesson in the dangers of “brittle strategies”.

    Given the high-level priorities, the architectural requirement is for a heuristic that is very flexible, plastic, and general. “Meta-learning” might be a helpful way to characterize it, or even “meta-meta-learning”. Heuristics that incorporate internal feedback loops that learn about their own learning activities and, one step higher up, learn about the learning activities of the learning-about-learning system.
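
    One crude way to picture that (purely a sketch; the toy task and the rule for tuning the step size are my own inventions, not anyone’s published method) is an outer loop that watches how well an inner learning loop is doing and adjusts that inner loop’s own parameters:

```python
def inner_learning(lr, steps=50):
    """Inner loop: learn the x that minimizes (x - 3)^2, using step size lr."""
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3.0)
        x -= lr * grad
    return (x - 3.0) ** 2          # final error: how well the inner loop learned

# Outer loop: "learning about learning". Adjust the inner loop's own
# step size according to whether the last adjustment helped.
lr, delta = 0.01, 0.005
best_err = inner_learning(lr)
for _ in range(20):
    trial_lr = lr + delta
    err = inner_learning(trial_lr)
    if err < best_err:             # the tweak helped: keep it
        lr, best_err = trial_lr, err
    else:                          # it hurt: back off and try the other direction
        delta = -delta / 2

print(lr, best_err)
```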

  2. Sorry for the delay in responding, eigenstate…been a little under the weather.

    eigenstate: This oversimplifies the goal matrix, doesn’t it? For example, if “seek pleasure” is one of the organism’s top level goals, and one has an abundance of energy, it’s not “frivolous” for a whale to, say, body surf (assuming here that that fulfills the “seek pleasure” goal, somehow). It’s goal pursuit. “Keep yourself nourished” may outrank “seek pleasure” when those two are opposed — I don’t imagine a malnourished and hungry whale does much body surfing for fun — but it’s an oversimplification to place “eat” as the sole measure driving all other decisions and actions. At some points, fuel is not a problem, and other goals become the focus of action.

    My skepticism here arises from doing ecological management in a wildlife preserve and doing a little work with animals in Africa. In my (albeit limited) anecdotal experience, I just don’t think that “seek pleasure” is a top level goal. I really think that “seek pleasure” is a side effect of having drives that have evolved to motivate and sustain behavior. The goal is the behavior; the drive is what gives us impetus to pursue certain behaviors. “Hunger”, for example, is not a goal – it’s a drive. Similarly, “seek pleasure” is an offshoot of the drive “pleasure”. The goal could be any behavior for which the organism experiences pleasure (such as sex or eating). The key distinction, at least for me, is that “seeking” pleasure is then the result of a complex system and does not inherently exist as part of all life. Few insects, for instance, appear to “seek pleasure”.

    Just to attach this back to the thread topic, that’s how a software implementation would work. Always monitor critical resource levels, and escalate those goals when good operations are at risk. But when the (digital) organism is resourced, then select down the list of goals and “give them some CPU time” as it were, some priority in the choice/action loop. Seeking pleasure, or diversion, is fully logical, non-frivolous in the strict sense, when the organism is adequately resourced and other exigent priorities do not supersede it.

    Yes, I understand this model for approach to AI. I just think it’s limited and ultimately will not come close to the dynamic fluidity, flexibility, and diversity of biological intelligence.

    I understand your point, but I think you’re committed to a transcendental mistake here concerning benefits. There is benefit in those behaviors because they are not servicing other (grueling or demanding) behaviors. I’m fine with saying those behaviors occur “in spite of”, say, gathering food to eat, but that is precisely why those behaviors are a priority. They are beneficial and satisfying in a psychological way.

    Oh, I agree! The issue, as I see it, is how one approaches such beneficial behaviors from a modeling standpoint. I see them as having to emerge and NOT BE modeled as top-level goals. I see modeling them as top-level goals as then limiting the dynamics of the AI.

    To take your point at face value, we’d have to say that “seeking pleasure”, and the various kinds of recreation and diversion that flow from it, simply could not be a basic goal for the organism. I can’t see any basis for such a prohibition, and I think the evidence from humans and other animals is replete with examples that show that pleasure seeking, in all sorts of manifestations, is a core driver of action, if one that is subordinated when existential priorities need attention/action.

    I think that perhaps some organisms have developed a dependency (again, as a side effect) on pleasure because the drivers have been successful in certain environments, and that some organisms have gotten to a point where the goal is not as important as it once was. As such, gaining the pleasure is less costly than it used to be and thus can become a goal on its own. However, such behaviors still come with a cost. Be that as it may, I still don’t see pleasure as anything more than a goal co-opted by some organisms in certain specific environments. It therefore makes no sense to me to approach AI with pleasure as a goal from the beginning of the planning stage, including it as a top-tier goal, rather than hoping that pleasure perhaps evolves as a goal in some AI instances.

    Ahh, this seems to be the key point of disagreement, then. I agree that the type of behaviors humans and other animals engage in for diversion or recreation is likely to be a by-product of the use of other capabilities that are needed for survival and the adaptive demands of the environment. But the priority of pleasure-seeking itself is not a by-product in that sense, but rather a primary dynamic for the organism.

    Yes…this is our point of disagreement. I just don’t see that.

    If that’s not clear, just think of “pleasure” as the broad “carrot” to the “stick” of pain and suffering. Organisms develop affinities for some behaviors and experiences and aversions to others simply by selective force; organisms that don’t derive pleasure and satisfaction from eating (especially when hungry!) don’t fare well. Competing organisms that do derive such pleasure fare better, by comparison.

    Quite so. The thing is – at least this has been my experience – most organisms do not appear to engage in behaviors simply for repeated pleasure. Most organisms appear to stop engaging in behaviors regardless of the continued pleasure sensation. Only a few – and of those, the majority appear to be mammals – engage in pleasure behavior just to experience the pleasure. As such, I don’t believe that it is a top-tier goal, but rather an offshoot of the evolutionary process.

    I know that’s not a revelation to you, but it should be a reminder that diversion and recreation and just goofing off are not mysterious members of the priority list. They are natural and predictable outcomes of states where “infrastructure” priorities (eating, shelter, etc.) are satisfied, and the organism has excess resources (energy, cognitive cycles, etc.).

    From a software standpoint, it’s another isomorphism to biological architecture. Food? Check. Shelter? Check. Healthy? Check. Now the software priority becomes “choosing a priority”, and it moves down the list, or even perhaps does some stochastic sampling from a list of non-emergency priorities. From a computing standpoint, this is choosing what to do with “idle cycles”. Cycling in a tight “wait loop” without doing anything may conserve valuable energy, but foregoes the cognitive and learning and therapeutic benefits of other available tasks. A software developer understands that “you have to do something, can’t just freeze”, if you want to optimize against the goal set.
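
    As a sketch of that choice/action loop (all of the goal names, state variables, and thresholds are invented for illustration):

```python
import random

# Invented internal state, goals, and thresholds, purely for illustration.
state = {"energy": 0.9, "sheltered": True, "health": 0.95}

critical_goals = [
    # (goal, test for "this needs attention now")
    ("find_food",    lambda s: s["energy"] < 0.3),
    ("find_shelter", lambda s: not s["sheltered"]),
    ("recover",      lambda s: s["health"] < 0.5),
]

non_emergency_goals = ["explore", "play", "groom", "rest"]

def choose_goal(s):
    # Always monitor the critical resource levels first, and escalate
    # whichever critical goal is currently at risk.
    for goal, at_risk in critical_goals:
        if at_risk(s):
            return goal
    # Otherwise the organism is adequately resourced: give the other
    # goals some CPU time, here by stochastic sampling from the list.
    return random.choice(non_emergency_goals)

print(choose_goal(state))                       # e.g. "play"
print(choose_goal({**state, "energy": 0.1}))    # "find_food"
```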

    But then you’d have dragonflies and mosquitoes engaging in such behaviors, and we just don’t see that in nature. I therefore have my doubts about approaching AI from a priority list perspective.

    If the test is to build “software minds” that are as close to human (or animal) minds as possible in their behaviors and dynamics, then by definition, we cannot do better than nature itself. That is the standard we are judging by! And with that target (emulating human minds), the project is not so much designing a black box that matches inputs with outputs that resemble human minds, but implementing biological architectures in non-biological frameworks — silicon.

    We don’t need to design so much as implement natural designs that we see in our physiology, in a non-biological format. That’s a huge practical challenge, but it’s not conceptually intractable.

    My point there was that it would not be different in the sense that mattered for our discussion here. If an “artificial human” is made of carbon fiber, wire and transistors, that is “different than biology”, but does not disqualify it from the comparisons we care about — what kinds of decisions does it make? How does it learn? What does it know and remember, etc.

    All quite true. I don’t see the issue as a limit of the medium.

    I don’t think the realizations about the practical challenges are a backing down at all in terms of the ultimate success of the project. Rather, it’s just a lament that what many of us thought might be practicable in our lifetimes is not, and not nearly. I don’t think it’s any less inevitable, though you apparently do. Really important milestones are just many more decades down the road than was supposed back in the day…

    Very true I’m sorry to say. But hey…I just read that we are only a few years from an artificial internal kidney that’s about the size of an iPhone, so you know…there are some cool advancements out there. I certainly never thought that THAT would happen in my lifetime.

    There are two discrete variables at work here. P, which is the probability that strong AI can/will obtain, rises and has continued to rise with everything we gain in knowledge about humans and minds. D, which is the “degree of difficulty” in terms of practically realizing strong AI, goes up and down, but has conspicuously gone up in recent years as we realize how astoundingly complex the brain is, and not just the brain, but its integration with the rest of the body.

    Yep.

    You’re asking how P can increase while D goes up, too. My answer is they are independent variables. The more we learn, the more certain strong AI becomes as an achievable outcome, and the farther out on the timeline our anticipation of practical implementations of strong AI goes.

    OK…I think I see your point here. It just sounds counterintuitive to me.

    Think of this as analogous to us discussing “Can the great mountain over the horizon be climbed?” As we explore and research, we may simultaneously become more and more confident that it can indeed be scaled, while we keep upping our estimates of how long it will take and how much resources it will demand.

    Hmmm…that’s not really helping. All I can think of is, “man…there’s more sheer cliffs than we ever realized when we were farther away and that weather really does change much more severely than we originally thought. The logistics just keep getting more and more insurmountable the more we learn…”

    Remember that strong AI is controversial primarily due to skeptics doubting that it is possible at all. The “time to delivery” question, the practical question, is subordinated to that, and the argument is engaged over its plausibility in principle. The technology of strong AI is fascinating to me, but the debate is whether it’s even possible, and many say it’s not.

    I don’t disagree. As much as I like the topic, though, I’m not invested in the timelines or estimates of when, how and how much such a project will demand. My interest is really in providing a counter to B-type and C-type, which in my experience are pervasive.

    And on this we agree. I don’t see strong AI as insurmountable because of the resources or time.

  3. keiths: But I don’t think that the manipulation of symbols is an immaterial operation. Where did you get that idea?

    Symbols are inherently abstract, so immaterial. Marks (such as a pencil mark on paper) are material, but they are not symbols. We may use them as implementation details in our use of symbols. There is nothing in the mark that says it is a symbol or that says what it symbolizes.

  4. eigenstate: Here’s the problem, then. In AI, machine learning certainly does rely heavily on pattern discovery and recognition, but these are low level subsystems that drive higher level heuristics, just like (surprise!) happens in human brains. Humans rely heavily on pattern recognition infrastructure, and human learning depends on the products of those processes, but that isn’t the sum total of human learning, not nearly, right?

    I already have a very different view of human cognition than what you have expressed.

    Before we talk of “pattern recognition” we need to know what we mean by “pattern” and what we mean by “recognition.”

    Let’s divide up the question of intelligence:

    Stage 1: getting to the intelligence of a dog (or cat, or similar)
    Stage 2: getting from the intelligence of a dog to that of a human.

    AI might well be capable of stage 2. It is stage 1 that I see as hard, and as outside the scope of computation.

    Here’s a comment that sounds pretty dismissive, but it’s worth considering: until you/I can build a model that performs against tests, one that you can implement, “your view” or “my view” isn’t worth very much epistemically. That is not to say that J.J. Gibson may not be right on, but if you are convinced by some means other than seeing a neuronal model actually perform that reifies his ideas, then we’re busted down to the level of theology and raw intuition.

    It’s a fair criticism of Gibson that he does not get into implementation. That leads to a lot of misunderstanding, and perhaps I am also misunderstanding him. In any case, my views don’t come from Gibson. They come from me, and I have been concerned about what is implementable from the start. Someone later pointed me to Gibson, and it turned out that I was indeed coming up with similar views at a gross level of description.

  5. keiths: A digital thermostat that can adjust the inside temperature based on the time of day is exercising much more ‘judgment’ than a miscalibrated temperature sensor!

    A digital thermostat that does not contain or access a temperature sensor does nothing interesting.

    Neil:

    Where does “correct” come from?

    ‘Correct’ implementations give the desired behavior.

    I see that you have evaded the question. So where do desires (as in “desired behavior”) come from?

    I expect you will evade that, too. But that’s how AI discussions go. A question leads to another question which leads to another question. You start with abstract propositions, which are inherently immaterial. And when a question is raised, you head down an infinite regress of unanswered questions.

    For the fourth time:

    What specific homeostatic processes involved in intelligence cannot be implemented digitally, in your opinion?

    WTF. Where did I say anything about “specific homeostatic processes”? It is the non-specificity that is important for general learning.

    You are making an argument that is often made. When I suggest homeostasis is useful for learning, the counter claim is that a computer can emulate everything. Well, yes it can. But that requires that a lot of knowledge be preprogrammed into the computer for the particular emulation. And the knowledge that you have to program likely includes what the computer was going to learn. No, the ability of a human to program knowledge into a computer does not demonstrate that the computer has what is required to be an autonomous general learner.

  6. I tend to agree that having AI pass the Turing test is probably easier than having AI that could achieve the behavioral repertoire of a cat or dog.

    I stand to be corrected by research and evidence, but I think brain architecture is every bit as convoluted as genomes. I don’t think it will be possible to deconstruct the function of brains, any more than it is possible to predict the results of genetic changes.

    It’s all emergent. I have seen nothing to indicate we are at the cusp of being able to predict emergent properties or behavior.

    Getting back to the point, computers excel in abstract reasoning. They are more consistent and faster than humans. That’s useful, because much of our social and technological infrastructure is built on abstract reasoning.

    It’s the underlying stuff that we don’t understand.

  7. Neil,

    Symbols are inherently abstract, so immaterial. Marks (such as a pencil mark on paper) are material, but they are not symbols. We may use them as implementation details in our use of symbols. There is nothing in the mark that says it is a symbol or that says what it symbolizes.

    Suppose I see a red light while driving, so I apply the brake and bring my car to a stop. At what point (if any) in the chain of events from ‘light turns red’ to ‘car comes to a stop’ did something non-physical happen? What was the non-physical event (or events)?

  8. Nothing non-physical happened, but some physical chains of events could be called emergent events.

  9. Suppose I see a red light while driving, … What was the non-physical event (or events)?

    Seeing the red light is non-physical. It presumably supervenes on the physical. While physics can explain the optics and the wavelength sensitivities, the actual “seeing” part is outside what we expect physics to account for.

    That seeing is an intentional act, rather than a mechanical act, is part of what keeps it non-physical. A purely physical account of what happened would describe what could be considered mechanical acts, but would leave out the intentional acts.

  10. Neil,

    Seeing the red light is non-physical. It presumably supervenes on the physical. While physics can explain the optics and the wavelength sensitivities, the actual “seeing” part is outside what we expect physics to account for.

    The entire chain of events can be described in physical terms. The idea that a red light symbolizes “stop” is just shorthand for the fact that my brain will cause my body to take certain actions, such as pressing the brake pedal, when the light turns red.

  11. The entire chain of events can be described in physical terms.

    If you give only a mechanical sequence of events, then that can be described in physical terms. However, most people think the intentional acts are an important part of the sequence of events, and if you include those then I doubt that there is an explanation using only physical terms.

  12. Neil,

    If you give only a mechanical sequence of events, then that can be described in physical terms. However, most people think the intentional acts are an important part of the sequence of events, and if you include those then I doubt that there is an explanation using only physical terms.

    My point is that the physical sequence of events is sufficient to explain everything that physically happens.

    The only way to include my mental states (such as my belief that a red light symbolizes ‘stop’) in the chain of events is either a) to regard them as epiphenomenal and causally inefficacious, or b) to conclude that mental states are causally efficacious only because they reduce to physical brain states.

    If you insist that mental states are immaterial, you deprive them of a causal role (unless you believe that immaterial mental states somehow push atoms around in the brain).

  13. keiths: My point is that the physical sequence of events is sufficient to explain everything that physically happens.

    I don’t have a problem with that.

    The only way to include my mental states …

    I’m actually skeptical about a lot of mental state talk. I had been discussing intentional actions, such as seeing. I don’t think those are the same as mental states.

    I’m not sure why you are so hung up on the need to say that everything is physical. Somehow the discussion has gone off-track.

  14. Neil,

    I’m not sure why you are so hung up on the need to say that everything is physical. Somehow the discussion has gone off-track.

    Remember, you brought it up when you labeled me a dualist for some odd reason:

    In my book, that makes you a dualist. You see intelligence in the mindless manipulation of meaningless marks. Or, as it is often described, you see intelligence in the manipulation of abstract symbols. So apparently, you take intelligence to be a purely immaterial operation on abstract symbols, and the material part (the sensing, the thermodynamic actions of the air conditioner and furnace) don’t count. What is not dualist about that?

    Now you seem to understand that I’m not a dualist, but you’re still insisting that symbols are immaterial:

    Symbols are inherently abstract, so immaterial. Marks (such as a pencil mark on paper) are material, but they are not symbols. We may use them as implementation details in our use of symbols. There is nothing in the mark that says it is a symbol or that says what it symbolizes.

    I am arguing that if the complete causal chain from ‘red light’ to ‘foot on brake’ can be expressed in physical terms, with no assistance from any immaterial entities, then either:

    a) the symbol and its interpretation are epiphenomenal, or
    b) the symbol and its interpretation are ultimately physical phenomena.

    I believe that (b) is true. A red light is truly a symbol meaning ‘stop’. It has intentionality. When a driver sees it, interprets it, and puts her foot on the brake, bringing her car to a halt, the entire process can be described in physical terms without reference to anything immaterial.

  15. First off, thank you, Neil, for posting this. Unfortunately, real life work demands spiked just as you did, so I’m still playing catch up. This did catch my eye, though.

    Symbols are inherently abstract, so immaterial.

    Like keiths, I don’t understand what you mean here. Symbols are abstract, by some definitions, but that does not make them immaterial. At no point when a symbol is conceptualized or used is it not instantiated in some material form (chemical and electrical patterns, sound waves, light waves, marks on paper, etc.).

    Perhaps you are using a different definition than I am, but I see no distinction between “immaterial” and “non-existent”.

  16. Symbols are physical, but they have emergent properties that cannot be predicted from their constituent parts. They are a convergence of physical things and processes that have complex effects only in context.

  17. Patrick: Like keiths, I don’t understand what you mean here. Symbols are abstract, by some definitions, but that does not make them immaterial. At no point when a symbol is conceptualized or used is it not instantiated in some material form (chemical and electrical patterns, sound waves, light waves, marks on paper, etc.).

    I have had this discussion many times, mostly on Usenet.

    The way that we use symbols (the way that we talk about symbols) requires that they be abstract and thus immaterial. My preference is to say that symbols are abstract, but we physically represent those symbols. So I see the symbols as immaterial, but their physical representations aren’t.

    Suppose that I want to copy some data from dynamic RAM to my disk drive. In RAM, that symbolic data is represented as electrostatic charges. If the data is the physical representation, then to copy I would have to copy those electrostatic charges into the disk drive. And that’s a problem, because electrostatic charges don’t do anything useful in a disk drive.

    If, on the other hand, the symbolic data is immaterial and abstract, then I copy it by looking at the electrostatic charges to determine which symbols are represented, and I create new magnetic representations of those same symbols on the disk drive. I can reasonably be said to be copying the immaterial symbols, but not their physical representations.
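
    The same point shows up in the most ordinary copy operation. In this trivial sketch nothing ever moves the charges anywhere; the program reads off which symbols the charges represent and arranges for those same symbols to get a different physical representation in a file:

```python
# The bytes object below is represented in RAM as electrostatic charges.
data = bytes([0x48, 0x65, 0x6C, 0x6C, 0x6F])   # the symbols for "Hello"

# "Copying to disk" never moves those charges anywhere.  The program reads
# off which symbols the charges represent, and the operating system and
# drive give those same symbols a new representation in the file.
with open("copy.bin", "wb") as f:
    f.write(data)

# Reading it back recovers the same symbols from a different physical medium.
with open("copy.bin", "rb") as f:
    assert f.read() == data
```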

    The language we use for discussing computation is very much a language of symbols, rather than a language of their representations. We prefer that language of symbols because it allows us to easily discuss what is important for computation, and not get bogged down in physical details.

    That we discuss computation in terms of immaterial symbols does not, in itself, make us dualists. It is just a convenient way of discussing the issues that we want to deal with.

    If we want to go physical, and discuss what computers do in physical terms, then it gets a lot messier. But we wouldn’t really be discussing computation. We would be discussing electronics and electrical switching.

    Let’s look at an ethernet card. Described computationally, it receives a sequence of bits (symbols) which it places in memory. Looked at physically, it monitors the signal line, looking for signal transitions. When it sees a particular pattern of transitions, it starts timing so that it can count off bits from the signal level. It also continues to monitor transitions, and feeds them to a phase locked loop device which generates a signal to speed up or slow down the local clock. The purpose is to keep the receiver clock synchronized with evidence about the transmitter clock that is recoverable from the signal.

    In short, the way that an ethernet card works is a bag of tricks.

    I am saying that much of what makes human cognition work is also a bag of tricks implemented biologically. And, simply put, that’s my basic disagreement with AI. It wants to credit cognition as arising from the manipulation of immaterial (i.e. abstract) symbols, while I want to give most of the credit to the biological invention of a superior bag of tricks. The AI folk want to put the bags of tricks in peripheral devices that they deem unimportant. I see them as central.

    I’ll still say that keiths is a dualist. Whenever I have tried to bring up issues relating to the bag of tricks, he wants to reject that idea, and instead insist that the important part is in the abstract symbolic processing.

    Back to the ethernet card. There is a range of incoming signals that would all be recognized as a 0 bit. When you lump a bunch of things together, and treat them the same, that’s categorization. But keiths didn’t want to discuss categorization when I raised it as part of how a sensor works.

    AI, with its emphasis on abstract symbols, is solipsistic. The world is not relevant to computation. And I could make the same criticism of analytic philosophy. Human cognition, human intelligence, human consciousness are all about the world. It’s the bag of tricks, not the abstract symbol manipulation, that connects us to the world.

    The particular bag of tricks used in an ethernet card does not matter. And the particulars of the bag of tricks used in biology probably aren’t all that important either. What really matters is that biological systems (i.e. individual organisms) are inventive enough to develop their own bags of tricks to solve their own problems. And that’s where the intelligence really lies — in that inventiveness. It is inventiveness with hardware interfaces to reality, rather than inventiveness with methods of computation, that I see as important.

  18. Neil,

    I definitely want to discuss your “bag of tricks” thoughts, but that will probably have to wait until the weekend when I have time to read this whole thread. With respect to the concept of the “immaterial”, though:

    Neil Rickert:
    The way that we use symbols (the way that we talk about symbols) requires that they be abstract and thus immaterial.

    With respect, you’re just repeating your claim. What, exactly, do you mean by “immaterial” in this context?

    My preference is to say that symbols are abstract, but we physically represent those symbols. So I see the symbols as immaterial, but their physical representations aren’t.

    How does that differ from saying “I see the symbols as non-existent until represented physically.”?

    Suppose that I want to copy some data from dynamic RAM to my disk drive. In RAM, that symbolic data is represented as electrostatic charges. If the data is the physical representation, then to copy I would have to copy those electrostatic charges into the disk drive. And that’s a problem, because electrostatic charges don’t do anything useful in a disk drive.

    If, on the other hand, the symbolic data is immaterial and abstract, then I copy it by looking at the electrostatic charges to determine which symbols are represented, and I create new magnetic representations of those same symbols on the disk drive. I can reasonably be said to be copying the immaterial symbols, but not their physical representations.

    I see it as translating a pattern from one form to another. At no point in the process of translation is anything non-physical taking place. The pattern doesn’t exist independently of its representation, either in RAM or on the disk, unless you’re suggesting that Platonic ideals are somehow real.

    I have the distinct impression that I’m missing your point, possibly because we have significantly different definitions of the terms we’re using. If you can explain what you mean by “immaterial”, it might help me communicate more effectively with you.

  19. There is nothing abstract about symbols except that certain arrangements of matter have emergent effects in certain contexts.

    This is not really any different from the argument we had with upright biped and semiosis.

    To say that symbols are always embodied in matter does not mean that knowledge of the properties of matter allows us to predict the effects of symbols.

    I think of reductionism as something like time. The arrow of reduction only goes one way.

  20. Patrick: What, exactly, do you mean by “immaterial” in this context?

    I would think that obvious enough. And you seem to understand it. Trying to define it would not help.

    Maybe you should ask yourself why you find it so troubling that there could be immaterial things. It’s not as if I am proposing that it is made of some dubious immaterial substance. All I am saying is that what makes something a symbol has no clear physical characterization.

    How does that differ from saying “I see the symbols as non-existent until represented physically.”?

    Symbols exist, even without representation. A mathematician will tell you that the number zero exists even if not represented anywhere. Its existence is not dependent on it being represented.

    I see it as translating a pattern from one form to another.

    You will have a devil of a job coming up with a suitable definition of “pattern”, particularly so if you won’t allow patterns to be immaterial.

    At no point in the process of translation is anything non-physical taking place.

    I think I’ll have to disagree with that, too. In a purely physical description of what is happening, nothing that is happening is copying. It is only in the abstract descriptions that copying occurs.

    Mathematical operations, such as addition, are not physical. We sometimes call them logical operations, and that’s partly to distinguish them from physical operations.

    Some mathematicians say that mathematical objects exist in a platonic reality, which would make them immaterial. Others say that mathematical objects are useful fictions. I’m a fictionalist. And I’m a fictionalist about symbols. If symbols are useful fictions, then why would it matter that they are immaterial?

    I’ve mentioned that I’m a kind of behaviorist. What makes something a symbol is our behavior with respect to that “thing”.

  21. Neil,

    I’m a fictionalist. And I’m a fictionalist about symbols. If symbols are useful fictions, then why would it matter that they are immaterial?

    So to you, symbols are nonexistent immaterial entities that we introduce just for convenience. If so, then they can’t play an actual causal role in any real chain of events, including those involving brains and computers, correct?

  22. keiths: So to you, symbols are nonexistent immaterial entities that we introduce just for convenience.

    Yes, I can agree with that.

    If so, then they can’t play an actual causal role in any real chain of events, including those involving brains and computers, correct?

    I’m inclined to disagree with that.

    We can describe events and their causes in mechanical terms, and the symbols would have no role in that description. But we can also describe the same events and causes in a more intentional language that does involve symbols. These are parallel descriptions. If there is causation according to one, it might be appropriate to say that there is causation in the other. And whether “real” applies depends on what we mean by “real”.

    Generally speaking, people prefer the intentional description. Philosophy is largely built on the use of intentional descriptions. And even our mechanical descriptions are not pure, in that they cannot avoid some use of intentional language.

    I’m not trying to make this seem mysterious. The topic, however, is related to human cognition. Humans do a lot with symbols. And that means that we symbolize. Computation, as used in AI, starts with symbols. It does not symbolize. It leaves the symbolizing to peripheral devices that are treated as unimportant. I am trying to make the point that how we symbolize is important and if we are serious about artificial intelligence then we cannot treat the symbolization as an unimportant implementation detail.

  23. Neil,

    We can describe events and their causes in mechanical terms, and the symbols would have no role in that description. But we can also describe the same events and causes in a more intentional language that does involve symbols. These are parallel descriptions. If there is causation according to one, it might be appropriate to say that there is causation in the other. And whether “real” applies depends on what we mean by “real”.

    That paragraph is actually very close to my view. It’s not that symbols are unreal, immaterial, or acausal — they are quite real, quite physical and they have a definite causal role. It’s just that their presence is more or less obvious depending on the level of abstraction at which you examine the system.

    Consider the ‘light turns red, driver brings car to a stop’ system. At the level of interacting atoms, you could analyze and model that entire system without even realizing that anything symbolic is happening. If you model things at the psychological level, however, it’s obvious that symbols are involved.

    Humans do a lot with symbols. And that means that we symbolize. Computation, as used in AI, starts with symbols. It does not symbolize. It leaves the symbolizing to peripheral devices that are treated as unimportant. I am trying to make the point that how we symbolize is important and if we are serious about artificial intelligence then we cannot treat the symbolization as an unimportant implementation detail.

    This is the part I don’t understand. You seem to be saying that the processing of symbols is relatively unimportant, but that the conversion of real world signals into symbols (and vice-versa) is extremely important. But the latter, in the case of a computer system, is just analog-to-digital conversion (or digital-to-analog in the other direction). We already have that, but it doesn’t confer intelligence on the machines possessing it.

    Do you mean something else by ‘symbolizing’?

  24. I’m okay with saying that the symbols supervene on the physical. But I don’t see that they are actually physical.

    You seem to be saying that the processing of symbols is relatively unimportant, but that the conversion of real world signals into symbols (and vice-versa) is extremely important. But the latter, in the case of a computer system, is just analog-to-digital conversion (or digital-to-analog in the other direction).

    In the case of computers, most of the hard work is done by humans. For us, there is a lot more to it than A2D conversion. When we are discussing artificial persons, they cannot rely on humans to do the heavy lifting.

  25. Neil,

    In the case of computers, most of the hard work is done by humans. For us, there is a lot more to it than A2D conversion. When we are discussing artificial persons, they cannot rely on humans to do the heavy lifting.

    What’s still unclear to me is this: You seem to think that something special, something ‘homeostatic’, is going on in humans, particularly at the point where they interact with the environment.

    What is that special something, and why do you think that it cannot be achieved via computation or an appropriate system of logic gates?

  26. keiths: What’s still unclear to me is this: You seem to think that something special, something ‘homeostatic’, is going on in humans, particularly at the point where they interact with the environment.

    I had thought that I explained that in the OP. Homeostasis provides the basis for a “reward system” that can be used for evaluating decisions. That’s needed to provide some sort of direction to learning, to provide the “error” feedback for trial and error learning.

    A homeostatic system is already, by virtue of its homeostasis, inward looking. It is self-monitoring, and adapting its behavior to maintain stasis. So there’s already a tiny bit of self-awareness there.

    The actions of a homeostatic system can be reasonably seen as meaningful to that system, in that those actions serve to maintain stasis (or, in effect, maintain existence).

    When the environment impinges on the homeostatic system, it may tend to upset the stasis. The homeostat acts to offset that environmental influence. By monitoring its own actions, it can use those actions as a source of information about the way that the environment is impinging on the homeostat. And since its own actions can be reasonably seen as meaningful, the homeostat now has meaningful information about the environment. The idea is that the use of meaningful information grows outward from what is already meaningful. So, if the system can learn more about the world, what it learns will be automatically meaningful. There is no intentionality problem with this form of learning.

    The logic gate, by contrast, is a great tool for the engineer. The logic gate is made to be relatively impervious to what is happening in the environment, with the single exception of its defined inputs. That’s great for allowing the engineer to exert control by means of the logic gate. But the consequence is that the logic gate itself has no external world, other than what the engineer gives it. The actions of the logic gate are meaningful to the engineer, but have no obvious meaning to the logic gate itself. So the logic gate will always seem to be acting mechanically on meaningless data. There’s no basis for meaning there, and the intentionality problem seems insoluble with logic gates.

    Hmm, that’s getting too philosophical. If I look at it in terms of wanting to design a system that could learn, then the “reward system” aspect from my first paragraph above is key. A logic gate can only make the decisions that it is programmed to make. So it seems likely that it can only learn what it is programmed to learn. A homeostat has its own autonomous ability to make decisions, based on its internal reward system that is its homeostasis.
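
    If it helps, here is the kind of toy homeostat I have in mind (a sketch only; the set point, the disturbance, and the adjustment rule are all invented for illustration):

```python
import random

# A toy homeostat.  It tries to hold an internal variable near a set point,
# and the size of the remaining deviation is itself the evaluation it uses
# to adjust its own corrective behavior.  All numbers are illustrative.
SET_POINT = 37.0
state = 37.0
gain = 0.1          # how strongly it corrects; this is what it adapts

for step in range(500):
    # The environment impinges on the system and tends to upset the stasis.
    disturbance = random.uniform(-1.0, 1.0)
    state += disturbance

    # The homeostat acts to offset that influence.
    error = SET_POINT - state
    state += gain * error

    # Self-monitoring: if stasis is still poor after acting, strengthen
    # the corrective response a little.  No external teacher supplies the
    # evaluation; the deviation from stasis is the evaluation.
    if abs(SET_POINT - state) > 0.5:
        gain = min(gain + 0.01, 1.0)

print(round(state, 2), round(gain, 2))   # state stays near the set point; gain has adapted upward
```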

    To say it all differently, I see intelligence, pragmatic decision making, self-adaptive behavior, learning — as all closely related. Logic is mechanical rule following, and does not provide any of those.

  27. The problem is not thermostat vs logic gate.

    It is how numbers of them integrate themselves to form emergent systems, presumably by evolving.

    We may be able to jump start the process, but we are unlikely to program an AI system from first principles.

  28. Yes, I pretty much agree with petrushka here.

    Take a simple example. If we designed a robot with vision, then the distance between the eyes would be a parameter in determining the depth of viewed objects. The distance between the eyes changes from child to adult. So there is a continued need for adapting how vision works.
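
    In standard stereo vision the dependence is explicit (a textbook relation, not a claim about any particular robot): depth is focal length times baseline divided by disparity, and the baseline is the distance between the eyes.

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# The same disparity measurement, but the "eyes" have moved apart as the
# robot grew: every depth estimate changes unless the parameter is re-learned.
print(stereo_depth(700, 0.05, 20))   # narrower baseline -> 1.75 m
print(stereo_depth(700, 0.07, 20))   # wider baseline -> 2.45 m
```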

    Similarly, as we put on weight, we have to adapt how we walk and balance. Buy a new pair of shoes, and we must adapt our walking. This continued adaptation to changing circumstances is central to knowledge and learning.

  29. Neil,

    The actions of the logic gate are meaningful to the engineer, but have no obvious meaning to the logic gate itself. So the logic gate will always seem to be acting mechanically on meaningless data. There’s no basis for meaning there, and the intentionality problem seems insoluble with logic gates.

    Neurons also act mechanically on their “data”. Intentionality is a higher-level phenomenon.

    If this is true of neurons, why do you think it cannot be true of logic gates?

    And even more fundamentally, do you believe that neurons themselves can’t be modelled sufficiently accurately using digital circuitry? If so, why?

  30. Neurons also act mechanically on their “data”.

    I take Hebbian learning to be a matter of neurons acting adaptively, rather than mechanically. And Hebbian learning changes how they act on the data.

    And even more fundamentally, do you believe that neurons themselves can’t be modelled sufficiently accurately using digital circuitry?

    What is “sufficiently accurate”? Does the simulation require programming more a priori knowledge into the simulated neuron than the actual neuron will ever learn?

  31. Neil,

    I take Hebbian learning to be a matter of neurons acting adaptively, rather than mechanically.

    But adaptation and mechanism aren’t mutually exclusive. Hebb’s rule and other learning rules can be described in purely syntactic terms. As such, they can be modeled mechanistically — and digitally.
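
    In its crudest form, Hebb’s rule really is just an arithmetic update (a toy sketch, not a biophysical model):

```python
# "Cells that fire together wire together", written as a purely syntactic
# update rule.  A toy sketch, not a biophysical model.
weights = [0.1, 0.1, 0.1]
learning_rate = 0.05

def hebbian_step(pre_activity, post_activity):
    # Strengthen each connection in proportion to the product of
    # presynaptic and postsynaptic activity.
    for i, pre in enumerate(pre_activity):
        weights[i] += learning_rate * pre * post_activity

hebbian_step([1.0, 0.0, 1.0], post_activity=1.0)
print([round(w, 2) for w in weights])   # [0.15, 0.1, 0.15]: the co-active inputs got stronger
```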

    Do you disagree?

  32. Do you disagree?

    Of course, I disagree.

    You are looking at it very much from a design perspective. Yet you seemed to take offense when I said (in another thread) that philosophers are using a design perspective.

  33. Neil,

    You are looking at it very much from a design perspective.

    I’m looking at it from an information processing perspective, not a design perspective. Neurons process information, and so do logic gates.

    What can a neural network do that is out of reach for digital logic? Do you think neurons perform some sort of non-computable function?

  34. Neurons process information, and so do logic gates.

    That’s an assumption that is being imposed on the neurons. There’s no actual evidence of information processing.

  35. I do think neurons do something that is non-computable. I’m not sure it is “essential” to AI, but I think it makes artificial humans unlikely.

  36. petrushka: I do think neurons do something that is non-computable.

    I am often agreeing with petrushka. In this case, I probably agree with what he meant, but I would not agree with what he said.

    The thing is, “non-computable” is a technical term, and I don’t think it fits. I would prefer to say that what the neuron does is neither computable nor non-computable.

    In a previous comment, keiths said:

    Hebb’s rule and other learning rules can be described in purely syntactic terms. As such, they can be modeled mechanistically — and digitally.

    I instead see the neurons as dealing with semantics, rather than syntax. What I see them doing, is constructing (and sometimes inventing) syntax that can carry the semantics. Or, in other terminology, I see the neurons as categorizing. And categorization is prior to computation.

  37. What I meant is that neurons cannot be exactly emulated by computation. We certainly can make circuits that behave a lot like neurons, but I suspect it will be difficult to scale them up into brains.

    For much the same reason it is difficult to make replicators from scratch. One doesn’t need to invoke magic or vitalism to notice that emergent properties are difficult to predict.

    It’s a GA problem. There doesn’t seem to be any general theory that allows scaling up the complexity of GA systems.

  38. The problem may not be the silicon neuron (google brains in silicon), but the network that is emergent. All the problem-solving networks I am aware of are evolved. One of their characteristics is that their behavior is difficult to analyze. In the case of one evolved electronic circuit, there is no obvious function for some components.

    So my prediction is that even with artificial neurons as a given, AI will be a slow and evolutionary climb, characterized by “random” variation, selection and extinction. I don’t see any “rational” path.

  39. Neil,

    That’s an assumption that is being imposed on the neurons. There’s no actual evidence of information processing.

    There’s tons of evidence, starting with Hubel and Wiesel and snowballing from there.

    What do you think all those neurons are doing, if not processing information?

  40. Neil,

    I instead see the neurons as dealing with semantics, rather than syntax. What I see them doing, is constructing (and sometimes inventing) syntax that can carry the semantics.

    Neurons fire (or don’t fire) according to the laws of physics. They don’t “care” what their inputs mean, and they don’t “care” about what it means when they fire. If you stimulate a neuron in vitro, it will continue to fire even though the “meaning” of the stimulus has changed completely.

    Or, in other terminology, I see the neurons as categorizing. And categorization is prior to computation.

    Categorization is just another form of computation. It’s even built into programming languages. What is a case statement, if not a categorizing directive?
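
    For instance (using Python’s match statement, available since 3.10, as the case statement; the traffic-light categories are just an invented example):

```python
def categorize(signal):
    # A case statement is just a list of categories plus the rule for
    # deciding which one an input falls into.
    match signal:
        case "red" | "amber":
            return "stop"
        case "green":
            return "go"
        case _:
            return "unknown"

print(categorize("red"), categorize("green"))   # stop go
```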

  41. Petrushka:

    I do think neurons do something that is non-computable.

    Neil:

    I would prefer to say that what the neuron does is neither computable nor non-computable.

    Could both of you be a little more specific? What exactly does a neuron do that cannot be emulated by digital logic or a computational process?

  42. I get the feeling that you aren’t paying attention. What is it that people do that isn’t fully determined by laws of physics?

    Strong reductionism doesn’t account for emergent properties. Logic circuits don’t do what brains do except when networked in ways that can’t be anticipated from first principles.

    Building more logic circuits would be like pouring oil on the outside of an engine. The network that makes neurons into brains must evolve.

  43. I’m paying attention, but I can’t make sense of what the two of you are saying.

    Neil seems to think that neurons do something — ‘categorizing’ — that is beyond the reach of mere information processing, and therefore cannot be accomplished by logic gates. That makes no sense to me, as categorizing is obviously a form of information processing, and information processing is obviously well within the reach of logic gates.

    Similarly, you (Petrushka) claim that neurons do something that is non-computable.

    My question, again, is:

    What exactly does a neuron do that cannot be emulated by digital logic or a computational process?

    And why do you think that it cannot?

  44. That makes no sense to me, as categorizing is obviously a form of information processing, and information processing is obviously well within the reach of logic gates.

    No, it is not a form of information processing.

    The term “categorizing” seems to be used for two quite different things:

    1: Carving up the world;
    2: Grouping things together based on similarity.

    These are much conflated. I am concerned with the first of those. It is used in basic interaction with reality. Categorization is what produces symbols that can be used in computation. It is prior to computation.

  45. Neil,

    Categorization is what produces symbols that can be used in computation. It is prior to computation.

    I could easily design a digital temperature sensor in which logic gates are responsible for generating the symbols that represent the temperature. In fact, the whole thing could be built out of nothing but a temperature-to-voltage transducer, some passive parts for scaling the transducer output relative to the threshold voltage, and logic gates to handle the rest of the job.
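
    In software terms (Python standing in for what the comparators and gates would do, with invented scaling numbers), the symbol-generating step is just quantization:

```python
def temperature_symbol(voltage, v_min=0.0, v_max=5.0, bits=8):
    # Quantize the scaled transducer voltage into an 8-bit code.  This is
    # the step where the "symbol" for the temperature gets generated, and
    # it is the sort of job comparators and gates handle routinely.
    levels = 2 ** bits
    code = int((voltage - v_min) / (v_max - v_min) * (levels - 1))
    return format(max(0, min(levels - 1, code)), "08b")

print(temperature_symbol(1.25))   # '00111111'
```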

    I can’t see how any of this is a problem for computation generally or AI specifically.

  46. keiths: Neil,

    Categorization is what produces symbols that can be used in computation. It is prior to computation.

    I could easily design a digital temperature sensor in which logic gates are responsible for generating the symbols that represent the temperature. In fact, the whole thing could be built out of nothing but a temperature-to-voltage transducer, some passive parts for scaling the transducer output relative to the threshold voltage, and logic gates to handle the rest of the job.

    I’m puzzled. Do you really not understand that a digital thermometer categorizes?

  47. I’m puzzled. Do you really not understand that a digital thermometer categorizes?

    Of course it does. The question is why you think that this isn’t a form of information processing, and why you think that it cannot be achieved by a system based on logic gates.

  48. I take the “information” of “information processing” to be Shannon information. I take Shannon information to be a sequence of symbols (such as bits). Categorization is how we get symbols in the first place. So categorization is prior to information processing.

    Even your basic logic chip does categorization of its input signals to decide whether to treat the inputs as 0 or 1 bits.
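
    In TTL terms, roughly (the exact voltage ranges vary by logic family; these are the usual textbook figures):

```python
def ttl_input_category(voltage):
    # A whole range of input voltages is lumped together and treated the
    # same; that lumping is the categorization that yields a bit.
    if voltage <= 0.8:
        return 0        # anything in this band counts as "low"
    if voltage >= 2.0:
        return 1        # anything in this band counts as "high"
    return None         # undefined region: the chip makes no promise

print(ttl_input_category(0.3), ttl_input_category(3.7))   # 0 1
```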
