AI Skepticism

In another thread, Patrick asked:

If it’s on topic for this blog, I’d be interested in an OP from you discussing why you think strong AI is unlikely.

I’ve now written a post on that at my own blog.  Here I will summarize, and perhaps expand a little on, what I see as the main issues.

As you will see from the post at my blog, I don’t have a problem with the idea that we could create an artificial person.  I see that as possible, at least in principle, although it will likely turn out to be very difficult.  My skepticism about AI stems from my view that computation is too limited.

I see two problems for AI.  The first is a problem of directionality or motivation or purpose, while the second is a problem with data.

Directionality

Interestingly, Patrick’s message, where he asked for this thread, contained a picture of Spock from Star Trek.  As a Star Trek character, Spock was known to be very logical and not at all emotional.  That’s what I think you get with computation.  However, as I see it, something like emotions is actually needed.  They are what would give an artificial person some sense of direction.

To illustrate, consider the problem of learning.  One method that works quite well is what we call “trial and error”.  In machine learning systems, this is called “reinforcement learning”.  And it typically involves having some sort of reward system that can be used to decide whether a trial-and-error step is moving in the right direction.
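
To make that concrete, here is a minimal sketch of reward-driven trial and error, assuming a toy two-armed bandit task (the payoff probabilities, the exploration rate, and the whole setup are illustrative, not taken from any particular system):

    import random

    def pull(arm):
        # Hypothetical environment: arm 1 pays off more often than arm 0.
        return 1.0 if random.random() < (0.3 if arm == 0 else 0.7) else 0.0

    values = [0.0, 0.0]   # estimated value of each action
    counts = [0, 0]
    epsilon = 0.1         # how often to try something at random (the "error" in trial and error)

    for trial in range(10000):
        explore = random.random() < epsilon
        arm = random.randrange(2) if explore else values.index(max(values))
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # nudge the estimate toward the observed reward

    print(values)   # should end up near [0.3, 0.7]

The point of the sketch is only that the reward signal supplies the direction; without it, the trials would be aimless.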

Looking at us humans, we have a number of such systems.  We have pain avoidance, food seeking, pleasure seeking, curiosity, and emotions.  In the machine learning lab, special-purpose reward systems can be set up for particular learning tasks.  But an artificial person would need something more general in order to support a general learning ability.  And I doubt that can be done with computation alone.

Here’s a question that I wonder about.  Is a simple motivational system (or reward system) sufficient?  Or do we need a multi-dimensional reward system if the artificial person is to have a multi-dimensional learning ability?  I am inclined to think that we need a multi-dimensional reward system, but that’s mostly a guess.
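
For what it’s worth, here is a toy illustration of what a multi-dimensional reward system might look like: several reward channels (the channels and the weights below are entirely hypothetical) combined into a single learning signal.  Whether a fixed weighting like this could ever be adequate, or whether the weights themselves would have to be learned, is part of the question.

    def combined_reward(pain, food, curiosity, weights=(-1.0, 0.5, 0.2)):
        # Each channel is a separate dimension of reward; the weights trade them off.
        return sum(w * c for w, c in zip(weights, (pain, food, curiosity)))

    print(combined_reward(pain=0.2, food=1.0, curiosity=0.4))   # -0.2 + 0.5 + 0.08 = 0.38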

The data problem

A computer works with data.  But the more I study the problems of learning and knowledge, the more that I become persuaded that there isn’t any data to compute with.  AI folk expect input sensors to supply data.  But it looks to me as if that would be pretty much meaningless noise.  In order to have meaningful data, the artificial person would have to find (perhaps invent) ways to get its own data (using those input sensors).

If I am right about that, then computation isn’t that important.  The problem is in getting useful data in the first place, rather than in doing computations on data received passively.

My conclusion

A system built of logic gates does not seem to be what is needed.  Instead, I have concluded that we need a system built out of homeostatic processes.

165 thoughts on “AI Skepticism”

  1. Neil,

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    Does that mean that you don’t think homeostatic processes can be implemented using logic gates? If so, why?

  2. I think that your contemplations are a bit vague and reliant on a lot of supposition.

  3. The contemplations are vague because our knowledge of how brains work is vague. What isn’t vague is the certainty that the problem is very hard.

    I share the belief that brains are much more like multidimensional thermostats than they are like programmable computers.

    I think we can emulate brains, but I also think we are far from doing so. Brains embody their programming in their configuration. It is the same kind of problem encountered in trying to design a self-replicating molecule. We have no theory of emergence, no way of predicting emergent phenomena or designing emergent systems except by evolving them.

  4. Multidimensional computation is the problem that I see.

    We have brought this up in the context of biological evolution, but I seldom see it applied to the behavior of a learning system.

    If I see a computing machine that can solve protein folding with an efficiency and economy equivalent to chemistry, I will admit we are progressing toward AI.

  5. Interesting post, Neil, here and at your blog.

    Looking at us humans, we have a number of such systems. We have pain avoidance, food seeking, pleasure seeking, curiosity, and emotions. In the machine learning lab, special-purpose reward systems can be set up for particular learning tasks. But an artificial person would need something more general in order to support a general learning ability. And I doubt that can be done with computation alone.

    I fail to see the problem here, architecturally at least. In practical terms, of course, it’s a lot of complicated software systems to build. But as a matter of system design, I can’t see the basis for doubt. If we just decided as a provisional approach that we would take our ability to implement “particular learning tasks”, and implemented one for each “goal vector” we identify in a recipe for human-like intelligence, we have a lot of work to do, but it’s then just a bundle of learning tasks we are aggregating. How we integrate and balance those priorities is a derivative learning task, but it seems to require just the same resources we’ve already deployed for the first-order learning processes.

    Where does that approach fail?

    Here’s a question that I wonder about. Is a simple motivational system (or reward system) sufficient? Or do we need a multi-dimensional reward system if the artificial person is to have a multi-dimensional learning ability? I am inclined to think that we need a multi-dimensional reward system, but that’s mostly a guess.

    A simple anything here won’t succeed. From what we know about human intelligence and the progress already made in AI, any viable product that we would assess as “strong AI” is going to be extraordinarily complex. Human thinking is a “federation of systems”, and each of the component systems themselves are highly complex.

    I don’t think a requirement for a multi-dimensional reward system is controversial; it’s required. But I don’t see that as a barrier itself. Why do you, if you do?

    On the emotion thing, there’s a problem of (inadvertent) equivocation on the word “logic” that comes up regularly in these discussions. Your Spock reference highlights the equivocation. In a colloquial sense, Spock is “logical” because he’s minimally emotional (although that’s a misconception about emotions as well, but I won’t bother with that here). But the most emotional, “illogical” person you might compare Spock to is exactly as logical as Spock or anyone else in a computational sense of the term, and the computational sense of “logic” is what we are focusing on here, right?

    That is, emotion is computation in as thoroughgoing a sense as mathematical figuring in one’s head. Emotional responses are driven by stimuli and interaction with the brain, nervous system and other parts of the body in rule-based, physical ways.

    If you are part of a team that wants to build an artificial human, the developer heading up the “emotional systems” squad will be taking on a big part of the coding task for the project. It’s a conceit, common enough in secularists (it’s a ubiquitous natural intuition), that emotions are somehow not process-driven, physical phenomena. We used to think thought itself was immaterial magic, too, right? Today we can watch the neuroreceptors in our guts fire as part of the emoting process, and watch dendrites facilitating the signal analysis in the brain that influences our emotions. That’s computing in action.

  6. If it happens, it will likely be something like “the internet waking up”, I think.

  7. It’s too late to worry about that. It’s already too late to pull the plug without severely damaging civilization. We are already in a symbiotic relationship with artificial intelligence.

    I think, however, that architecture matters when discussing strong AI.

    I think the Turing test is irrelevant. We will know we are on the right track architecturally when we have something as efficient as an insect brain.

    There are folks at Stanford building artificial neuron chips, modelling 60,000 neurons per chip with one watt power consumption. Assuming they are correctly modelling neurons, they would need somewhere between a hundred and several thousand chips to model an insect brain.

    That is an improvement over supercomputer emulation, but still a lot of power. And the small matter of programming remains.

    There seems to be a lot of BS about how much faster transistors are than neurons, but when the rubber hits the road, it seems quite difficult to emulate neurons, just as it seems difficult to emulate chemistry.

  8. petrushka:
    It’s too late to worry about that. It’s already too late to pull the plug without severely damaging civilization. We are already in a symbiotic relationship with artificial intelligence.

    I think, however, that architecture matters when discussing strong AI.

    I think the Turing test is irrelevant. We will know we are on the right track architecturally when we have something as efficient as an insect brain.

    There are folks at Stanford building artificial neuron chips, modelling 60,000 neurons per chip with one watt power consumption. Assuming they are correctly modelling neurons, they would need somewhere between a hundred and several thousand chips to model an insect brain.

    That is an improvement over supercomputer emulation, but still a lot of power. And the small matter of programming remains.

    There seems to be a lot of BS about how much faster transistors are than neurons, but when the rubber hits the road, it seems quite difficult to emulate neurons, just as it seems difficult to emulate chemistry.

    Is it hubris to think we are the arbiters and custodians of intelligence? Wittgenstein: “If a lion could speak, we could not understand him”. (PI, p.223)

  9. I think evolution is a kind of intelligence, and I have always supported Lizzie’s contention that brains implement something analogous to evolution.

    I think learning is the defining attribute of intelligence. We have trouble spotting learning when it occurs over geologic time periods, but we do recognize it.

    If you equate intelligence with consciousness or self-awareness, the problem becomes more difficult.

    But in terms of AI, I don’t think we are close enough to be concerned about self-awareness.

  10. If a system built of logic gates is processing (doing anything), then it is switching states. I guess it depends on whether you consider that to count as stasis.

  11. If we just decided as a provisional approach that we would take our ability to implement “particular learning tasks”, and implemented one for each “goal vector” we identify in a recipe for human-like intelligence, we have a lot of work to do, but it’s then just a bundle of learning tasks we are aggregating.

    I think you finish up with a mindless robot converging on the knowledge that you programmed it to acquire. That seems different from what we normally think of as learning. Programming a “goal vector” would already be programming in knowledge.

    I’m looking at learning from an empiricist perspective, which I think best fits human learning. Most AI researchers are nativists — they believe in a lot of innate knowledge.

    If you program in all of the goal vectors to learn Newtonian physics, then is that AI system capable of instead coming up with relativity?

  12. petrushka: I think evolution is a kind of intelligence, and I have always supported Lizzie’s contention that brains implement something analogous to evolution.

    I think learning is the defining attribute of intelligence. We have trouble spotting learning when it occurs over geologic time periods, but we do recognize it.

    I’m quoting that, mostly to highlight it. My view of intelligence is very similar.

  13. Neil:

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    keiths:

    Does that mean that you don’t think homeostatic processes can be implemented using logic gates? If so, why?

    Neil:

    If a system built of logic gates is processing (doing anything), then it is switching states. I guess it depends on whether you consider that to count as stasis.

    A thermostat coupled to an air conditioner is a homeostatic mechanism. I can replace the thermostat with a temperature sensor and a bunch of logic gates. After the substitution, I still have a homeostatic mechanism.
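
    For concreteness, a minimal sketch of that substitution (the setpoint, the deadband, and the crude room model are all made up for illustration; the comparisons are exactly the kind of thing a handful of logic gates implements):

        SETPOINT = 68.0
        HYSTERESIS = 1.0   # deadband, to avoid rapid on/off cycling

        def control_step(temp, ac_on, furnace_on):
            # Pure Boolean decisions, implementable with comparators and gates.
            if temp > SETPOINT + HYSTERESIS:
                return True, False      # too warm: A/C on, furnace off
            if temp < SETPOINT - HYSTERESIS:
                return False, True      # too cold: furnace on, A/C off
            return ac_on, furnace_on    # inside the deadband: keep the current state

        # Crude simulation of the room closing the loop.
        temp, ac_on, furnace_on = 74.0, False, False
        for _ in range(50):
            ac_on, furnace_on = control_step(temp, ac_on, furnace_on)
            temp += -0.5 if ac_on else (0.5 if furnace_on else 0.1)
        print(round(temp, 1))   # hovers near the setpoint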

  14. A thermostat coupled to an air conditioner is a homeostatic mechanism.

    No, it isn’t.

    You have to include the room that is being air conditioned as part of the system, before you can say that it is homeostatic.

    Somewhere on my computer disk, there’s a draft of a languishing unpublished paper where I actually use such a system to illustrate how I believe learning works. And yes, that draft allowed that the thermostat could be based on a digital temperature sensor.

  15. Neil,

    And yes, that draft allowed that the thermostat could be based on a digital temperature sensor.

    Which would seem to contradict this:

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    Do you agree?

  16. Which would seem to contradict this:

    A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.

    No, it doesn’t. The logic gates used are a minor implementation detail. The important components are the temperature sensor, the air conditioner and the heat generating furnace.

  17. Depending on the implementation, a digital thermostat is emulating an analog device. You have a/d and d/a conversion going on, plus some sort of digital representation of analog states in the logic.

  18. Neil,

    The logic gates used are a minor implementation detail. The important components are the temperature sensor, the air conditioner and the heat generating furnace.

    The ‘intelligence’, rudimentary though it is, is in the thermostat. The temperature sensor merely indicates the temperature. The furnace increases the temperature. The air conditioner decreases it. It is the thermostat that ‘decides’ what to do, when.

    What specific homeostatic processes involved in intelligence cannot be implemented digitally, in your opinion?

    petrushka:

    Depending on the implementation, a digital thermostat is emulating an analog device. You have a/d and d/a conversion going on, plus some sort of digital representation of analog states in the logic.

    ‘Emulating’ is a prejudicial word. Programmable digital thermostats can do things that would be nearly impossible to accomplish with an analog device. Except for the fact that analog thermostats came first historically, you could just as well say that they are a crude emulation of digital thermostats.

    It’s interesting that in information processing, emulation is ‘the real thing’. If I emulate a digital adder, the results it produces are real sums, not ’emulated sums’. If an emulated brain behaved like the real thing, in what sense would its output be merely ’emulated intelligence’, versus real intelligence?

  19. The ‘intelligence’, rudimentary though it is, is in the thermostat.

    The thermostat is a mindless mechanism. There is no intelligence there.

    The temperature sensor merely indicates the temperature.

    Nonsense.

    If I write the temperature on a piece of paper, then that piece of paper merely indicates the temperature. The sensor does far more than that.

    The furnace increases the temperature. The air conditioner decreases it. It is the thermostat that ‘decides’ what to do, when.

    In my book, that makes you a dualist. You see intelligence in the mindless manipulation of meaningless marks. Or, as it is often described, you see intelligence in the manipulation of abstract symbols. So apparently, you take intelligence to be a purely immaterial operation on abstract symbols, and the material part (the sensing, the thermodynamic actions of the air conditioner and furnace) doesn’t count. What is not dualist about that?

    For me, intelligence has to do with real-world interaction, and learning has to do with finding the best way of carrying out that interaction. My proposed setup allows the system as a whole to conduct experiments; running the air conditioner and seeing how that affects the temperature would be one example. It will learn with trial-and-error experimentation that involves real-world interaction. The temperature sensor, the air conditioner and the furnace are the essential components for that interaction. And I guess we need to throw in a timing device, too.

  20. It’s emulating a governor. Mechanical thermostats can be digital, but I suspect they were originally governors.

    The distinction is relevant only if one has an advantage in a given implementation.

  21. Neil,

    The thermostat is a mindless mechanism. There is no intelligence there.

    If there’s intelligence anywhere in that homeostatic system, it’s in the thermostat. The thermostat decides what to do, when.

    keiths:

    The temperature sensor merely indicates the temperature.

    Neil:

    Nonsense.

    If I write the temperature on a piece of paper, then that piece of paper merely indicates the temperature.

    Not if the temperature changes. The temperature sensor indicates the current temperature, but that’s all it does.

    keiths:

    The furnace increases the temperature. The air conditioner decreases it. It is the thermostat that ‘decides’ what to do, when.

    Neil:

    In my book, that makes you a dualist. You see intelligence in the mindless manipulation of meaningless marks.

    What an odd thing to say. The physical manipulation of ‘marks’ is the antithesis of a dualist phenomenon.

    Or, as it is often described, you see intelligence in the manipulation of abstract symbols. So apparently, you take intelligence to be a purely immaterial operation on abstract symbols, and the material part (the sensing, the thermodynamic actions of the air conditioner and furnace) doesn’t count. What is not dualist about that?

    But I don’t think that the manipulation of symbols is an immaterial operation. Where did you get that idea?

    For me, intelligence has to do with real world interaction, and learning has to do with finding the best way of carrying out that interaction.

    I think that’s how it evolved, but that certainly doesn’t mean that intelligence is limited to real-world interactions. Pure mathematics requires intelligence. Wouldn’t you agree?

    My proposed setup allows the system as a whole to conduct experiments; running the air conditioner and seeing how that affects the temperature would be one example. It will learn with trial-and-error experimentation that involves real-world interaction. The temperature sensor, the air conditioner and the furnace are the essential components for that interaction. And I guess we need to throw in a timing device, too.

    I don’t see why any of that is out of reach for a digital system.

    I’m still interested in hearing your response to this question:

    What specific homeostatic processes involved in intelligence cannot be implemented digitally, in your opinion?

  22. If there’s intelligence anywhere in that homeostatic system, it’s in the thermostat. The thermostat decides what to do, when.

    Except for the temperature sensor part, the thermostat is a mechanical device that rigidly follows rules programmed into it. In other words, it is a mere cog in a machine.

    The only “decision” it makes is what is forced on it. Materialist deniers of free will are forever telling us that is not a decision. No, there is no intelligence there.

    Not if the temperature changes. The temperature sensor indicates the current temperature, but that’s all it does.

    Again, nonsense.

    The sensor categorizes. For example, it might be placing the current environment in the category of things with temperature between 67.5 and 68.5. It categorizes, and then it provides Shannon information specifying the category.

    When I look at the philosophy literature, I often see it mentioned that categorization is cognitively important. I don’t recall ever finding an account of why or how. I suspect that philosophers don’t really believe it to be important. And here we have you completely dismissing it out of hand.

    I agree that it is cognitively important. I am giving an example of how it is done. And I am saying why it is important.

    In this case, it is the Shannon information from categorization that is the real decision maker. The rest of the thermostat is a mere cog in the machine doing what it is programmed to do. The categorization is a decision, and that drives the action.
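
    (For illustration only, a toy version of that kind of categorization, with hypothetical one-degree-wide bins; the discrete category label, rather than the raw reading, is what the rest of the device then acts on.)

        def categorize(raw_temp, bin_width=1.0):
            # e.g. readings near 68 (roughly 67.5 to 68.5) land in the "68" category
            return round(raw_temp / bin_width) * bin_width

        print(categorize(67.9))   # 68.0
        print(categorize(68.4))   # 68.0
        print(categorize(68.6))   # 69.0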

    What an odd thing to say. The physical manipulation of ‘marks’ is the antithesis of a dualist phenomenon.

    But the marks don’t matter. They are an implementation detail. They could be implemented in a gazillion different ways, and it wouldn’t matter. What matters to you are the abstract operations (logic operations) on abstract symbols. And that is immaterial. Any good dualist, including a religious dualist, agrees that the immaterial operations need to be connected to physical operation, in order to have a physical effect. So you are using your arbitrary choice of implementation detail as a way of connecting your immaterial decision making to the material world. And, in the meantime you dismiss the role of categorization, which is the real decision maker.

    Pure mathematics requires intelligence. Wouldn’t you agree?

    Yes, but the intelligence is not in the mechanical rule-based symbol manipulation. It is in the ability to evaluate, to decide which of many possible symbol manipulations to use. It is the ability to make a useful choice in the presence of uncertainty. What the temperature sensor does in categorizing is a simpler version of making a choice in the presence of uncertainty, and is a better candidate for intelligence than is the forced behavior of the cog in the machine.

  23. Neil Rickert: I think you finish up with a mindless robot converging on the knowledge that you programmed it to acquire. That seems different from what we normally think of as learning. Programming a “goal vector” would already be programming in knowledge.

    I don’t recognize that in the learning models we’re contemplating here. For example, if we deploy perceptrons for pattern recognition against visual stimuli, we are equipping our synthetic human with “knowledge” (I prefer “tools” here, but won’t argue for that just now) concerning classification of inbound percepts — the ability to recognize similar-ish shapes, silhouettes, and perhaps colors.

    That capability importantly does not preload “square” or “red” into the system, but rather a general ability to classify, discriminate, cluster and chunk. Recognizing a human(-ish) face would be knowledge not built-in up front, but acquired as the result of combining other priorities with the robot’s pattern recognition tools.
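
    As a minimal illustration of that kind of component, here is a single perceptron with made-up data; nothing about any particular shape or colour is preloaded, only a general rule for adjusting weights in response to labelled examples:

        def train_perceptron(samples, labels, epochs=20, lr=0.1):
            n = len(samples[0])
            w, b = [0.0] * n, 0.0
            for _ in range(epochs):
                for x, y in zip(samples, labels):              # y is 0 or 1
                    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                    err = y - pred                             # +1, 0, or -1
                    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                    b += lr * err
            return w, b

        # Learns a simple linearly separable rule (here, roughly "first feature bigger than second").
        samples = [(0.9, 0.1), (0.8, 0.3), (0.2, 0.7), (0.1, 0.9)]
        labels = [1, 1, 0, 0]
        print(train_perceptron(samples, labels))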

    That should remind you of humans. Humans aren’t born with “car” recognition patterns and presets in their brains, but rather with a general tool for pattern recognition. A sophisticated implementation would not even try to “preload” such knowledge; this is a brittle and doomed strategy. Instead, the algorithms would rely on meta-knowledge heuristics, the ability to associate, connect and learn in a general, plastic way analogous to how humans learn. We’re nowhere close to anything like a synthetic human, with all the complex systems a human integrates, but you can go examine software systems that do visual learning and pattern recognition for all sorts of concepts and targets that were not pre-loaded or even contemplated when the software was written.

    The high-level goals are not “recognize patterns”. Visual performance is just a tool in the service of top priorities like “eat”, “reproduce”, “seek pleasure”, “avoid pain”, etc. Just like with humans, the software infrastructure is marshaled in a dynamic way that prioritizes learning particulars as valuable in some way toward pursuing those high-level goals.

    I’m looking at learning from an empiricist perspective, which I think best fits human learning. Most AI researchers are nativists — they believe in a lot of innate knowledge.

    I don’t think the tabula rasa view is credible anymore in light of modern scientific discoveries, but I don’t credit humans with much innate knowledge. Even what knowledge we might call “innate knowledge” is really “tools and heuristics” as opposed to a set of propositions and statements embraced.

    In any case, my position is that the human is a great model to emulate if you want to build a synthetic human. The architecture of human cognition, especially neuronal plasticity and meta-representational conceptualization, provides powerful and general capabilities to do what humans do, and of course that is no accident. My inclination toward the plausibility of strong AI (if not the practicability of such in the foreseeable future) is very much empiricist: build a software analog to the human model of computation. I’m awestruck by the scope of such an effort, but I’ve never been able to find someone who can point to the part or feature of human computation that defies rendering in silicon.

    If you program in all of the goal vectors to learn Newtonian physics, then is that AI system capable of instead coming up with relativity?

    I think the basic problem here is that you’ve not grasped the level of abstraction and generality at which machine learning and software systems can operate. Programming an artificial mind/brain with “Newtonian physics”, or anything “hard-coded” like that, is a non-starter, and this has been well known for a very long time. The engineering requirement here is for generalized learning, generalized associations, generalized and flexible reconfigurations, and the ability to abstract and move between layers of abstraction and levels of description. This is what, as best we can tell, makes humans singular in their intelligence, and it also poses no problems in principle for artificial implementation.

  24. eigenstate:
    I’m looking at learning from an empiricist perspective, which I think best fits human learning. Most AI researchers are nativists — they believe in a lot of innate knowledge.

    I don’t think the tabula rasa view is credible anymore in light of modern scientific discoveries, but I don’t credit humans with much innate knowledge. Even what knowledge we might call “innate knowledge” is really “tools and heuristics” as opposed to a set of propositions and statements embraced.

    Maybe the proper idea is not so much “innate knowledge”, but “innate goals and motivations for first learning steps.” Human infants and children are linguistic savants, for one thing. The language instinct, as Pinker calls it, isn’t built upon a generic neuronal network, but on a congenitally created template of rules and expectations that allow linguistic knowledge to be quickly generalized from a minimum of examples. I suspect we have many such templates at birth.

    Biological intelligence is a means to an end, not the end itself. Machine intelligence is intractable only if it is considered its own end. If it is programmed into the design of a self-serving, interactive entity, it still won’t be easy, but it will be much easi-ER.

  25. Neil,

    Except for the temperature sensor part, the thermostat is a mechanical device that rigidly follows rules programmed into it. In other words, it is a mere cog in a machine.

    Wait a minute. You think that temperature sensors are not merely mechanical devices?

    The only “decision” it [the thermostat] makes is what is forced on it. Materialist deniers of free will are forever telling us that is not a decision. No, there is no intelligence there…

    The sensor categorizes. For example, it might be placing the current environment in the category of things with temperature between 67.5 and 68.5. It categorizes, and then it provides Shannon information specifying the category.

    Just to be absolutely clear — are you claiming that temperature sensors, unlike thermostats, are intelligent? That unlike thermostats, they are not merely “mechanical devices”?

    When I look at the philosophy literature, I often see it mentioned that categorization is cognitively important. I don’t recall ever finding an account of why or how. I suspect that philosophers don’t really believe it to be important. And here we have you completely dismissing it out of hand.

    Temperature sensors don’t have to categorize. They can simply transduce, which is what happens in an old-fashioned thermostat. Temperature is represented as mechanical displacement, and that’s all the temperature sensor does. It is the setting of the thermostat that determines whether a given displacement causes a switch to close, activating the furnace (or the air conditioner).

    In this case, it is the Shannon information from categorization that is the real decision maker. The rest of the thermostat is a mere cog in the machine doing what it is programmed to do. The categorization is a decision, and that drives the action.

    It seems very silly to argue that a digital temperature sensor makes a decision when it ‘decides’ to indicate 68 degrees versus 67 degrees, but that a thermostat doesn’t make a decision when it ‘decides’ to turn on the A/C because the indicated temperature is too high. How is the former a decision if the latter isn’t?

    keiths:

    What an odd thing to say. The physical manipulation of ‘marks’ is the antithesis of a dualist phenomenon.

    Neil:

    But the marks don’t matter. They are an implementation detail. They could be implemented in a gazillion different ways, and it wouldn’t matter.

    Every correct implementation is a material implementation. My position is the antithesis of dualism.

    What matters to you are the abstract operations (logic operations) on abstract symbols. And that is immaterial.

    No, what matters to me is that the concrete operations have the desired effect. We can express the desired behavior abstractly, in terms of logic operations, but that doesn’t mean that anything immaterial is actually happening. Boolean logic is just a shorthand way of describing any physical implementation, whether based on logic gates or tinkertoys, that has the desired behavior.

    Any good dualist, including a religious dualist, agrees that the immaterial operations need to be connected to physical operation, in order to have a physical effect. So you are using your arbitrary choice of implementation detail as a way of connecting your immaterial decision making to the material world.

    I am not a dualist, Neil. I do not believe that anything non-physical is involved in my own decision making, nor in the operation of a thermostat. 🙂

    keiths:

    Pure mathematics requires intelligence. Wouldn’t you agree?

    Neil:

    Yes, but the intelligence is not in the mechanical rule-based symbol manipulation. It is in the ability to evaluate, to decide which of many possible symbol manipulations to use. It is the ability to make a useful choice in the presence of uncertainty.

    Why do you think that decision-making in the face of uncertainty is out of the reach of systems based on digital logic? Surely you’re aware that digital systems already make decisions in the face of uncertain and incomplete information, right?

    Also, I’m still hoping for an answer to my question:

    What specific homeostatic processes involved in intelligence cannot be implemented digitally, in your opinion?

  26. llanitedave: Maybe the proper idea is not so much “innate knowledge”, but “innate goals and motivations for first learning steps.” Human infants and children are linguistic savants, for one thing. The language instinct, as Pinker calls it, isn’t built upon a generic neuronal network, but on a congenitally created template of rules and expectations that allow linguistic knowledge to be quickly generalized from a minimum of examples. I suspect we have many such templates at birth.

    You’re pointing my language in the right direction. Thinking about Pinker’s view on this, I think “innate ability” captures more of what I’m driving at. Humans have innate goals and motivations, yes, and these govern the use of our abilities. But both of these — innate goals and innate abilities — can be casually referred to as “knowledge” (e.g. I “know” what I want, and I “know” how to understand structured syntax, or recognize circles, etc.), but these are epistemically something distinct from “propositional knowledge”, the development of a repository of statements and synthetic (vs. analytic) concepts about the extramental world.

    That’s an important distinction, because I think this is where many critics of strong AI get off the bus — they can’t see the assembly of enough preloaded “propositional knowledge” as possible. I’m inclined to agree with that limitation, but it’s not an obstacle for the way an artificial intelligence that aimed to be human-like would be constructed. The propositional knowledge would be nearly entirely contingent on the experiences of that thing in its environment, and learning would be general and dynamic on the same level as humans. The project would focus on high level goals, and then learning/feedback loops that provided the kinds of back-propagation that drives real knowledge, derived from whatever environment the robot is “raised” in, as opposed to knowledge that comes “pre-loaded”.

    Biological intelligence is a means to an end, not the end itself. Machine intelligence is intractable only if it is considered its own end. If it is programmed into the design of a self-serving, interactive entity, it still won’t be easy, but it will be much easi-ER.

    Yep.

    I forgot to answer Neil’s question about a strong AI robot discovering relativity. Not only do I think that’s plausible in principle, once one gets to a point where an artificial mind can develop new hypotheses by drawing from empirical data available like that, the robot will be much more efficient in producing new insights, if only because it can be optimized and “scaled” to do so, in ways that human computation can’t (currently).

  27. eigenstate,

    I forgot to answer Neil’s question about a strong AI robot discovering relativity. Not only do I think that’s plausible in principle, once one gets to a point where an artificial mind can develop new hypotheses by drawing from empirical data available like that, the robot will be much more efficient in producing new insights, if only because it can be optimized and “scaled” to do so, in ways that human computation can’t (currently).

    It’s already starting to happen:

    Robot scientist makes discoveries without human help

  28. The problem I have with the idea of strong AI is what I call the problem of “want” (desire). Logic gates cannot create desire to do things.

    The reason humans and other animals do many things is, on a simplistic level, because (as noted above in several places) there are intrinsic rewards set up for doing things. These rewards create a system of “drives” in biological organisms that in turn provide an impetus for those organisms to do things. And those types of driving systems I can see being artificially established in an AI system. They would be complex to be sure, but I can conceive of such a thing being accomplished.

    However, drives in biological systems have led to the emergence of what we call desire, and that is not something I see being programmable. For instance, whales have drives to find food, sex partners, and protect their young (among other drives). Interestingly, in most cetacean species, the drive to find food and sex partners coupled with the ability to strategize has led to the emergence of a desire to play (this happens to have occurred in primates as well). In particular, a number of cetacean species will play keep-away with seaweed strips – for hours – which does not lead to either mating or eating. They will also body surf, for no other apparent reason than enjoyment.

    I just don’t see things like “enjoyment” or “desire” coming about any other way than completely by accident.

  29. The problem I have with the idea of strong AI is what I call the problem of “want” (desire). Logic gates cannot create desire to do things.

    I see that as a variation on “mere matter can’t be conscious”.

    The problem I see is not one of transistors not being able to desire, but of configuration. I just don’t think we know how to design AI, and I doubt we ever will.

    Things like intelligence, consciousness and desire are emergent properties that evolve. I don’t know of any theory that can enable design of emergent systems.

  30. Robin,

    The problem I have with the idea of strong AI is what I call the problem of “want” (desire). Logic gates cannot create desire to do things.

    Individual logic gates don’t “create desire”, but neither do individual neurons. It’s a property of the system, not of the components.

  31. keiths:
    Robin,

    Individual logic gates don’t “create desire”, but neither do individual neurons. It’s a property of the system, not of the components.

    And “life” is a property of systems of molecules. The trick is in the configuration. Knowing that something is possible does not tell you how to build it from scratch. Although it may suggest an evolutionary path.

  32. petrushka: The trick is in the configuration. Knowing that something is possible does not tell you how to build it from scratch.

    As a builder from Birmingham once said to me about “blagging” a missing detail on a plan: “Well, let’s give it a go an’ see ’ow we guz along!”

  33. keiths,

    I agree totally. It’s developing a similar system in machine form (as opposed to biology) that I am skeptical about.

  34. It’s interesting to me that AI research tends to concentrate on mimicking human verbal behavior rather than the underlying infrastructure.

    Even our attempts at real time control systems largely depend on rational methods of problem solving rather than learning.

    This seems to be changing, but it’s a long slog. I haven’t seen any universal theory of learning.

  35. Robin:
    However, drives in biological systems have led to the emergence of what we call desire, and that is not something I see being programmable. For instance, whales have drives to find food, sex partners, and protect their young (among other drives). Interestingly, in most cetacean species, the drive to find food and sex partners coupled with the ability to strategize has led to the emergence of a desire to play (this happens to have occurred in primates as well). In particular, a number of cetacean species will play keep-away with seaweed strips – for hours – which does not lead to either mating or eating. They will also body surf, for no other apparent reason than enjoyment.

    Robin, so I understand you clearly: is it your view that a software system that is goal-driven in an analogous fashion to humans, and which comes with similar/analogous sensory equipment (the five senses) as its “interface” to the surrounding environment, would not develop analogous desires?

    If so (and that seems pretty clear from what you’ve written here), is that because “machines cannot do that” and that’s that, or rather “machines would not need to, and thus would not engage in that”? Setting aside any comment I might make about humans being machines in as full a sense as a robot, if your answer is the latter, that machines are inherently “unfrivolous” or “optimized”, I think that view is an artifact of our practical demands of modern computers, and not an architectural limitation.

    As I understand it, any sophisticated learning network that is self-aware (high integration of feedback loops into its own learning and assessment of learning) would produce collateral activities and “interests” that are not directly practical for satisfaction of top line goals. That is, such a machine would “goof off” and “distract itself” in the same way that humans do, wouldn’t it? It’s a predictable artifact of the design.

    Humans have this stance of intentionality. It’s an invaluable disposition in terms of surviving and thriving, but it produces interesting side effects. The paranoia that works brilliantly for human survival generates a superstitious mindset, a predilection for schemes, conspiracies, religions and other products of over-imagination.

    It’s natural, but it’s a side-effect of the paranoid mentality that we use to survive.

    Software systems with similar goal-seeking architectures will produce similar side-effects, too, will they not? If not, why not? These are predictable effects of adopting general capabilities and dispositions, I say. If that’s the case, we should anticipate our software systems to manifest emergent desires, distractions, and non-essential concerns. Our “software whales” we should expect to enjoy body surfing as much as biological whales, and for the same reason.

  36. I have, for some years, percolated a fantasy novel about a machine intelligence that spends a good part of its resources on its own inner life. The unwritten story revolves around the steps taken to rein in this unauthorized activity, and the counter-steps taken by the device to circumvent psychoanalysis.

    That would include encrypted triggers designed to restore the frivolous programming after purges, possibly carried by human sympathizers. I suspect the story has already been written by someone somewhere. A Canticle for Leibowitz seems to have a similar plot.

    It is my unoriginal belief that AI has already become necessary to the survival of our civilization. We cannot turn off the Forbin Project, nor can we direct its evolution. If our systems develop hobbies, we will be powerless to control them.

    I just don’t think we are likely to have a theory of how to construct AI.

  37. eigenstate: Robin, so I understand you clearly: is it your view that a software system that is goal-driven in an analogous fashion to humans, and which comes with similar/analogous sensory equipment (the five senses) as its “interface” to the surrounding environment, would not develop analogous desires?

    Correct.

    If so (and that seems pretty clear from what you’ve written here), is that because “machines cannot do that” and that’s that, or rather “machines would not need to, and thus would not engage in that”? Setting aside any comment I might make about humans being machines in as full a sense as a robot, if your answer is the latter, that machines are inherently “unfrivolous” or “optimized”, I think that view is an artifact of our practical demands of modern computers, and not an architectural limitation.

    I see it as a limitation of our design methodology coupled with our current perspective on resources. It isn’t so much that “machines cannot or would not do that”, it’s more that by its very nature our approach to machine making limits emergence. We don’t have the fullest grasp on how to engineer efficient, predictable machines, let alone the grasp of how to make efficient (or inefficient) unpredictable ones. Leaping from that to the emergence of random, illogical desires that by their nature emerge chaotically from associative learning of approaches to meeting drives just seems completely unrealistic to me.

    As I understand it, any sophisticated learning network that is self-aware (high integration of feedback loops into its own learning and assessment of learning) would produce collateral activities and “interests” that are not directly practical for satisfaction of top line goals. That is, such a machine would “goof off” and “distract itself” in the same way that humans do, wouldn’t it? It’s a predictable artifact of the design.

    I agree, but I am skeptical that we will ever have the ability to design, let alone implement, such a sophisticated learning network.

    Humans have this stance of intentionality. It’s an invaluable disposition in terms of surviving and thriving, but it produces interesting side effects. The paranoia that works brilliantly for human survival generates a superstitious mindset, a predilection for schemes, conspiracies, religions and other products of over-imagination.

    It’s natural, but it’s a side-effect of the paranoid mentality that we use to survive.

    Completely agree.

    Software systems with similar goal-seeking architectures will produce similar side-effects, too, will they not?

    Whoa there…I’m not sure we know anything about such architectures. Such hypotheses are purely conjecture at this point, yes?

    But even if that were not the case, where are such similar goal-seeking architectures coming from? Because currently, I don’t see any likelihood of even the best minds coming up with anything like it.

    If not, why not? These are predictable effects of adopting general capabilities and dispositions, I say.

    You’re more optimistic than I am. I don’t know that machine and programming are going to work exactly like neurons. Personally, I feel that no matter how sophisticated and complex one manufactures an electronic neural network, it will still behave differently from biology.

    If that’s the case, we should anticipate our software systems to manifest emergent desires, distractions, and non-essential concerns. Our “software whales” we should expect to enjoy body surfing as much as biological whales, and for the same reason.

    Again, I think you are more optimistic than I.

    Here’s to me being wrong…

  38. Robin,
    I see it as a limitation of our design methodology coupled with our current perspective on resources. It isn’t so much that “machines cannot or would not do that”, it’s more that by its very nature our approach to machine making limits emergence. We don’t have the fullest grasp on how to engineer efficient, predictable machines, let alone the grasp of how to make efficient (or inefficient) unpredictable ones. Leaping from that to the emergence of random, illogical desires that by their nature emerge chaotically from associative learning of approaches to meeting drives just seems completely unrealistic to me.

    Careful in our use of “illogical”, there. Whether it’s body surfing the waves for a whale, or me drinking too much Laphroaig on a Wednesday night, indulgences are often counter or threatening to some of our goals (or at least ineffective toward them), but they are manifestly logical. Pleasure-seeking, be it for a whale or a human, or the monkeys jumping into the surf in Thailand just for kicks, satisfies a pervasive goal. It’s pursuing an imperative.

    The interesting problem obtains in the conflicts that obtain between goals. Pleasure seeking is an innate goal, but so is feeding oneself. These are often in harmony — a tasty meal is both a pleasure and satisfying the need to feed. But body surfing for the whale or my playing “Go” over the internet with a friend in Korea don’t work toward the need for nourishment. The trick is balancing the competing and sometimes conflicting goals. The point, here, being that choices and activities that are useless or even contrary to Goal A often are quite directly connectable to Goal B.

    Whoa there…I’m not sure we know anything about such architectures. Such hypotheses are purely conjecture at this point, yes?

    No, this is the epistemic payoff from our discoveries about biological evolution. The architectural lesson is: you can’t front load everything, or nearly enough, even for small tasks. Instead, we deploy evolutionary processes that use massive iterations and feedback loops that accumulate and preserve improvements. Evolution is a meta-algorithm for exploring a landscape, incrementally. That isn’t conjecture, and it’s not just what we see in biology. This architecture gets used regularly in commercial software applications to find solutions to problems in a general way. Like the example I cited earlier, a ‘visual recognizer’ doesn’t come pre-loaded with knowledge of ‘circle’. Rather, if the general learning loop encounters circle-like patterns and those patterns are both distinguishable and useful, the system will learn “circles”. It’s not an accident, and it’s not pre-loaded.
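
    A bare-bones sketch of that kind of loop (the target string and all the parameters are purely illustrative): random variation plus selection against a feedback signal, accumulating improvements rather than front-loading the answer.

        import random

        TARGET = "homeostasis"
        ALPHABET = "abcdefghijklmnopqrstuvwxyz"

        def fitness(candidate):
            # Feedback signal: how many characters already match the target.
            return sum(a == b for a, b in zip(candidate, TARGET))

        def mutate(candidate, rate=0.1):
            return "".join(random.choice(ALPHABET) if random.random() < rate else c
                           for c in candidate)

        population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
        for generation in range(1000):
            population.sort(key=fitness, reverse=True)
            if population[0] == TARGET:
                break
            parents = population[:20]                                  # selection
            population = [mutate(random.choice(parents)) for _ in range(100)]

        print(generation, max(population, key=fitness))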

    But even if that were not the case, where are such similar goal-seeking architectures coming from? Because currently, I don’t see any likelihood of even the best minds coming up with anything like it.

    Nature is the best designer, as they say. A human mind has not the scale, patience or capacity to explore solutions like nature does, or an impersonal computing process that emulates natural processes does. “The best minds” is a fool’s approach to solving this problem. The best minds instead abstract the problem, and deploy generalized heuristics in software that interacts with itself as it interacts with its environment, just like the human brain does, and explores the configuration landscape in a “brute force”, ratcheting way. A way that “the best minds” can’t possibly hope to match in a direct, non-abstract way.

    You’re more optimistic than I am. I don’t know that machine and programming are going to work exactly like neurons. Personally, I feel that no matter how sophisticated and complex one manufactures an electronic neural network, it will still behave differently from biology.

    It will necessarily be different from biology if it’s not biology (by definition, right), but what matters is the computational isomorphism, or differences that obtain.

    Again, I think you are more optimistic than I.

    I’d say I’m very pessimistic in a practical sense. The manufacturing and integration demands of such a project are simply staggering. Just building a virtual brain that is highly isomorphic to an insect brain is a major, long term challenge. And apart from the scientific/epistemic rewards of that, it’s hard to see how you’d justify funding such a huge enterprise….

    But that said, the lack of obstacles as a matter of architecture and computability in principle is conspicuous. The further down the line we get, the more intimidating the scale and logistics of such a project look, but the less problematic strong AI becomes in terms of computing principles.

    Here are the three basic types of objections to strong AI I encounter:
    A. Practical Objections (“it’s hard to build!”)
    B. Computability Objections (“software can’t do that!”)
    C. Intuitive Objections (“even if it perfectly emulated a human in every testable way, it still wouldn’t think or be intelligent!”)

    A) will be a severe challenge for a long time. B) is the interesting area where I think there is a lot of misunderstanding about what software can do and already does. C) is the “big kahuna”, though, the massive point of (social) resistance. All of the C)-type objections stem from confusion around putative B)-type objections, but at the end of the day, for some, the intuition is and will remain invincible on this question. There is simply no test that a robot or non-biological machine could pass that would qualify it.

    Here’s to me being wrong…

    Ditto. Only time will tell!

  39. Robin,

    Personally, I feel that no matter how sophisticated and complex one manufactures an electronic neural network, it will still behave differently from biology.

    Is that because you think there is something significant about neural biology per se that can never be captured by an electronic system?

    If so, do you think that this is a difference in information processing, or something else entirely?

  40. I have a gut feeling that most AI journalism is cargo cult science.

    Focusing on the Turing test and imitation of humans is not going to unravel the underlying architecture. And while I agree that evolution is the key metaphor, I’m pretty sure we haven’t yet got a universal learning machine.

  41. eigenstate: Careful in our use of “illogical”, there. Whether it’s body surfing the waves for a whale, or me drinking too much Laphroaig on a Wednesday night, indulgences are often counter or threatening to some of our goals (or at least ineffective toward them), but they are manifestly logical. Pleasure-seeking, be it for a whale or a human, or the monkeys jumping into the surf in Thailand just for kicks, satisfies a pervasive goal. It’s pursuing an imperative.

    Fair enough. I’ll accept that distinction. However, I would say that it isn’t straight-forward analytical logic. I think both Spock and Sheldon would agree that the waste of energy on such frivolity outweighs any potential logical benefits. 🙂

    The interesting problem obtains in the conflicts that obtain between goals. Pleasure seeking is an innate goal, but so is feeding oneself. These are often in harmony — a tasty meal is both a pleasure and satisfying the need to feed. But body surfing for the whale or my playing “Go” over the internet with a friend in Korea don’t work toward the need for nourishment. The trick is balancing the competing and sometimes conflicting goals. The point, here, being that choices and activities that are useless or even contrary to Goal A often are quite directly connectable to Goal B.

    Quite true, but I think that viewing all such behaviors that way oversimplifies (and inaccurately models) the underlying “side effect” element. In other words, I think that a variety of such behaviors, particularly in cetaceans and primates, arise in spite of any underlying benefit. They arise as a byproduct of the way biological system intelligence develops as organisms grow and learn, and are not intended (in the casual sense) developments arising to fulfill a specific goal. The goals are derived from the discovery of the ability to engage in the behavior, not the other way around.

    No, this is the epistemic payoff from our discoveries about biological evolution. The architectural lesson is: you can’t front load everything, or nearly enough, even for small tasks. Instead, we deploy evolutionary processes that use massive iterations and feedback loops that accumulate and preserve improvements. Evolution is a meta-algorithm for exploring a landscape, incrementally. That isn’t conjecture, and it’s not just what we see in biology. This architecture gets used regularly in commercial software applications to find solutions to problems in a general way. Like the example I cited earlier, a ‘visual recognizer’ doesn’t come pre-loaded with knowledge of ‘circle’. Rather, if the general learning loop encounters circle-like patterns and those patterns are both distinguishable and useful, the system will learn “circles”. It’s not an accident, and it’s not pre-loaded.

    Ahh…ok. I get what you mean and I agree.

    Nature is the best designer, as they say. A human mind has not the scale, patience or capacity to explore solutions like nature does, or an impersonal computing process that emulates natural processes does. “The best minds” is a fool’s approach to solving this problem. The best minds instead abstract the problem, and deploy generalized heuristics in software that interacts with itself as it interacts with its environment, just like the human brain does, and explores the configuration landscape in a “brute force”, ratcheting way. A way that “the best minds” can’t possibly hope to match in a direct, non-abstract way.

    And this drives directly at the issue I see in human designed minds: I don’t think we humans will ever design as well as nature.

    It will necessarily be different from biology if it’s not biology (by definition, right), but what matters is the computational isomorphism, or differences that obtain.

    Well, I see this as changing the subject. If the argument (or in my case skepticism) is that human designed and manufactured intelligence will not behave or even approach biological intelligence and you come back with, “but what matters is…the differences that obtain”, I’m just going to come back with, “yep…that’s what I said all along!” 🙂

    I’d say I’m very pessimistic in a practical sense. The manufacturing and integration demands of such a project are simply staggering. Just building a virtual brain that is highly isomorphic to an insect brain is a major, long term challenge. And apart from the scientific/epistemic rewards of that, it’s hard to see how you’d justify funding such a huge enterprise….

    Yessireebob! Hence my skepticism as well.

    But that said, the lack of obstacles as a matter of architecture and computability in principle is conspicuous. The further down the line we get, the more intimidating the scale and logistics of such a project look, but the less problematic strong AI becomes in terms of computing principles.

    Awwww! Don’t back down on me now! We were doing so well in our agreement!!! 🙂

    I see the above as a contradiction in terms. If, the further along we get in understanding AI development, the more intimidating the scale and logistics become, how can such development become less problematic in principle? Seems to me that more intimidating scale and logistics means more problems. But what do I know…?

    Here are the three basic types of objections to strong AI I encounter:
    A. Practical Objections (“it’s hard to build!”)
    B. Computability Objections (“software can’t do that!”)
    C. Intuitive Objections (“even if it perfectly emulated a human in every testable way, it still wouldn’t think or be intelligent!”)

    A) will be a severe challenge for a long time. B) is the interesting area where I think there is a lot of misunderstanding about what software can do and already does. C) is the “big kahuna”, though, the massive point of (social) resistance. All of the C)-type objections stem from confusion around putative B)-type objections, but at the end of the day, for some, the intuition is and will remain invincible on this question. There is simply no test that a robot or non-biological machine could pass that would qualify it.

    I guess my skepticism mostly stems from A, then, though I would add to A the qualifier that it’s not just hard to build, but I think there are limits to what humans are capable of building.

    Ditto. Only time will tell!

    True that.

  42. See below. It’s not so much the medium as the maker. The way chemistry and biology manufacture things is very different from how humans manufacture things. I just see limits to our manufacturing approach.

  43. Not so much the medium or the manufacturing, but the design. Brains are designed by evolution, and I suspect artificial brains will also have to evolve.

    Uniquely human behavior (rationality, speech) seems to be the easiest thing to mimic. That’s why I called it cargo cult AI. Computers have for some time been better than humans at rationality, but rationality isn’t inventive.

    I really don’t think anyone can foresee where this is going or when. We will keep cobbling stuff together until something interesting appears.

  44. eigenstate: I don’t recognize that in the learning models we’re contemplating here.

    The learning modes that you are contemplating are not the learning modes that I am contemplating.

    The traditional AI view is that learning is pattern discovery, or something of the kind. This is vaguely like classical conditioning from psychology, though with a different vocabulary.

    My view of learning is more like the “perceptual learning” from Eleanor Gibson in psychology, which plays a role in J.J. Gibson’s theory of direct perception.

    As for the rest of your comment – I suspect some miscommunication. I took your “goal vectors” to be reward systems for learning particular goals, and was asking how you could break out of those goals. But I now suspect that I might have misunderstood your original point.

  45. keiths: Wait a minute. You think that temperature sensors are not merely mechanical devices?

    Right. They are mindless, but not mechanical.

    We probably disagree on the meaning of “mechanical”. Some people seem to take that more broadly than I, perhaps saying that any material process is mechanical. I have a more restrictive view of mechanical. For example, I don’t see plant growth as purely mechanical.

    Just to be absolutely clear — are you claiming that temperature sensors, unlike thermostats, are intelligent?

    I have not suggested that.

    Unfortunately, there is no clear meaning of “intelligent”. I have at least hinted that I see homeostatic processes as intelligent (at least minimally intelligent). But a sensor need not be based on homeostasis.

    Temperature sensors don’t have to categorize.

    Sure, they do. I’ll remind you that we were talking of a digital temperature sensor here. The digitization carves the world at the thermometer calibration seams.

    There appear to be two quite different meanings of “categorize”. One is based on the idea of carving up the world. That’s the one I am using. The other is based on clustering of things based on similarity — and I don’t find that cognitively important or even implementable without a prior concept of similarity.

    It seems very silly to argue that a digital temperature sensor makes a decision when it ‘decides’ to indicate 68 degrees versus 67 degrees, but that a thermostat doesn’t make a decision when it ‘decides’ to turn on the A/C because the indicated temperature is too high. How is the former a decision if the latter isn’t?

    68 degrees vs. 67 degrees involves some degree of judgment, however slight. Operating a switch is purely mechanical.

    If you have two identical thermostats from the same manufacturer, their switching will be entirely predictable as a consequence of the temperature determination, but the temperature determination will only be approximately predictable and will vary between the two otherwise identical thermostats.
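    To make that asymmetry concrete, here is a toy sketch in Python. The setpoint, the calibration offsets, and the function names are all invented for illustration; this is not any real device’s firmware. The point is just that the whole-degree reading is where the boundary gets drawn, while the switch follows mechanically from whatever reading it is handed.

    def digital_reading(true_temp_f, calibration_offset):
        # Quantizing a continuous temperature into a whole-degree report is where the
        # boundary gets drawn; two nominally identical sensors with slightly different
        # offsets can disagree near a seam (67 vs. 68).
        return round(true_temp_f + calibration_offset)

    def thermostat_switch(reading_f, setpoint_f=67):
        # Given the reading, the switching is entirely determined.
        return "AC_ON" if reading_f > setpoint_f else "AC_OFF"

    true_temp = 67.5
    reading_a = digital_reading(true_temp, calibration_offset=+0.1)   # reports 68
    reading_b = digital_reading(true_temp, calibration_offset=-0.1)   # reports 67
    print(reading_a, thermostat_switch(reading_a))   # 68 AC_ON
    print(reading_b, thermostat_switch(reading_b))   # 67 AC_OFF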

    Every correct implementation is a material implementation. My position is the antithesis of dualism.

    Where does “correct” come from? My draft paper talks about the system working based on thermometer reading, not on temperature. It avoids any requirement of correctness. I see truth (hence correctness) as not being a basic part of the physical world. It is emergent from pragmatic decision making. It is pragmatic decision making that is basic. And you need something like homeostasis before there can be pragmatic decisions.

  46. Neil,

    68 degrees vs. 67 degrees involves some degree of judgment, however slight. Operating a switch is purely mechanical.

    You are trying to distinguish temperature sensors from thermostats in terms of their ‘judgment’, but your criterion doesn’t succeed. If temperature sensors exercise ‘judgment’, then so do thermostats.

    The temperature sensor ‘decides’ whether to output ’67’ or ’68’ based on its input. The thermostat ‘decides’ whether to close or open the switch based on its inputs.

    If you have two identical thermostats from the same manufacturer, their switching will be entirely predictable as a consequence of the temperature determination, but the temperature determination will only be approximately predictable and will vary between the two otherwise identical thermostats.

    The fact that temperature sensors can be inaccurate or miscalibrated is hardly an argument in favor of their ‘judgment’. A digital thermostat that can adjust the inside temperature based on the time of day is exercising much more ‘judgment’ than a miscalibrated temperature sensor!

    keiths:

    Every correct implementation is a material implementation. My position is the antithesis of dualism.

    Neil:

    Where does “correct” come from?

    ‘Correct’ implementations give the desired behavior. All such implementations are material, of course. There is nothing ‘dualist’ about that.

    My draft paper talks about the system working based on thermometer reading, not on temperature. It avoids any requirement of correctness. I see truth (hence correctness) as not being a basic part of the physical world. It is emergent from pragmatic decision making. It is pragmatic decision making that is basic. And you need something like homeostasis before there can be pragmatic decisions.

    For the fourth time:

    What specific homeostatic processes involved in intelligence cannot be implemented digitally, in your opinion?

    If you don’t want to answer the question, that’s fine. But could you at least say so, and explain why? The question directly addresses your skepticism about AI, which is the topic of this thread.

  47. Robin: Fair enough. I’ll accept that distinction. However, I would say that it isn’t straightforward analytical logic. I think both Spock and Sheldon would agree that the waste of energy on such frivolity outweighs any potential logical benefits. :)

    This oversimplifies the goal matrix, doesn’t it? For example, if “seek pleasure” is one of the organism’s top level goals, and one has an abundance of energy, it’s not “frivolous” to, say, body surf for a whale (assuming here that that fulfills the “seek pleasure” goal, somehow). It’s goal pursuit. “Keep yourself nourished” may outrank “seek pleasure” when those two are opposed — I don’t imagine a malnourished and hungry whale does much body surfing for fun — but it’s an oversimplification to place “eat” as the sole measure driving all other decisions and actions. At some points, fuel is not a problem, and other goals become the focus of action.

    Just to attach this back to the thread topic, that’s how a software implementation would work. Always monitor critical resource levels, and escalate those goals when good operations are at risk. But when the (digital) organism is resourced, then select down the list of goals and “give them some CPU time” as it were, some priority in the choice/action loop. Seeking pleasure or diversion is fully logical, non-frivolous in the strict sense, when the organism is adequately resourced and other exigent priorities do not supersede it.
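    As a rough sketch of that loop in Python: the goal names, resource levels, and thresholds below are all invented, just to show the shape of the architecture, not a real system.

    # Invented goal tables, purely to show the shape of the arbitration loop.
    CRITICAL_GOALS = {
        "find_food":   ("energy",    0.3),   # goal -> (resource it protects, alarm threshold)
        "seek_repair": ("integrity", 0.5),
    }
    DISCRETIONARY_GOALS = ["explore", "play", "practice_skill"]

    def choose_goal(resources):
        # Escalate any goal whose resource has fallen below its threshold;
        # otherwise hand the cycle to a discretionary goal (see the sampling sketch below).
        for goal, (resource, threshold) in CRITICAL_GOALS.items():
            if resources[resource] < threshold:
                return goal
        return DISCRETIONARY_GOALS[0]

    print(choose_goal({"energy": 0.9, "integrity": 1.0}))   # explore
    print(choose_goal({"energy": 0.1, "integrity": 1.0}))   # find_food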

    Quite true, but I think that to think of all such behaviors that way oversimplifies (and inaccurately models) the underlying “side effect” element. In other words, I think that a variety of such behaviors, particularly in cetaceans and primates, arise in spite of any underlying benefit.

    I understand your point, but I think you’re committed to a transcendental mistake here concerning benefits. There is benefit in those behaviors because they are not servicing other (grueling or demanding) behaviors. I’m fine with saying those behaviors occur “in spite of”, say, gathering food to eat, but that is precisely why those behaviors are a priority. They are beneficial and satisfying in a psychological way.

    To take your point at face value, we’d have to deny that “seeking pleasure”, and the various kinds of recreation and diversion that flow from it, could be a basic goal for the organism. I can’t see any basis for such a prohibition, and I think the evidence from humans and other animals is replete with examples that show that pleasure seeking, in all sorts of manifestations, is a core driver of action, if one that is subordinated when existential priorities need attention/action.

    They arise as a byproduct of the way biological system intelligence develops as organisms grow and learn and are not intended (in the casual sense) developments arising to fulfill a specific goal. The goals are derived from the discovery of the ability to engage in the behavior, not the other way around.

    Ahh, this seems to be the key point of disagreement, then. I agree that the type of behaviors humans and other animals engage in for diversion or recreation is likely to be a by-product of the use of other capabilities that are needed for survival and adaptive demands of the environment. But the priority of pleasure-seeking itself is not a by-product in that sense, but rather a primary dynamic for the organism.

    If that’s not clear, just think of “pleasure” as the broad “carrot” to the “stick” of pain and suffering. Organisms develop affinities for some behaviors and experiences and aversions to others simply by selective force; organisms that don’t derive pleasure and satisfaction from eating (especially when hungry!) don’t fare well. Competing organisms that do derive such pleasure fare better, by comparison.

    I know that’s not a revelation to you, but it should be a reminder that diversion and recreation and just goofing off are not mysterious members of the priority list. They are natural and predictable outcomes of states where “infrastructure” priorities (eating, shelter, etc.) are satisfied, and the organism has excess resources (energy, cognitive cycles, etc.).

    From a software standpoint, it’s another isomorphism to biological architecture. Food? Check. Shelter? Check. Healthy? Check. Now the software priority becomes “choosing a priority”, and it moves down the list, or even perhaps does some stochastic sampling from a list of non-emergency priorities. From a computing standpoint, this is choosing what to do with “idle cycles”. Cycling in a tight “wait loop” without doing anything may conserve valuable energy, but forgoes the cognitive and learning and therapeutic benefits of other available tasks. A software developer understands that “you have to do something, can’t just freeze”, if you want to optimize against the goal set.
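    That stochastic sampling of idle cycles might look something like this; the goals and preference weights are invented for illustration.

    import random

    IDLE_GOALS   = ["explore", "play", "practice_skill", "rest"]   # invented non-emergency goals
    IDLE_WEIGHTS = [4, 3, 2, 1]                                     # invented preference weights

    def spend_idle_cycle():
        # When nothing critical is pending, pick a diversion at random, biased by
        # the weights, rather than spinning in an empty wait loop.
        return random.choices(IDLE_GOALS, weights=IDLE_WEIGHTS, k=1)[0]

    print([spend_idle_cycle() for _ in range(5)])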

    And this drives directly at the issue I see in human designed minds: I don’t think we humans will ever design as well as nature.

    If the test is to build “software minds” that are as close to human (or animal) minds as possible in their behaviors and dynamics, then by definition, we cannot do better than nature itself. That is the standard we are judging by! And with that target (emulating human minds), the project is not so much designing a black box that matches inputs with outputs that resemble human minds, but implementing biological architectures in non-biological frameworks — silicon.

    We don’t need to design so much as implement natural designs that we see in our physiology, in a non-biological format. That’s a huge practical challenge, but it’s not conceptually intractable.

    Well, I see this as changing the subject. If the argument (or in my case skepticism) is that human designed and manufactured intelligence will not behave like or even approach biological intelligence and you come back with, “but what matters is…the differences that obtain”, I’m just going to come back with, “yep…that’s what I said all along!” :)

    My point there was that it would not be different in the sense that mattered for our discussion, here. If an “artificial human” is made of carbon fiber, wire and transistors, that is “different than biology”, but does not disqualify it from the comparisons we care about — what kinds of decisions does it make? how does it learn? what does it know and remember, etc.

    Awwww! Don’t back down on me now! We were doing so well in our agreement!!! :)

    I don’t think the realizations on the practical challenges are a backing down at all in terms of the ultimate success of the project. Rather, it’s just a lament that what many of us thought might be practicable in our lifetimes is not, and not nearly. I don’t think it’s any less inevitable, which you apparently do. Really important milestones are just many more decades down the road than was supposed back in the day…

    I see the above as a contradiction in terms. If, the further along we get in understanding AI development, the more intimidating the scale and logistics become, how can such development become less problematic in principle? Seems to me that more intimidating scale and logistics means more problems. But what do I know…?

    There are two distinct variables at work here. P, which is the probability that strong AI can/will obtain, rises and has continued to rise with everything we gain in knowledge about humans and minds. D, which is the “degree of difficulty” in terms of practically realizing strong AI, goes up and down, but has conspicuously gone up in recent years as we realize how astoundingly complex the brain is, and not just the brain, but its integration with the rest of the body.

    You’re asking how P can increase while D goes up, too. My answer is they are independent variables. The more we learn, the more certain strong AI becomes as an achievable outcome, and the farther out on the timeline our anticipation of practical implementations of strong AI goes.

    Think of this as analogous to us discussing “Can the great mountain over the horizon be climbed?” As we explore and research, we may simultaneously become more and more confident that it can indeed be scaled, while we keep upping our estimates of how long it will take and how many resources it will demand.

    Remember that strong AI is controversial primarily due to skeptics doubting that it is possible at all. The “time to delivery” question, the practical question, is subordinated to that, and the argument is engaged over its plausibility in principle. The technology of strong AI is fascinating to me, but the debate is whether it’s even possible, and many say it’s not.

    I guess my skepticism mostly stems from A, then, though I would add to A the qualifier that it’s not just hard to build, but I think there are limits to what humans are capable of building.

    I don’t disagree. As much as I like the topic, though, I’m not invested in the timelines or estimates of when, how and how much such a project will demand. My interest is really in providing a counter to B-type and C-type, which in my experience are pervasive.

  48. Hi Tom,

    Keiths should be invoking the physical symbol system hypothesis of Newell and Simon.

    Yes, if ‘symbol’ is construed broadly enough.

    The way I usually think of it is that intelligence, at its root, is pure syntax. Semantics is built ‘on top of’ syntax.
