In another thread, Patrick asked:
If it’s on topic for this blog, I’d be interested in an OP from you discussing why you think strong AI is unlikely.
I’ve now written a post on that to my own blog. Here I will summarize, and perhaps expand a little on what I see as the main issues.
As you will see from the post at my blog, I don’t have a problem with the idea that we could create an artificial person. I see that as possible, at least in principle, although it will likely turn out to be very difficult. My skepticism about AI comes from seeing computation as too limited.
I see two problems for AI. The first is a problem of directionality or motivation or purpose, while the second is a problem with data.
Interestingly, Patrick’s message, where he asked for this thread, contained a picture of Spock from Star Trek. As a Star Trek character, Spock was known for being very logical and not at all emotional. That’s what I think you get with computation. However, as I see it, something like emotions is actually needed. Emotions are what would give an artificial person some sense of direction.
To illustrate, consider the problem of learning. One method that works quite well is what we call “trial and error”. In machine learning systems, this is called “reinforcement learning”. And it typically involves having some sort of reward system that can be used to decide whether a trial-and-error step is moving in the right direction.
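To make the idea concrete, here is a minimal sketch of reinforcement learning on a hypothetical multi-armed bandit: the learner sometimes tries arms at random (the "error" part) and otherwise exploits whichever arm its reward estimates currently favor (the "trial" part steered by the reward signal). The setup, arm payoffs, and parameter values are all invented for illustration.

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy trial and error on a hypothetical bandit.

    Each arm pays a noisy reward around its true mean. The reward
    signal is what tells the learner whether a trial moved it in
    the right direction.
    """
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n   # running estimate of each arm's reward
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < epsilon:      # explore: a random trial
            arm = rng.randrange(n)
        else:                           # exploit: best estimate so far
            arm = max(range(n), key=lambda a: estimates[a])
        reward = true_means[arm] + rng.gauss(0, 0.1)  # noisy reward
        counts[arm] += 1
        # incremental mean: nudge the estimate toward the observed reward
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = run_bandit([0.2, 0.5, 0.9])
best = max(range(3), key=lambda a: estimates[a])
print(best)  # the highest-paying arm ends up preferred
```

The point of the sketch is how much work the reward system is doing: the learning rule itself is trivial, and everything hinges on having a reward signal that actually measures progress.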
Looking at us humans, we have a number of such systems: pain avoidance, food seeking, pleasure seeking, curiosity, and emotions. In the machine learning lab, special-purpose reward systems can be set up for particular learning tasks. But an artificial person would need something more general in order to support a general learning ability. And I doubt that can be done with computation alone.
Here’s a question that I wonder about. Is a simple motivational system (or reward system) sufficient? Or do we need a multi-dimensional reward system if the artificial person is to have a multi-dimensional learning ability? I am inclined to think that we need a multi-dimensional reward system, but that’s mostly a guess.
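A rough way to picture the distinction is to give each drive its own reward dimension and then collapse the vector into one number with weights. Everything here is invented for illustration (the drives, the state fields, the weight values); the sketch only shows that different weightings of the same vector can value the same situation quite differently, which a single fixed scalar reward cannot do.

```python
def vector_reward(state):
    """One reward component per hypothetical drive."""
    return {
        "pain_avoidance": -state["damage"],
        "food_seeking":   state["food"],
        "curiosity":      state["novelty"],
    }

def scalarize(rewards, weights):
    """Collapse the reward vector to one number; weights encode priorities."""
    return sum(weights[k] * v for k, v in rewards.items())

state = {"damage": 0.1, "food": 0.6, "novelty": 0.8}
r = vector_reward(state)
# a hungry agent weights food highly; a sated one weights curiosity
hungry = scalarize(r, {"pain_avoidance": 1.0, "food_seeking": 2.0, "curiosity": 0.5})
sated  = scalarize(r, {"pain_avoidance": 1.0, "food_seeking": 0.2, "curiosity": 2.0})
print(hungry, sated)  # same state, different values under different drives
```

If the weights themselves have to shift with the organism's current needs, then the reward system is doing something richer than any one fixed scalar signal, which is roughly why I lean toward the multi-dimensional answer.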
The data problem
A computer works with data. But the more I study the problems of learning and knowledge, the more I become persuaded that there isn’t any data to compute with. AI folk expect input sensors to supply data. But it looks to me as if that would be pretty much meaningless noise. In order to have meaningful data, the artificial person would have to find (perhaps invent) ways to get its own data, using those input sensors.
If I am right about that, then computation isn’t that important. The problem is in getting useful data in the first place, rather than in doing computations on data received passively.
A system built of logic gates does not seem to be what is needed. Instead, I have concluded that we need a system built out of homeostatic processes.
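For readers unfamiliar with the term, a homeostatic process is a negative-feedback loop that continually corrects its state back toward a setpoint, in the way a thermostat or body temperature does. Here is a minimal sketch, with the setpoint, gain, and initial value all chosen arbitrarily for illustration:

```python
def homeostat(value, setpoint, gain=0.3, steps=50):
    """Minimal negative-feedback loop.

    Each step measures the error (distance from the setpoint) and
    applies a partial correction, so the state converges on the
    setpoint rather than computing on passively received data.
    """
    history = [value]
    for _ in range(steps):
        error = setpoint - value
        value += gain * error   # correct a fraction of the error
        history.append(value)
    return history

h = homeostat(value=10.0, setpoint=37.0)
print(round(h[-1], 2))  # settles at the setpoint: 37.0
```

The contrast with a logic gate is that nothing here is a discrete true/false computation; the process is defined by what it maintains, not by what it calculates.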