I’m starting a new thread to discuss what I call “the hard problem of intentionality”: what is intentionality, and to what extent can intentionality be reconciled with “naturalism” (however narrowly or loosely construed)?
Here’s my most recent attempt to address these issues:
Consider this passage from Dennett, Consciousness Explained, p. 41: “Dualism, the idea that the brain cannot be a thinking thing so a thinking thing cannot be a brain, is tempting for a variety of reasons, but we must resist temptation . . . Somehow the brain must be the mind”. But a brain cannot be a thinking thing (it is, as Dennett himself remarks, just a syntactic engine). Dualism resides not in the perfectly correct thought that a brain is not a thinking thing, but in postulating some thing immaterial to be the thinking thing that the brain is not, instead of realizing that the thinking thing is the rational animal. Dennett can be comfortable with the thought that the brain must be the mind, in combination with his own awareness that the brain is just a syntactic engine, only because he thinks that in the sense in which the brain is not really a thinking thing, nothing is: the status of possessor of intentional states is conferred by adoption of the intentional stance towards it, and that is no more correct for animals than for brains, or indeed thermostats. But this is a gratuitous addition to the real insight embodied in the invocation of the intentional stance. Rational animals genuinely are “semantic engines”. (“Naturalism in Philosophy of Mind,” 2004)
Elsewhere McDowell has implied that non-rational animals are also semantic engines, and I think this is a view he ought to endorse more forthrightly and boldly than he has. But brains are, of course, syntactic engines.
So it seems quite clear to me that one of the following has to be the case:
(1) neurocomputational processes (‘syntax’) are necessary and sufficient for intentional content (‘semantics’) [Churchland];
(2) intentional content is a convenient fiction for re-describing what can also be described as neurocomputational processes [Dennett] (in which case there really aren’t minds at all; here one could easily push on Dennett’s views to motivate eliminativism);
(3) neurocomputational processes are necessary but not sufficient for intentional content; the brain is merely a syntactic engine, whereas the rational animal is a semantic engine; the rational animal, and not the brain, is the thinking thing; the brain of a rational animal is not the rational animal, since it is a part of the whole and not the whole [McDowell].
I find myself strongly attracted to all three views, actually, but I think that (3) is slightly preferable to (1) and (2). My worry with (1) is that I don’t find Churchland’s response to Searle entirely persuasive (even though I find Searle’s own views completely unhelpful). Is syntax necessary and sufficient for semantics? Searle takes it for granted that this is obviously and intuitively false. In response, Churchland says, “maybe it’s true! we’ll have to see how the cognitive neuroscience turns out — maybe it’s our intuition that’s false!”. Well, sure. But unless I’m missing something really important, we’re not yet at a point in our understanding of the brain where we can understand how semantics emerges from syntax.
My objection to (2) is quite different — I think that the concept of intentionality plays far too central a role in our ordinary self-understanding for us to throw it under the bus as a mere convenient fiction. Of course, our ordinary self-understanding is hardly sacrosanct; we will have to revise it in the future in light of new scientific discoveries, just as we have in the past. But there is a limit to how much revision is conceivable, because if we jettison the very concept of rational agency, we will lose our grip on our ability to understand what science itself is and why it is worth doing. Our ability to do science at all, and to make sense of what we are doing when we do science, presupposes the notion of rational agency, hence intentionality, and abandoning that concept due to modern science would effectively mean that science has shown that we do not know what science is. That would be a fascinating step in the evolution of consciousness, but I’m not sure it’s one I’m prepared to take.
So that leaves (3), or something like it, as the contender: on the one hand, we must retain the mere sanity of holding that we (and other animals) are semantic engines, bearers of intentional content; on the other hand, we accept that our brains are syntactic engines, running parallel neurocomputational processes. This entails that the mind is not the brain after all, but also that rejecting mind-brain identity offers no succor to dualism.
Neil Rickert’s response is here, followed by Petrushka’s here.
What a disreputable bunch of church-burners.
ETA: The comments fortuitously wrapped onto a new page, so now it looks like I’m dissing all of TSZ! 🙂
I highly recommend it. Whether or not I end up agreeing with Dennett on a particular topic, I always find myself thinking more clearly for having read him.
I would drop the “bio” prefix because we’re talking about physical systems in general, not just biological ones. I also prefer “syntactic” to “physical” because it emphasizes that the processes can be described without reference to meanings, a point that many people would overlook if we simply referred to them as “physical processes”.
I see your point, but I’m not sure Neil will.
I see his point. I just don’t agree with it.
“Syntactic” ought to imply solipsistic, while “physical” does not have that implication.
I take (but don’t strictly insist on) the fussy view that computers don’t compute. People compute, and use computers to aid them in that computing. Or, to say it differently, I take the view that computation is very much an intentional activity.
I would prefer to describe robots as doing intricate physical activities, instead of describing them as doing computation. And I do think the brain, as a physical system, is doing intricate physical activities.
For ordinary use of a computer, it is reasonable to say that it does computation. I take that as meaning that we can describe the activities of the computer using our mathematical models of computation. But I think we mislead ourselves when we say that the brain is doing computation. We do not have a mathematical model of computation that fits the brain, and I doubt that we ever will.
Let me reword that a little. We can, as suggested, describe a computer as doing intricate physical activities. What characterizes computation, to the extent that we can say a computer does computation, is that most of those intricate physical activities have no close connection to the external world. A computer is pretty much a solipsistic system. I expect that the intricate physical activities of the brain, by contrast, are very closely connected with what is happening in the external world and in our interactions with the external world.
Why? “Syntactic” processes are purely mechanical and meaning-independent. There’s nothing about “mechanical and meaning-independent” that implies “disconnected from the outside world”. Real-world information can be processed just as mechanically as abstract information.
That’s not only fussy, it’s etymologically suspect. Computers were originally people, and the name stuck when the job was transferred from humans to machines.
That’s especially odd coming from a computer scientist. Turing machines — the very yardstick for what is and isn’t computable — are syntactic machines. They are blind to the meaning of the symbols they are manipulating.
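The point about Turing machines being blind to meaning can be made concrete with a minimal simulator (a sketch of my own; the transition table and function names are illustrative, not from any source in this thread). The machine below rewrites its tape purely by table lookup, with no access to what the symbols denote:

```python
# A minimal Turing machine: a transition table mapping
# (state, symbol) -> (new_symbol, move, new_state).
# The machine is purely syntactic: it matches and rewrites
# symbols without any access to what they mean.

def run_tm(table, tape, state="start", halt="halt", blank="_"):
    tape = list(tape)
    pos = 0
    while state != halt:
        sym = tape[pos] if pos < len(tape) else blank
        if pos >= len(tape):
            tape.append(blank)
        new_sym, move, state = table[(state, sym)]
        tape[pos] = new_sym
        pos += 1 if move == "R" else -1 if move == "L" else 0
    return "".join(tape).rstrip(blank)

# Flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}

print(run_tm(flip, "0110"))  # -> 1001
```

Whether "0110" encodes a number, a truth table, or nothing at all makes no difference to the machine's operation, which is the sense in which it is syntactic.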
“Intricate physical activities” doesn’t have the same ring to it. Neil, when you fight these definitional battles, you come across as someone who isn’t even trying to communicate. If you have some new ideas about cognition, you’re far better off expressing them in standard language rather than trying to force the rest of the world to use Neil-speak.
That’s not true. We have such models, and they are getting better all the time.
No, they are not. A mechanical process involves cogs and wheels, or other physical things. A syntactic process is abstract. It is perhaps, in some sense, mechanistic. I’ve actually called it “pseudo-mechanistic” elsewhere, as I think that is a better term. Interpreting something as computation is deeply teleological. Syntactic operations are operations on platonic entities (if you are a mathematical platonist) or on fictions (if you are a fictionalist).
Information is abstract (platonic or fictional). Our computers act on representations of information. And, sure, that operation on physical representations of information is mechanical. But syntactic operations are, by definition, operations on abstract entities.
But the meaning of “computer” has changed, and we now apply that word to machines rather than to people. However, computation remains abstract operations on abstract entities.
Turing machines are usually defined as abstract machines operating on abstract symbols. I’ll grant that they are syntactic in the sense in which I am using that word. But they are not mechanical. What a TM does is not independent of meaning. Rather, it depends on the very narrowly constrained meaning involved with the use of abstract symbols.
I am trying to break through the barriers to communication that come from the conventional wisdom.
That doesn’t work. Standard language is strongly tied to dualism.
Your idea that “computers don’t compute; people do” simply doesn’t work.
Suppose I set up my computer to do fast Fourier transforms on hundreds of gigabytes of signal data. I key in the command, hit ‘Enter’, and go to bed. After eight hours, I wake up. According to you, no computation took place during those eight hours. Computers can’t compute, after all, and I was asleep.
Yet when I get up and look at my monitor, I see a beautiful and elaborate display of the processed data. It sure looks like some computation happened, but when? Who did it, if computers can’t compute and I was asleep the entire time? Did I “do” the computation myself by hitting ‘Enter’, or by looking at the screen when I woke up?
It makes no sense.
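The overnight scenario is easy to reproduce in miniature (a sketch; numpy stands in for the signal-processing job, and the data are invented for illustration). Nothing a human does intervenes between supplying the input and reading the output:

```python
import numpy as np

# A miniature version of the unattended overnight job:
# transform a signal with no human in the loop between
# launching the command and inspecting the result.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)

spectrum = np.fft.rfft(signal)         # the "computation"
peak_hz = int(np.argmax(np.abs(spectrum)))  # bin index = Hz for this setup

print(peak_hz)  # -> 50: the 50 Hz component dominates
```

With 1024 samples over one second, each rfft bin corresponds to one hertz, so the peak lands on the injected 50 Hz tone regardless of whether anyone is awake to watch.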
The job remained the same. Humans stopped doing it, and computers took over. If humans were computing before the switch, and the job remained the same, then computers were computing after the switch.
You’re not improving communication by idiosyncratically redefining words.
The same thing happened in an earlier thread. You claimed to be expressing a nontraditional idea about cognition, when in reality you were merely expressing a traditional and widely accepted idea using nontraditional language.
It’s easy to redefine words. It’s not so easy to come up with new and useful ideas regarding cognition.
Other nondualists are able to express themselves using standard language. Why not you?
We live in a wonderful time in which technology makes outreach and dialogue with non-specialists easy. That is as true of philosophy as of science. YouTube, blogs, fora… check them out. As in science, some of the most distinguished figures are more accessible to the public than ever before.
Of course, there are problems that arise with this.
Your view of philosophers as elitist is not uncommon. They are, to an extent. Why? Because philosophy is a technical, specialist discipline! The conversation is at an advanced stage.
That doesn’t mean you or I can’t do philosophy. We certainly can. After all, philosophers do a lot of their talking in books.
We can see that the waggle dance is an abstraction, but I’m not sure the bees do. Computers also can focus and interpret, but does that mean they have original intentionality? Chinese room, with different examples.
You’ve given me a great idea for a party.
You have completely misunderstood the point.
The computers are doing only the busy-work aspects of the computation. If computation is the manipulation of abstract symbols, as the Turing model of computation implies, then the problem for “computers took over” is that there are no abstract symbols in the computer.
I did admit that this is a fussy point. But here’s the underlying point: the kind of busy work done by a computer is the wrong kind of busy work for human cognition. That’s my more important objection to computationalism.
People who claim to be non-dualists are able to express themselves in standard language. However, their implicit dualism still shines through very clearly.
I’m not anti-philosophy. I was educated via the English grammar school and university system, chose mainly science subjects and “majored” in biochemistry. I wasn’t dedicated (or good) enough to progress into a career in research, so had a life unconnected (in the main) to my chosen field of education. In all that time, up until beginning to read KN’s comments at Uncommon Descent (first round, using another handle), I had received no formal or informal courses in philosophy and never noticed the lack. That’s fifty-plus years! The omission may be to my detriment, and I’m certainly open to the idea that philosophy can make valid and useful contributions to current human existence. (Though why current human existence deserves to be paramount needs justifying.)
However, I don’t have enough time left to read all philosophy from Socrates on. I do have some Penguin Classics, inherited from an old friend, with translations of Plato and Aristotle, and the bits that I have dipped into make a sort of enclosed sense, but I would hope there is more recent stuff that is more relevant to today. I like the little I’ve read of Richard Rorty.
The (I’m sure, invented) story of Socrates’ lecture on horses’ teeth illustrates my difficulty with some abstruse philosophical writing. Rather than build on, modify, or refute long-irrelevant arguments from the past, it would make more sense to start from observed reality.
I find Piccinini’s thoughts on computation in physical systems to be helpful.
In his SEP article on Physical Computation, Piccinini defines physical computational systems as a kind of mechanism, that is, a system of organized components that perform a function.
The function is “processing vehicles according to rules that are sensitive to certain vehicle properties”. A key point is that the mechanism must not depend on all of the physical properties of the vehicles; only on properties that are relevant to the computational rules. Such properties can be abstracted away from a particular physical vehicle. Any physical vehicles which have sufficient physical degrees of freedom of the appropriate type can be used; hence this physical model makes computation medium independent.
Digital computers process vehicles which can be viewed as strings of discrete states; analog computers process continuous vehicles; quantum computers process qubits.
In the other article of his that I linked earlier, he considers neurons. He claims they are mechanisms that perform computations with the vehicle being the pulse train of synaptic spikes. He also says that this computation is neither digital nor analog. It is not digital because there is no reliable way to divide the pulse train into strings. It is not analog because, for the mechanism of the computation, the pulse trains are not continuous.
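The "vehicle" in question, the pulse train, can be sketched with a textbook leaky integrate-and-fire model (my own illustration with made-up parameters, not taken from Piccinini's paper). Input current charges a membrane potential that leaks toward rest; crossing threshold emits a spike and resets:

```python
# A leaky integrate-and-fire neuron (illustrative parameters).
# The output "vehicle" is the list of spike times: neither a
# clean digital string nor a continuous analog signal.

def lif_spikes(current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Euler step: leak toward rest plus injected current.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(step * dt)  # spike time in seconds
            v = v_reset
    return spikes

# Constant suprathreshold drive produces a regular pulse train.
train = lif_spikes([60.0] * 1000)  # one simulated second
print(len(train))
```

The information-bearing properties here are the spike times, not the full voltage trace, which is the sense in which the vehicle abstracts away from most of the physical detail.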
As best I can make out (which is admittedly pretty limited), the paper KeithS posted earlier as a sample of a computational model of the brain is modelling the pulse trains in a neural network.
In the SEP article, Piccinini discounts a syntactic model of computation. But he is referring to what he says is the philosophers’ usual definition of “syntax”. He says it is a natural-language, linguistics-based definition which assumes some kind of sentence structure that is then modeled by the rules of the syntax. In particular, he says this is not the mathematical concept of formal languages and their syntax; the mathematical approach is not constrained by concepts from linguistic sentences. I was not aware of this distinction, and I don’t know which way Dennett is using the word “syntax” when he talks about the brain as a syntactic engine.
I would say that if one includes “abstract symbols” in the definition of computation, one is following what Piccinini calls the semantic approach to defining physical computation, summarized by some philosophers as “no computation without representation”. He argues against this definition as it seems too restricted by working only for (some schools of thought on) philosophy of mind.
I would say the dance is a representation. Further, since the dancer shows others where the pollen is without actually leading them there, in some sense there must be representations (or equivalent inferential networks for KN!) in all of their brains, which is intentionality as I understand it.
I agree that their ability to form such mental representations or to create new symbols is very limited compared to us. In particular, I agree that they do not see that they are using a representation; I think this requires a representation of a representation, which is a capability which is much more limited in the animal kingdom.
Perhaps the concept of degrees of intentionality which KN mentioned will better define what “limited” intentionality could mean.
I’ve already done enough typing this morning, so rather than get into the CR, I’ll just say I find the arguments against it, as summarized in the SEP, to be convincing, especially the robot reply.
Some people take a view of computation as physical. There is, for example, the “physical symbol system hypothesis” of Newell and Simon. I’m obviously disagreeing with that.
There’s a reason that we say a Turing machine is an abstract machine. As I remind my students, if Turing machines were physical machines, there could be no Halting Problem. All physical machines halt. (I purchased a new computer a few days ago, because one of my physical machines halted).
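The abstractness of the result shows up in the standard diagonal argument, sketched here (the `halts` oracle is hypothetical by design; the whole point is that no such total function can exist):

```python
# Sketch of the diagonal argument for the Halting Problem.
# Suppose a total function halts(f, x) could decide whether
# f(x) halts. Then the following program defeats it:

def make_contrary(halts):
    def contrary(f):
        if halts(f, f):      # if the oracle says f(f) halts...
            while True:      # ...loop forever instead
                pass
        return "halted"      # otherwise halt immediately
    return contrary

# contrary(contrary) halts iff the oracle says it doesn't --
# a contradiction, so no correct total halts() exists.
# Demo with a deliberately wrong oracle that always answers False:
contrary = make_contrary(lambda f, x: False)
print(contrary(contrary))  # -> halted
```

The contradiction concerns the abstract machine, which has unbounded tape and unbounded time; a physical machine that dies after eight years is no counterexample.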
I see that as a wrong description of what the brain does. That’s the underlying reason that I reject computationalism.
Likewise, I see that as not what the brain is doing.
I’m teaching that class (on formal languages and computation) at present. So I guess I am disagreeing with Piccinini. If anything, I tend to question whether natural languages are really syntactic structures.
He agrees that abstract computation is defined in terms of (e.g.) Turing machines, and he discusses that difference in the same article.
His view is that physical computation needs a different definition for that very reason, ie because it is physical, not abstract.
That makes sense, unless you take the position that computation can only be defined for abstract entities. Of course, if you do take that position, then nothing physical is a computer, including brains.
I don’t have a problem with that, if it is carefully defined.
There is still the problem that physical computation is the wrong thing for brains to be doing. So giving a definition for “physical computation” doesn’t actually solve anything important as far as I can tell.
If you replace biochemistry with math/statistics and education with IT project management, then that’s my story as well. Now I “study” philosophy for the intellectual pleasure.
Lacking any formal philosophical training, I find using secondary sources to be much more helpful than trying to work my way through original papers or books, at least until I have some familiarity with the relevant philosophical jargon.
In terms of online resources, I have found searching the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy to be much more helpful than just googling the internet. The IEP tends to be less extensive and more introductory.
MOOC courses on philosophy seem to be rare. There was a very basic introduction to philosophy put on by the University of Edinburgh that I still found useful. They have a course on philosophy of science scheduled for the fall. In addition, Peter Singer’s university course on his book Practical Ethics has been repackaged at Coursera, where it is currently running.
I have found MOOC stuff to be mostly summaries of excerpts of what would be covered in first- or second-year university (at least in North America). If you prefer stronger stuff, you can find videos of whole courses on YouTube. I enjoyed Campbell’s Philosophy of Mind, as much for his dry sense of humor and Scottish brogue as for the course content (which you can also get from Searle). I also worked through his Phil of Language, but it is a tougher slog (though afterwards you will be able to distinguish intentional from intensional, and even see how the former might require the latter).
Assuming we are discussing brains as part of an organism interacting with the world, that seems to me a scientific question, which could not be answered a priori.
That is a nice example of some of the approaches that are used to deal with complex networks that have short-range and long-range interconnections. This model is specific enough and restricted enough to deal with experimental data that are already known about the visual cortex.
More general models will look at stochastically driven (by thermal noise) networks that are connected to inputs that produce emergent patterns in the network. Those patterns will then be compared with experimentally observed patterns in real structures.
Eventually – given enough computing resources – one would like to look at networks connected to hierarchies of memory that also provide and receive input and change state as a result. Comparisons among various memory states would also be phenomena to be explored because these could become the models for complex structures “making choices.”
Thanks for the helpful advice. I’ve already found the Stanford resource useful in providing clear and relatively concise chunks of information, such that I go there first if it comes up on Google, rather than Wikipedia. 🙂
PS I think I unintentionally misled you with “my chosen field of education”. What I meant was I chose to specialize in biochemistry but did not pursue it after University. I have worked in a variety of (commercial) fields since but never education.
I don’t think so. I showed that your claim doesn’t make sense by drawing out its implications in a thought experiment. Remember, you wrote:
In my scenario, who is doing the computation, and when? It can’t be the computer, because according to you, computers don’t compute. It wasn’t me, because I was asleep — unless you think I “did” the computation by pressing ‘Enter’ or by looking at the screen when I woke up.
Who was it, then? The programmer? No, because he or she has never seen my signal data. You can’t do a computation absent the input data. The computer designer? No, for the same reason. Who, then? And exactly when did the computation happen?
In your next comment you tried to moderate your position by admitting that computers can ‘do’ computation, but only the ‘busy work’ parts:
Your weakened statement doesn’t work, however. If computation is the manipulation of abstract symbols, as you say, then physical computers can’t do any computation, period — not even the ‘busy work’ aspects. That means the computer did no computation at all in my thought experiment.
So again, who is doing the computation in my scenario, and when?
That’s the second time you’ve called me a dualist. I’m as baffled now as I was the first time you leveled that accusation, when you wrote:
My bewildered response then is still applicable:
What is dualistic about my position?
As a computer scientist, surely you realize that the Halting Problem is about programmatic halting, not halting-because-the-computer-died. Don’t you?
It’s a great time to be a curious amateur (in the best sense of that word), isn’t it?
If you’re interested in philosophy of mind, evolution, and the free will debate, then Daniel Dennett is a great place to start. He’s modern, scientifically informed, and his prose style is clear and accessible to the non-philosopher.
Dennett is using the word ‘syntactic’ in the broader sense:
Learning as a youngster that computation was medium-independent was one of the watershed moments in my intellectual life. Until then, I had thought that computation was somehow inextricably bound up with electricity, though I had no idea why it should be so.
The Tinkertoy computer is still my favorite demonstration of computation’s medium-independence.
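Medium independence is easy to make concrete (a sketch of my own; the token names are arbitrary stand-ins). The same half-adder rule runs unchanged over booleans or over any other pair of distinguishable states, whether voltage levels, marbles, or Tinkertoy spools:

```python
# The same computation realized in two different "media".
# A half-adder defined only by comparisons between two
# distinguishable tokens -- it doesn't care what the tokens are.

def half_adder(a, b, zero, one):
    # XOR for the sum bit, AND for the carry bit,
    # expressed purely as equality tests on tokens.
    s = one if (a == one) != (b == one) else zero
    c = one if (a == one) and (b == one) else zero
    return s, c

# Medium 1: booleans.
print(half_adder(True, True, False, True))           # -> (False, True)

# Medium 2: strings standing in for spool positions.
print(half_adder("spool", "spool", "gap", "spool"))  # -> ('gap', 'spool')
```

All the rule requires of its medium is that the two states be reliably distinguishable, which is just what makes the computation medium-independent.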
Makes sense. KN’s use of “proposition” in his paraphrase of Dennett’s argument was what caused my doubts.
I have not read it yet, but Dennett’s contribution to the book KN mentioned on Contemporary Naturalism is available from the recent works section of his web page
I agree that Dennett is a clear writer, but I still prefer to start with secondary sources, in the hope that they will do a better job of presenting multiple points of view.