Philosophy of Mind: A Taxonomy

I consider the following to be a “work in progress,” and will make changes as others here contribute corrections and suggestions.

The so-called “mind-body problem”, as bequeathed to us by Descartes, has invited various solutions over the centuries.  In the classical version, the basic positions were dualism, materialism, and idealism — each of which has its sub-varieties.

What is meant by “mind”?  Well, various characteristically mental phenomena have been presented as candidates for what is essential to mindedness: rationality, intentionality, subjectivity, volition, or consciousness.  (That these don’t all overlap can be seen by asking, “are there unconscious mental states or processes?”, “what sorts of minds do non-rational animals have?”, “are there purely qualitative, non-intentional mental states, e.g. pains?”, and so on.)

On the “material” side of the dichotomy, the 17th-century picture was sketched in terms of little bits of matter (Locke’s “corpuscles,” and then “atoms”) that causally interacted according to exceptionless laws.  As the bestiary of physicists grew in the 19th and 20th centuries, “matter” looked less and less Epicurean — suddenly there was energy, and space-time, and “dark matter” and quanta and fields and forces.  So today we talk of “physicalism” rather than “materialism,” where the ontology of the physicalist is pretty much “whatever our best physics tells us everything else is made out of”.  Today that would be fermions, bosons, and space-time.

So here’s one way of describing the problem: what’s the relation between consciousness, volition, rationality, subjectivity, or intentionality (on the one hand) and bosons, fermions, and space-time (on the other)?

Dualism, or substance dualism, holds that mental and physical phenomena are just different kinds of basic entities in the overall metaphysics.  Neither is more real or more basic than the other.  (Descartes is classically regarded as the founder of dualism, with good reason — though his arguments for substance dualism are, I think, more subtle than most people realize.)   Property dualism holds that mental and physical phenomena are just different kinds of properties or features that something can have, with the thing itself being neither essentially mental nor essentially physical.  (I would assume that all property dualists must be neutral monists, but I haven’t seen that spelled out.)

Materialism holds that what is most real or basic is physical stuff, which means that we need to (somehow) account for mental phenomena in physical terms, usually in terms of brain-states.  A good materialist slogan is, “the mind is what the brain does.”  There are two important variations here: reductionism and eliminativism.

Reductive physicalism holds that we can (in principle) and will (in practice) explain everything mental in terms of physical stuff.  A reduction is successful when we can re-describe everything in the to-be-reduced vocabulary in terms of the more basic, what-is-being-reduced-to vocabulary.   For example, we can reduce lightning to electron flows by describing everything that’s going on with lightning in terms of how the charged particles are being exchanged.  Or we can reduce rainbows to refracted spectra by describing everything that’s going on with rainbows in terms of how visible light is refracted and reflected as it enters and exits airborne water droplets.   Now, it does seem that some mental phenomena can be explained in these sorts of terms — for example, how the brain processes sensory stimuli.  But it is far from clear that all mental phenomena can be thus explained.  The question of whether consciousness can be explained in physical terms is called “the hard problem” because we don’t even understand how it could be solved.  (A big question, too, is whether non-reductive physicalism is a plausible — or even coherent — position.)

Philosophers of science have noted that successful inter-theoretic reduction is extremely rare in the history of science.  Much more common is that a previous theory is simply eliminated, and we come to recognize that the putative entities posited by the old theory simply don’t exist and never did.  Examples: the four humors of medieval medicine; ether; phlogiston.  Eliminative materialism holds that it is at least possible that some mental phenomena will be eliminated as neuroscience advances.  The most well-known proponents of eliminative materialism — Paul and Patricia Churchland — argue that, in particular, what we call “propositional attitudes” — beliefs and desires — will be eliminated from our vocabulary as neuroscience advances.   (Note: the Churchlands are not eliminativists about consciousness or rationality — that’s a common misunderstanding of their view.)

Finally, there’s idealism, which holds that it’s mental phenomena which are really and ultimately real, and everything physical has to be explained in terms of what is mental.   (Here too there are “reductive idealism” and “eliminative idealism”.  I would consider Leibniz to be a reductive idealist and Berkeley to be an eliminative idealist, though Leibniz and Berkeley have really important differences.)   Generally speaking, I prefer to restrict the term “idealism” to Kant and the post-Kantian German Idealists, but the term is generally used to refer to any view in which the mental is what is ultimately or basically real, and the physical is not.

65 thoughts on “Philosophy of Mind: A Taxonomy”

  1. I guess I’ll throw in my two cents. I have non-standard views on the subject.

    What is meant by “mind”? Well, various characteristically mental phenomena have been presented as candidates for what is essential to mindedness: rationality, intentionality, subjectivity, or consciousness.

    I am inclined to deny that there are mental phenomena. I am not denying rationality, intentionality, subjectivity or consciousness. I am suggesting that thinking of those as mental phenomena is misguided.

    So here’s one way of describing the problem: what’s the relation between consciousness, rationality, subjectivity, or intentionality (on the one hand) and bosons, fermions, and space-time (on the other)?

    We can rationally discuss bosons, etc., and we are conscious of space-time. But reductionism does not seem to fit here.

    The most well-known proponents of eliminative materialism — Paul and Patricia Churchland — argue that, in particular, what we call “propositional attitudes” — beliefs and desires — will be eliminated from our vocabulary as neuroscience advances.

    I tend to agree with the Churchlands on that, though I don’t agree with them on everything.

    I tend to favor J.J. Gibson’s direct realism. I’m not sure where you would want to fit that into your taxonomy.

  2. Neil Rickert:

    I am inclined to deny that there are mental phenomena. I am not denying rationality, intentionality, subjectivity or consciousness. I am suggesting that thinking of those as mental phenomena is misguided.

    Do you mean that these diverse phenomena are not a “natural kind,” or do you mean something else?

    I tend to agree with the Churchlands on that, though I don’t agree with them on everything.

    Likewise. My last intellectually productive discussion at Uncommon Descent (don’t laugh, I’ve had a few) was about Paul Churchland’s response to Plantinga. It really turns on whether or not neurophysiological processes are the right way to begin constructing a naturalistic theory of semantics. I’m strongly inclined to agree with Churchland’s approach to what he calls “neurosemantics” — though there could well be problems here I haven’t thought through all the way.

    I tend to favor J.J. Gibson’s direct realism. I’m not sure where you would want to fit that into your taxonomy.

    Well . . . direct realism seems right to me, at least about spatio-temporal sensible particulars, so then the question would be, what theory of mind would we need in order to accommodate direct realism?

  3. Well . . . direct realism seems right to me…

    Direct realism seems wrong to me. 🙂 Perhaps you, Neil and I can discuss this on another thread.

  4. It is a mistake to mix reductionism with “physicalism.” A little thought would convince most people who pay attention to the properties of material objects that most properties of increasingly complex systems are NOT predictable from or reducible to the properties of their constituents.

    This begins at even the simplest levels; e.g., the properties of a water molecule or a salt molecule are nothing like the properties of the atoms of which they are composed. Properties of systems emerge exponentially and unpredictably as systems increase in complexity.

    Furthermore, these emergent properties depend on temperature as well as the environment in which these systems are immersed.

    When it comes to phenomena such as intelligence, it is clear that these are temperature-dependent – consider hypothermia and hyperthermia. They are affected by chemicals in the environment. The temperature dependence alone is one of the clearest indications that physical processes are behind the processes of thinking. Anybody who has experienced the effects of temperature on the ability to think knows what this means. Nitrogen narcosis, hallucinogenic drugs, nerve gases, poisons, flashing lights, and other nervous system disruptors are all evidence of the physical nature of the “mind.”

    I suspect that many people who are puzzled by the emergence of properties like intelligence are also people who take for granted the billions of properties of all the things around them without reflecting on just how rapidly these properties emerge in even the simplest things.

    How many properties of lead – a solid made up of only one kind of atom – can you think of? How many are you even aware of?

    We have entire industries built on silicon, doped with small amounts of elements from columns III and V of the periodic table. An amount of doping of only one part in 10^9 has enormous effects that are also highly temperature dependent.

    And these are extremely simple solid state systems with enormous ranges of properties. Imagine what complex soft matter systems can do.

    I would suggest that, before one goes into an endless labyrinth of philosophical musings about the nature of the mind, one should first get some intimate feeling for the emergence of the properties of all the things that one takes for granted in one’s environment. Most people apparently don’t think about just how marvelous even the simple things are.

    It helps to get outside one’s head and immerse oneself in and enjoy the complexity around us. To begin to realize that matter can do all this is to also realize that this also points to a path toward the emergence of mind.

  5. Mike Elzinga: This begins at even the simplest levels; e.g., the properties of a water molecule or a salt molecule are nothing like the properties of the atoms from which they are comprised. Properties of systems emerge exponentially and unpredictably as systems increase in complexity.

    Absolutely. That’s why I think the word “reductionism” is so misleading/unhelpful. We do not, in science, “reduce” things to their constituent parts. We figure out the properties of systems. And people are systems. There’s no reason to think that just because we are made of nothing except baryons, we are therefore nothing more than a bunch of baryons.

    Animals are decision-making systems, and people are actually moral decision-making systems.

  6. The reason the word “reductionism” seems misleading is that it has different meanings. What Mike and Lizzie mean by reductionism is not what KN meant by reductionism in the OP. KN was talking about theoretical reductionism; I can’t think of a way to put it better than he did, so I’ll just quote the OP in case this slipped your attention:

    A reduction is successful when we can re-describe everything in the to-be-reduced vocabulary in terms of the more basic, what-is-being-reduced-to vocabulary.

    This does not mean looking at the properties of isolated parts to explain the behavior of the whole. Notice that the above definition doesn’t even refer to parts or scales. Generally speaking, you have two theories, A and B. The task of theoretical reduction is to take something that A models and re-describe it using B. For example, the flow of gas around an airfoil can be described using continuous fluid dynamics theories (Navier-Stokes equations, etc.). Alternatively, it can be described in terms of statistical mechanics, or molecular dynamics theories. The reduction is successful when the two theories produce similar results given the same inputs.
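
    To make that concrete, here is a minimal sketch of my own (a toy, not anything from the fluid-dynamics literature): compute the pressure of an ideal gas once in the macro vocabulary (the ideal gas law) and once in the micro vocabulary (kinetic theory with sampled molecular velocities), and check that the two descriptions agree.

    ```python
    import numpy as np

    # Macro theory: ideal gas law, P = N * k_B * T / V.
    # Micro theory: kinetic theory, P = N * m * <v^2> / (3 * V),
    # with velocities sampled from the Maxwell-Boltzmann distribution.

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # temperature, K
    N = 1_000_000        # number of particles (a toy-sized sample)
    V = 1e-3             # volume, m^3
    m = 6.63e-26         # particle mass (argon), kg

    # Macro-level prediction.
    P_macro = N * k_B * T / V

    # Micro-level prediction: each velocity component is Gaussian
    # with variance k_B * T / m (equipartition); sample and average.
    rng = np.random.default_rng(0)
    v = rng.normal(0.0, np.sqrt(k_B * T / m), size=(N, 3))
    P_micro = N * m * (v ** 2).sum(axis=1).mean() / (3 * V)

    print(f"macro: {P_macro:.4e} Pa, micro: {P_micro:.4e} Pa")
    # The two vocabularies agree to within sampling noise, which is
    # the mark of a successful (if very easy) theoretical reduction.
    ```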

    As KN noted, such successes are rare in the history of science, but they do exist. An example of a somewhat successful reduction that should be familiar to many here is the Avida simulation of digital “organisms”. Here a macro-theory (Darwinian evolution) predicts the emergence of complex behaviors as a result of selective pressures, and a micro-theory (a genetic algorithm) reproduces this result.

    I was a little coy when I said that theoretical reduction is not defined in terms of parts or scales. The very term ‘reduction’ betrays the fact that in practice we usually consider reduction of a large-scale (coarse-grained) theory to a small-scale (fine-grained) one. But theoretical reduction does not amount to linear scaling of local behavior, except in the simplest situations, where there is no emergent behavior. The reason successful reductions are rare is that it can be very difficult to scale up fine-grained theories without losing essential detail due to practical constraints. At least that’s the reductionists’ excuse 🙂

  7. It is a mistake to mix reductionism with “physicalism.”

    I’m never sure what people mean by “reductionism”. It seems to depend on who you are talking to.

    The idea of reductionism seems to mostly come from a misunderstanding of science by philosophers.

  8. Do you mean that these diverse phenomena are not a “natural kind,” or do you mean something else?

    Something else. Perhaps I don’t understand how “mental” and “phenomena” are being used by philosophers.

    It seems entirely appropriate to apply “mental” to thinking, but not to rationality. I take Chalmers to think of qualia as phenomena and as part of consciousness, but that whole way of looking at perception seems confused.

    what theory of mind would we need in order to accommodate direct realism?

    The conventional view seems to be that we are rational agents, and that perception provides us with facts that we then use in our reasoning. Gibson’s view is that perception provides us with something like a menu of opportunities (he called them affordances) and that we choose between them. That seems radically different.

  9. Thank you, SophistiCat. I think reductionism has gotten a bad rap here at TSZ, so I’m glad to see you defending it.

    I see reductionism not as a prescription for how to do science, nor as a claim about what is pragmatically possible. It’s a philosophical claim about the laws of nature.

    To a reductionist, there is nothing about the behavior of a system that is not implicit in the behavior of its parts, interacting with each other and the environment. Thus, any “laws” that come into play at a higher level of description are really just restatements of the lower-level laws in a different context.

    For example, a computer can be viewed at different levels of abstraction — as a set of interacting modules, or as a collection of interconnected logic gates, or as a circuit containing billions of transistors, etc. However, there is nothing about the behavior at the higher levels of abstraction that is not implicit in the laws governing the behavior at lower levels.

    The behavior of a transistor can be predicted by modelling the interactions of regions of doped silicon, polysilicon, and metal. The behavior of a logic gate can be predicted by modelling the interactions of transistors. The behavior of a module can be predicted by modelling the interactions of logic gates, and so on. In principle, you could skip the intermediate levels and model the behavior of an entire computer in terms of solid-state physics.
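
    Here is a hedged sketch of that layering (the “transistors” are idealized switches, nothing like real device physics): a NAND gate is defined by switch-level rules, higher-level gates are defined purely in terms of NAND, and the truth table at the top level is fixed entirely by the bottom level.

    ```python
    # Idealized switch-level 'transistors' (no real device physics):
    def nmos(gate: bool) -> bool:
        """n-type switch: conducts when its gate is high."""
        return gate

    def pmos(gate: bool) -> bool:
        """p-type switch: conducts when its gate is low."""
        return not gate

    def nand(a: bool, b: bool) -> bool:
        # CMOS NAND: output pulled high if either pMOS conducts,
        # pulled low only if both nMOS switches conduct in series.
        pulled_high = pmos(a) or pmos(b)
        pulled_low = nmos(a) and nmos(b)
        return pulled_high and not pulled_low

    # One level up: gates defined purely in terms of NAND.
    def not_(a): return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b): return nand(not_(a), not_(b))
    def xor(a, b): return and_(or_(a, b), nand(a, b))

    # The truth table at the 'module' level is fully implicit in
    # the switch-level rules; no new law was added along the way.
    for a in (False, True):
        for b in (False, True):
            print(f"xor({a}, {b}) = {xor(a, b)}")
    ```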

    We may codify new “laws” to help us describe the behavior at the higher levels, but this is a matter of convenience, not of necessity. The new laws aren’t genuinely new, but are implicit in the already-known lower level laws.

    Applying this to Lizzie’s example, humans really are “just” bunches of baryons, but the word “just” is misplaced. Enormous collections of baryons can exhibit interesting behaviors of staggering complexity, depending on how they are arranged — but according to reductionism, there is nothing in the behavior of a human that isn’t already implicit in the laws of physics.

  10. keiths,

    To be fair, ‘reductionism’, like ‘materialism’, is used in different senses, and it is often unclear (even to the speaker) which sense is being used.

    Some SEP articles on the subject (in the sense used here):

    The Unity of Science

    Reductionism in Biology

    As you can see, it’s not as straightforward and uncontroversial as one might suppose.

  11. If reductionism means that a phenomenon can be explained in principle, if not in practice, by referring to the properties of simpler, underlying phenomena, then I am not even sure that “theoretical” reductionism has any useful meaning.

    While it is true that some phenomena – the fluid dynamics example is a pretty good case – can be derived from simpler, underlying phenomena, such cases tend to be relatively simple themselves.

    In fact, in the fluid dynamics case, it is actually harder to model and describe the behaviors of fluids from the microscopic perspective. There are far more calculations that have to be done and far more interactions that have to be accounted for. The modeling of fluid flows was taking place long before there were computers that could handle all the digital data of a microscopic simulation.

    Both this case and the case of the modeling with Avida illustrate something that is not expected in the evolution of complex systems; namely, that the higher level system exhibits emergent regularities that are actually simpler to describe than all the interactions among the underlying systems. Darwin and others already noted the fundamental ideas behind evolution long before computer simulations like Avida, or even the differential equations describing simple predator-prey systems.

    One of the main reasons we do such simulations is to connect the regularities of complex systems with our theoretical understandings of their microscopic constituents. This is just one of many methods we use to confirm our understanding of the microscopic world. We do these simulations to try to flesh out the details of that microscopic picture.

    However, it is quite another thing to start with a microscopic model and predict what will emerge in a simulation leading to a more complex system when we have no examples of those more complex systems. How would we know we are right? Round-off and precision limitations in calculations can introduce complexities and “emergent” properties that are artifacts of the limits of precision.

    Furthermore, such limitations in precision are already implicit in the interactions of simple systems all the way down to the quantum mechanical level. There are the ultimate limits of precision due to quantum mechanics, but one doesn’t even have to go down to that level before one runs into complications due to the inherent nonlinear nature of interactions at even the classical level.

    Charge redistribution among interacting, neutral molecules is not just unpredictable due to quantum mechanics; it happens with macroscopic objects interacting among themselves and creating charge due to “friction.” All sorts of unpredictable phenomena emerge. Induced charge and induced magnetism are highly non-linear phenomena that produce electromagnetic radiation of energy into the surrounding environment. That is energy leaving the system.

    Thus, I am not sure that “theoretical” reductionism has any practical meaning other than an acknowledgement of the fact that the properties of complex systems are a consequence of the laws of physics and chemistry and not due to the intrusion of some non-physical entity that would imply dualism of some sort. New phenomena emerge so rapidly and so unexpectedly; and they are extremely sensitive to small perturbations in just about everything occurring within the system and its environment. Most of the time we work back and forth between our knowledge of already existing complex systems and our models of the microscopic systems of which they are composed.

    Thus, the only meaning I can get out of “theoretical” reductionism is that we reject non-physical entities and dualism. The second that one introduces such a non-physical entity, one immediately runs into all the problems of just how such an entity can interact with systems in the physical universe. As near as I can understand it, “theoretical” reductionism is just another way of stating that there are no non-physical entities or homunculi producing phenomena like the mind.

    Perhaps this is all just a quibble; but I suspect the term “theoretical” reductionism introduces misunderstandings of its own. Emergent properties generally aren’t “reducible”.

  12. I tried specifying the notion of reduction in terms of “successful theoretical reduction,” because that’s how philosophers of science and philosophers of mind present the issue. If one theory can be successfully reduced to another, then the objects or properties posited in the first theory are identified with the objects or properties posited in the second. And that’s a very high bar to leap over.

    As expressed by a few people here already, I myself am firmly committed to emergentism. (As if my admiration for Kauffman hadn’t made that clear already!) And I would say that it is because there is emergence in reality that successful intertheoretic reduction is going to be very rare.

  13. Weak or strong, there is no royal road to emergent properties. Not yet.

    What we seem able to do is determine if the properties of a system are fully consistent with the properties of the constituent parts.

    But emergence is what makes both brains and evolution interesting.

  14. Thanks to KN (and Sophisticat) for that very nice definition of reductionism.

    Something else I may have missed (RL is a little distracting right now!) – on your list of attributes of mind, I don’t see volition. Is it missing, or is it disguised as something else?

  15. SophistiCat:
    Kantian Naturalist,

    Are you a “weak” emergentist or a “strong” emergentist? (Chalmers)

    That’s a very interesting distinction! Thank you for pointing it our way! In those senses, I would say that I’m a strong emergentist about life and a weak emergentist about consciousness.

  16. Lizzie:
    Thanks to KN (and Sophisticat) for that very nice definition of reductionism.

    Something else I may have missed (RL is a little distracting right now!) – on your list of attributes of mind, I don’t see volition. Is it missing, or is it disguised as something else?

    I missed it entirely! My bad! Will edit!

  17. First, it seems that a colourblind scientist given complete physical knowledge about brains could nevertheless not deduce what it is like to have a conscious experience of red.

    There’s an interesting case of a painter who became colorblind as a result of brain injury. He became incapable of seeing color, but he also became incapable of remembering color or of forming the concept of color.

    http://www.csh.rit.edu/~oguns/school/psychology/Articles/colorblindpainter.pdf

  18. SophistiCat: Kantian Naturalist, Are you a “weak” emergentist or a “strong” emergentist? (Chalmers)

    Chalmers says he doesn’t know of “downward” causation in the actual world.

    I’m sure a little thought would produce many examples. For example, the size of a system in a gravitational field will eventually have some effect on how it develops further. It is not possible to scale up a spider to the size of an elephant and retain the same spindly features of the spider.

    The weight of an object scales as its volume, and its load-bearing capability scales as its cross-sectional area. This means that proportions must change as an object gets bigger. This is a clear example of the effects of the interaction of a system with its environment. Down at the molecular realm, viscous forces and intermolecular forces are dominant; gravity is very nearly irrelevant.
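
    A quick back-of-the-envelope version of that argument (the numbers are purely illustrative): if weight grows as L^3 and load-bearing strength as d^2, then keeping the stress in the legs constant forces leg diameter to grow as d ∝ L^(3/2), faster than the body itself.

    ```python
    # Weight scales as L^3 (volume); leg strength scales as d^2
    # (cross-sectional area). Keeping stress constant requires
    #   d^2 / L^3 = const,  i.e.  d grows as L**1.5.

    spider_length = 0.01    # body length in metres (illustrative)
    spider_leg_d = 0.0005   # leg diameter in metres (illustrative)

    for scale in (1, 10, 100, 300):
        L = spider_length * scale
        d = spider_leg_d * scale ** 1.5   # not scale ** 1!
        print(f"body x{scale:>3}: length {L:6.2f} m, "
              f"leg diameter {d:7.3f} m, d/L = {d / L:.3f}")
    # The leg-to-body ratio grows as the square root of the scale
    # factor: elephant-sized bodies need proportionally thick legs,
    # and spindly spider geometry cannot simply be scaled up.
    ```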

    However, the increasing influence of gravitational effects interacting with intermolecular forces begins to change the course of the evolution of an organism. Elephants have thick legs; spiders have spindly legs.

    So we have competition between electromagnetic interactions and gravitational interactions. Electromagnetic interactions predominate until structures get large enough for gravitational interactions to start influencing further development. Planets are round because gravitational interactions overwhelm electromagnetic interactions and cause clumping solid matter to melt just enough to pull itself into a sphere. If the system is spinning, it is not even a sphere, but an oblate spheroid. More condensation from surrounding matter can change the rate of rotation.

    Chemical reactions and rates are influenced by surrounding pressure; but the reaction itself can change surrounding pressure. It is the same with ion concentrations in the presence of membranes.

    The general term “feedback” complicates the distinctions between upward and downward causation. I don’t see any reason at the moment to require Chalmers’ invocation of “strong” emergence for the case of consciousness.

  19. And it’s impossible to even begin to understand the relationship between body and mind, IMO, without the concept of feedback. And feedforward, of course.

    Also what people forget, I think, in discussions of mind vs brain, is that brains are just part of a whole organism, specifically, an organism that can move. I think it’s no coincidence that things with brains tend to be mobile (plants tend to stand still, and not have brains).

    One of the things that mobile organisms can do is collect more data. In other words, faced with an impasse – “what to do, given this input” – it can collect more input.

  20. SophistiCat,

    To be fair, ‘reductionism’, like ‘materialism’, is used in different senses, and it is often unclear (even to the speaker) which sense is being used.

    Sure, especially among philosophers of science. But when scientists and educated laypersons label themselves ‘reductionists’, they are almost always talking about the kind of reductionism I sketched out in my comment above.

    That is, they think that a sufficiently detailed low-level model of any system can (in principle) predict its behavior completely (modulo quantum indeterminacy), and that downward causation is therefore a fiction.

  21. Lately I’ve been thinking that “the mind-body problem” is not even the right problem — that the right problem is “the person/brain problem” — which I understand to mean, what’s the relation between the vocabularies of persons, as the sorts of living animals that can judge, infer, evaluate, and act on the basis of reasons, and what their/our brains are doing?

    On the one hand, what brains do must play some role in causally explaining how persons are able to do what they do. But on the other hand, we don’t want to account for brains as if each brain were itself a sort of miniature person (what Dennett and others call “the homunculus fallacy”). And if we’re going to avoid the homunculus fallacy, then we’re going to have to be really, really careful in not ascribing full-blown psychological states and processes — those of persons — to the brains.

    One deep worry I have about Paul Churchland’s neurosemantics is that it might be concealing a homunculus fallacy about meaning. But at the same time, I don’t see how the brain could just be a “syntactic engine.” What I want to say is that the homomorphic relation between neurophysiological processes and spatio-temporal particulars in the organism’s environment is a “quasi-semantic” relation. But I still haven’t figured out what I really mean by “quasi-” here, and I fear that it doesn’t really contribute to the discussion besides pointing out where the problem is.

  22. Mike,

    Chalmers says he doesn’t know of “downward” causation in the actual world.

    He’s talking about strong downward causation, and I agree with him that it doesn’t seem to exist in the real world.

    I’m sure a little thought would produce many examples. For example, the size of a system in a gravitational field will eventually have some effect on how it develops further.

    Sure, but that’s weak downward causation. The low-level laws are never violated, and they are complete. No new laws have to be introduced at the low level in order to explain the system’s behavior.

  23. One deep worry I have about Paul Churchland’s neurosemantics is that it might be concealing a homunculus fallacy about meaning. But at the same time, I don’t see how the brain could just be a “syntactic engine.”

    I haven’t read Churchland on this topic, but in general, worries about the difficulty of bridging syntax and semantics hinge on the existence of “original intentionality”. Have you read Dennett’s arguments on the subject?

  24. Kantian Naturalist:
    Lately I’ve been thinking that “the mind-body problem” is not even the right problem — that the right problem is “the person/brain problem” — which I understand to mean, what’s the relation between the vocabularies of persons, as the sorts of living animals that can judge, infer, evaluate, and act on the basis of reasons, and what their/our brains are doing?

    On the one hand, what brains do must play some role in causally explaining how persons are able to do what they do. But on the other hand, we don’t want to account for brains as if each brain were itself a sort of miniature person (what Dennett and others call “the homunculus fallacy”). And if we’re going to avoid the homunculus fallacy, then we’re going to have to be really, really careful in not ascribing full-blown psychological states and processes — those of persons — to the brains.

    One deep worry I have about Paul Churchland’s neurosemantics is that it might be concealing a homunculus fallacy about meaning. But at the same time, I don’t see how the brain could just be a “syntactic engine.” What I want to say is that the homomorphic relation between neurophysiological processes and spatio-temporal particulars in the organism’s environment is a “quasi-semantic” relation. But I still haven’t figured out what I really mean by “quasi-” here, and I fear that it doesn’t really contribute to the discussion besides pointing out where the problem is.

    My gut says no to this! I think trying to separate brains from bodies is a sure way to miss the connection between what brains do and what people think. Bodies aren’t there to carry our brains around – our brains are there to make sure our bodies carry themselves around.

    But admittedly, my background is in motor control, so I may be biased 🙂

  25. Lately I’ve been thinking that “the mind-body problem” is not even the right problem — that the right problem is “the person/brain problem” — which I understand to mean, what’s the relation between the vocabularies of persons, as the sorts of living animals that can judge, infer, evaluate, and act on the basis of reasons, and what their/our brains are doing?

    Then it is not really a “person/brain” problem. It is just a “person” problem.

    As I like to put it: Brains don’t think; people think, and use their brains while thinking.

    But at the same time, I don’t see how the brain could just be a “syntactic engine.”

    It probably makes more sense to think of it as a semantic engine, though looking at the brain separate from the person as a whole is surely a mistake.

    But I still haven’t figured out what I really mean by “quasi-” here, …

    It’s probably the right term. Quasi-semantics may be as close as you will ever get to semantics. That is to say, there is probably no such thing as semantics.

  26. Lizzie, I was talking about persons, not bodies — although I’m enough of a naturalist to think that all persons are embodied.

    But, OK, then the contrast would be between the vocabulary of persons — the vocabulary in which we use notions like judgment, inference, choice, responsibility, commitment, entitlement, and all the associated normative terms — and the descriptive vocabulary of brain-body-environment causal interactions. (But perhaps even that distinction is not quite the right one . . . )

    keiths: Mike, He’s talking about strong downward causation, and I agree with him that it doesn’t seem to exist in the real world. Sure, but that’s weak downward causation. The low-level laws are never violated, and they are complete. No new laws have to be introduced at the low level in order to explain the system’s behavior.

    “Downward causation” is not a very useful term in science. The only places I have seen it used have been in philosophical discussions.

    Downward causation, whether weak or strong, seems to suggest some kind of overarching phenomenon that is outside the physical world; and examples that follow from this line of thinking include “intelligences” or deities influencing or constraining physical law.

    Within physics and chemistry – and biology is pretty much the result of physics and chemistry of organisms interacting with a wider environment – we are more inclined to talk about constraints within an environmental setting.

    Thus, in those examples I gave, when enough mass is involved so that gravitation becomes a factor in further evolution of a system, we would often say that the evolution due to atomic/molecular interactions becomes constrained by gravity.

    There are other ways of stating the role that gravity plays that simply include it from the beginning, and then speaking of the system coming to equilibrium in the presence of gravitational and electromagnetic interactions. This points to a deeper phenomenon we call the second law of thermodynamics in which matter interactions result in condensations to more complex systems because energy can be released and allow binding. The set of interactions thus includes all forces that pertain.

    However, that is not a convenient way to discuss particular systems like biological systems. We focus on the biological systems evolving within the constraints of a larger environment.

    So speaking in terms of any form of downward causation is not necessary, and in fact, tends to sound just a bit spooky to folks in science. I am not sure what purpose it serves in philosophy.

  28. “Downward causation” is not a very useful term in science. The only places I have seen it used have been in philosophical discussions.

    Interestingly, Lizzie’s elsewhere-linked Denis Noble uses it to counter Dawkins’s perceived ‘reductionist’ approach to genetics. I don’t really find it useful there, either. More conventional notions such as ‘feedback’ and ’emergence’ seem sufficient, without needing appeal to some ‘up-down’ arrow to causation.

    Kantian Naturalist: In those senses, I would say that I’m a strong emergentist about life and a weak emergentist about consciousness.

    That’s interesting. To remind, we are talking about “strong” and “weak” emergence as defined by Chalmers:

    “We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain…

    “We can say that a high-level phenomenon is weakly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are unexpected given the principles governing the low-level domain.”

    It follows that when reduction obtains, we can expect at most a weak emergence. On the other hand, strong emergence implies impossibility of prediction from a lower-level theory. If life is strongly emergent, no theory other than biology can account for it.

    Downward causation is a strange metaphysical beast. Here too there can be confusion over terminology: causation in science is usually understood as event causation. Clearly, though, theories cannot cause one another in the way events can cause one another. I would rather not get into this at this point.

  30. If life is strongly emergent, no theory other than biology can account for it.

    Yes, which raises the question: KN, are you a vitalist?

    I would have expected KN’s answers to be the opposite: that life is weakly emergent and consciousness strongly emergent.

  31. Chemistry appears experimentally to be strongly emergent. Can anyone propose a way to counter this hypothesis?

    Can anyone suggest a way in principle to predict the properties of new, non-trivial molecules? Is there anything that suggests such solutions are possible in principle?

  32. Where folks often feel the need for downward causation is in attributing causal efficacy to the semantic content of mental states.

    KN alludes to that worry above when he says that he doesn’t see “how the brain could just be a ‘syntactic engine.’ ”

    William Murray seems to be expressing a similar worry when he writes:

    Under Darwinism, love and hate, kindness and violence are categorically equal manifestations of interacting molecules – all just physical stuff, bumping around, causing other physical stuff to happen – including emotional reactions.

  33. Petrushka,

    Chemistry appears experimentally to be strongly emergent. Can anyone propose a way to counter this hypothesis?

    To demonstrate strong emergence, you would need to be able to predict the behavior of the system using low-level laws and show that it is incorrect or incomplete compared to a higher-level prediction.

    Since we can’t solve the Schrödinger equations for complex molecules, we don’t have an exact low-level prediction. That means we can’t yet say whether strong or weak emergence is involved in the chemistry of complex molecules.

    I would bet on weak emergence, simply because strong emergence hasn’t yet been observed in nature.

    Can anyone suggest a way in principle to predict the properties of new, non-trivial molecules?

    Figure out a way to solve the Schrödinger equations for them. 🙂

  34. I’m not a physicist, so take this with a grain of salt, but my understanding is that while the equations do not always have analytic solutions, they can (in principle) be solved numerically to an arbitrary degree of accuracy.
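
    As a hedged illustration of that “in principle” claim, here is the standard finite-difference recipe applied to the one-dimensional harmonic oscillator in dimensionless units, checked against the known analytic levels E_n = n + 1/2. (Nothing in this toy scales to complex molecules, which is precisely the practical difficulty.)

    ```python
    import numpy as np

    # 1-D time-independent Schrodinger equation,
    # H = -(1/2) d^2/dx^2 + V(x), in dimensionless units
    # (hbar = m = omega = 1), with V(x) = x^2 / 2.
    N = 1000
    x = np.linspace(-10, 10, N)
    dx = x[1] - x[0]
    V = 0.5 * x ** 2

    # Finite-difference Hamiltonian: a tridiagonal matrix.
    main = np.full(N, 1.0 / dx ** 2) + V
    off = np.full(N - 1, -0.5 / dx ** 2)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E = np.linalg.eigvalsh(H)   # brute-force diagonalization
    print("numeric: ", np.round(E[:4], 4))
    print("analytic:", [n + 0.5 for n in range(4)])
    # The low-lying levels reproduce E_n = n + 1/2 to ~4 decimals.
    # The catch: the grid needed grows exponentially with the number
    # of particles, which is why real molecules are hard.
    ```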

    keiths: I haven’t read Churchland on this topic, but in general, worries about the difficulty of bridging syntax and semantics hinge on the existence of “original intentionality”. Have you read Dennett’s arguments on the subject?

    I have read a bit of Dennett, but I’m no expert on his philosophy of mind. Unlike Dennett, I do think that there is “original intentionality”. I think that Dennett’s arguments work well against Searle’s position, which attributes original intentionality to the causal powers of the brain. But I don’t know if they would work well against the position of Maurice Merleau-Ponty, for whom there is “original intentionality” in the basic spatial and temporal orientations of lived embodiment.

    My considered view is that there are (at least) two different kinds of “original intentionality”, what I call “somatic intentionality” — the intentionality of living bodies — and “discursive intentionality” — the intentionality of language-using social animals. (These correspond to non-apperceptive or pre-subjective consciousness and apperceptive or subjective consciousness. There is also a parallel in the intentional objects that corresponds to Heidegger’s distinction between “environment” and “world”.)

    As for the question of whether I am a “vitalist” — well, quite frankly, it’s embarrassing, isn’t it? Because vitalism was abandoned by all right-thinking people by the middle of the 20th century due to the extraordinary success of molecular biology. But I’m afraid that I probably am a vitalist, at least in the following sense: I think that autopoietic systems, as conceptualized by Maturana and Varela, are strongly emergent with respect to the laws of physics. That is, relative to physics, life is (as Hans Jonas beautifully put it) an “ontological surprise”. What would dissuade me from being a strong emergentist about life is if Kauffman is right in thinking that life is weakly emergent from the laws of physics, but that the laws of physics are not (quite) what we think they are. But in that case, I’d still be a weak emergentist about consciousness — I think that once we’ve got living things on the table, some very low grade of consciousness is pretty much par for the course. (Here though I’d want to distinguish between the many different gradations of consciousness — e.g. spider-consciousness has got to be quite radically different from cat- or dog-consciousness — and also I’d want to distinguish between mere consciousness and subjectivity.)

  36. keiths: Where folks often feel the need for downward causation is in attributing causal efficacy to the semantic content of mental states.

    KN alludes to that worry above when he says that he doesn’t see “how the brain could just be a ‘syntactic engine.’ ”

    I would put my own worry a bit differently — I would say that it is large-scale neurophysiological processes which are both causally efficacious and semantically contentful. While neuron-to-neuron interactions might be “purely” syntactical, it is really just a metaphysical article of faith to assert (as, for example, Searle does) that semantics cannot emerge from syntax.

  37. The disconcerting thing is that as a neuroscientist, I find this conversation excruciatingly hard to follow!

    Can someone explain to a non-philosophically literate, but not unintelligent neuroscientist what the problem is supposed to be?

    I thought I knew. Clearly I don’t 🙂

  38. Can someone explain to a non-philosophically literate, but not unintelligent neuroscientist what the problem is supposed to be?

    I’m not all that philosophically literate myself, but I am able to read the discussion. So I’ll try.

    On the face of it, the way that biological organisms behave is very different from the ways that non-living things (clocks, cars, computers, robots, etc) behave. That leads one to suspect that there is something different about life.

    Adding to that, we can talk about non-living things using a barebones objective vocabulary. But we seem to need an intentional or teleological vocabulary to talk about biology.

    Dennett more-or-less dismisses this. His “The Intentional Stance” tries to argue that there is nothing to intentionality. AI (artificial intelligence) proponents argue similarly.

    The typical view of AI folk is that the brain is a computer, and computation is primary. Semantics, intentionality, etc., are secondary, and derived from the computation (the syntax).

    The alternative view, which I favor (and KN seems to favor), is that semantics is fundamental, and syntax only exists as a construct to communicate the semantics.

    Hmm, I have also said that there is no such thing as semantics. But when I say that, I am only denying that there is a “thingness” to it.

    The Chalmers Hard Problem is to reduce subjectivity and intentionality to objectivity and mechanism. My view, and I suspect KN might agree, is that this is backwards. Everything works well because subjectivity and intentionality are a natural part of being a biological organism, and we manage to reduce the objective to the subjective which is what allows us to think and talk about objective things.

    The reference to John Searle is probably to his famous (or infamous) “Chinese Room” argument. That argument claims to prove that you cannot get semantics from syntax. I see Searle’s argument as a complete failure. It proves only what he initially assumes. It does not prove what he claims it to prove. Nevertheless, I suspect that Searle’s conclusion is correct. Searle seems to have good intuitions but terrible answers.

    I’m not sure what it was that puzzled you. But I hope I have addressed some of it.

  39. Can someone explain to a non-philosophically literate, but not unintelligent neuroscientist what the problem is supposed to be?

    I’ll give it a shot.

    The question, in a nutshell, is this: How can a brain state — a purely physical thing — be about something entirely different? (‘Intentionality’ is really just a misleading philosophical term for ‘aboutness’).

    I’m thinking about my cat’s upcoming trip to the vet. In what sense is the sequence of states my brain is traversing about the cat, the trip, and the vet?

    The problem isn’t limited to brains. My smartphone can guide me to my friend’s house, but it is just a physical object (though a complicated one). In what sense are the physical states of my smartphone about my current location, the layout of the streets and roads in my area, and the location of my friend’s house?

    In the case of the smartphone, it’s tempting to say that its states aren’t really about anything, per se. They are the consequence of blind, meaningless physical processes unfolding in the circuitry. It’s just that humans have designed smartphones (and the navigation programs that run on them) so that these blind processes give results that are useful when interpreted as navigation instructions.

    Thus, any ‘aboutness’ the smartphone has is not inherent — it derives from the humans who designed it and use it. This is known as ‘derived intentionality’.

    Anything possessing derived intentionality must get it from some other source. Unless the regress is infinite, there must be an ultimate source for any given instance of derived intentionality. This ultimate source is said to possess ‘original intentionality’, or ‘intrinsic intentionality’.

    Back to the original question: How can a physical brain state be about something entirely different?

    Most dualists would say that brain states have only derived intentionality, and that it derives from the ‘soul’ that observes and interprets the brain states. Original intentionality resides only with the soul itself.

    Materialists can’t invoke an immaterial soul, of course, so they have two main options:

    1) argue that there really is no such thing as original intentionality (Dennett’s position); or

    2) argue that there is something special about brains that gives rise to original intentionality (this is Searle’s position, though he thinks that computers do not and never will have original intentionality).

    Where the syntax/semantics distinction comes into play is in discussing the emergence of original intentionality.

    A computer can successfully manipulate symbols without understanding what they mean. It just follows the symbol manipulation rules that are built into it. In other words, it operates by pure syntax, without semantics, which is why Searle argues that it lacks original intentionality. This was the point of his Chinese Room thought experiment.
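
    A toy version of that point (my own illustration, not Searle’s original): a program that answers questions by string matching alone, with nothing anywhere in the system that represents what a city or a country is.

    ```python
    # A 'Chinese Room' in miniature: the program matches input
    # strings against a rule table and copies out answer strings.
    # Nothing in the system represents what a city or country is.
    RULES = {
        "capital of France?": "Paris",
        "capital of Japan?": "Tokyo",
        "capital of Peru?": "Lima",
    }

    def room(symbols: str) -> str:
        """Pure symbol shuffling: look up the input, emit the output."""
        return RULES.get(symbols, "no rule matches")

    print(room("capital of France?"))   # -> Paris
    print(room("capital of Peru?"))     # -> Lima
    # Any 'aboutness' here is supplied by the humans who wrote the
    # rule table and read the answers: derived, not original,
    # intentionality.
    ```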

    Likewise, a brain is an interconnected collection of neurons operating according to physical law. The neurons don’t need to understand what the brain states “mean” in order to function correctly. They just blindly follow the laws of physics. As in the case of the computer, this is pure syntax — no semantics.

    The challenge for Searle, then, is to explain how you get from pure syntax to full-fledged semantics — original intentionality.

    If the brain operates purely syntactically, then the semantic content of brain states plays no role in their causal evolution. This makes some people nervous, so they invoke downward causation as a way of giving semantics the causal role they intuitively feel it must have.

    I hope that helps.

  40. Neil Rickert: I’m not all that philosophically literate myself, but I am able to read the discussion. So I’ll try.

    Thanks!

    On the face of it, the way that biological organisms behave is very different from the ways that non-living things (clocks, cars, computers, robots, etc) behave. That leads one to suspect that there is something different about life.

    Well, there is – they reproduce. But it seems to me that with regard to mind, the really interesting difference is between things with brains and things without, a division that goes right down the middle of the living world (lots of pretty impressive living things, trees, for instance, are not usually assumed to have minds) and may eventually separate some human-made things from others too (robots, for example, from spoons).

    Adding to that, we can talk about non-living things using a barebones objective vocabulary. But we seem to need an intentional or teleological vocabulary to talk about biology.

    I guess. But to say that “the purpose of a bacterial flagellum is motility” is a very different kind of teleological statement than “my purpose in writing this post is to try to explain my confusion”. In the first, the bacterium isn’t the purposive agent; in fact, it’s not clear there is one. In the second, clearly I am.

    Dennett more-or-less dismisses this. His “The Intentional Stance” tries to argue that there is nothing to intentionality. AI (artificial intelligence) proponents argue similarly.

    I didn’t read him that way. Isn’t he saying that “intention” is an explanatory stance we (as human beings, with minds) take, rather than not existing? And I’m almost positive he doesn’t think that human beings don’t have their own intentions! He says, most explicitly, that we do!

    The typical view of AI folk is that the brain is a computer, and computation is primary. Semantics, intentionality, etc., are secondary, and derived from the computation (the syntax).

    I think that word “intentionality” might be tripping me up. I’m not sure it means what it looks as though it means. But in any case, I don’t think the brain is a computer (although it computes), and I think if we ever want to make an AI machine, we will need to think in terms of making an organism, not a brain.

    The alternative view, which I favor (and KN seems to favor), is that semantics is fundamental, and syntax only exists as a construct to communicate the semantics.

    OK, I’m lost. I’m not sure what “fundamental” means, in this context.

    Hmm, I have also said that there is no such thing as semantics. But when I say that, I am only denying that there is a “thingness” to it.

    Heh. I deny that there is a “thingness” to consciousness. Maybe there is a parallel.

    The Chalmers Hard Problem is to reduce subjectivity and intentionality to objectivity and mechanism. My view, and I suspect KN might agree, is that this is backwards. Everything works well because subjectivity and intentionality are a natural part of being a biological organism, and we manage to reduce the objective to the subjective which is what allows us to think and talk about objective things.

    hmmm.

    The reference to John Searle is probably to his famous (or infamous) “Chinese Room” argument. That argument claims to prove that you cannot get semantics from syntax. I see Searle’s argument as a complete failure. It proves only what he initially assumes. It does not prove what he claims it to prove. Nevertheless, I suspect that Searle’s conclusion is correct. Searle seems to have good intuitions but terrible answers.

    I’m not sure what it was that puzzled you. But I hope I have addressed some of it.

    OK, I hadn’t thought of the problem as being about semantics and syntax at all. I guess partly because I tend to think of minds as being possessed by non-linguistic species.

    But thanks!

  41. Thanks, keiths, too.

    Well, it does seem to me that what is missing from at least some of these conceptualisations of The Problem is the situating of brains in organisms – organisms, moreover, that move around their environment, catching things, avoiding things, and finding mates. In other words, the relationship of intention to action.

    My own view is that if we think of an intention as a planned action, we sidestep Searle’s silly room, and find ourselves with an organism with a perceptual system that parses the world into various classes of object (is that where semantics come in? Although I count molluscs in this) that require appropriate action, some of which are themselves organisms with perceptual systems that are also on the lookout for other objects that require appropriate action.

    And so subjective and objective are automatically now on the table.

  42. Lizzie: But it seems to me that with regard to mind, the really interesting difference is between things with brains and things without, a division that goes right down the middle of the living world (lots of pretty impressive living things, trees, for instance, are not usually assumed to have minds) and may eventually separate some human-made things from others too (robots, for example, from spoons).

    I don’t agree with that.

    In my opinion, you will find more intentionality in an amoeba or a tree than you will ever find in a computer. Sure, brains make a big difference, but even without brains, there’s a difference.

    With a standard computer, the only place that I can find intentionality is the system clock. In terms of “aboutness”, the clock activity is about time. In terms of intention, we could say that the clock intends to start its next clock cycle as soon as it has finished this one. In just about all other respects, a computer is carefully designed to suppress anything that might look like intentionality, and to act in a way that is more mechanistic than ordinary mechanical things.

  43. It is not clear to me how any discussion of intelligence can omit temperature dependence. The phenomena of hypothermia and hyperthermia are big hints about what is going on in the nervous systems of animals.

    The rate of cricket chirping as a function of temperature seems to confirm some very basic chemistry and physics of nervous systems. There is a very basic formula in physics and chemistry that tells us something about the probability of a transition to another state in a molecule or system of molecules.

    That probability is proportional to exp(- φ/kT), where φ is a potential barrier between states, k is Boltzmann’s constant, and T is the absolute temperature.
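
    To put a rough number on that (the barrier height here is purely illustrative): for a 0.5 eV barrier, warming from 10 °C to 25 °C nearly triples the transition rate, which is the kind of exponential temperature sensitivity that shows up in rates like cricket chirping.

    ```python
    import math

    k_eV = 8.617333e-5      # Boltzmann constant in eV/K

    def boltzmann_factor(phi_eV: float, T_kelvin: float) -> float:
        """Relative transition probability, exp(-phi / kT)."""
        return math.exp(-phi_eV / (k_eV * T_kelvin))

    phi = 0.5               # barrier height in eV (illustrative)
    T_cold = 273.15 + 10    # 10 degrees C
    T_warm = 273.15 + 25    # 25 degrees C

    ratio = boltzmann_factor(phi, T_warm) / boltzmann_factor(phi, T_cold)
    print(f"rate at 25 C is {ratio:.1f}x the rate at 10 C")
    # A 15-degree warming nearly triples the transition rate: the
    # same exponential sensitivity seen in cricket chirp rates and
    # in the temperature dependence of nervous-system processes.
    ```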

    The reason this is so important is because this factor shows up at all levels of complexity. Even the simplest systems, consisting of atoms of just one kind, have transport properties that are temperature dependent. Even more striking is the fact that even simple systems made of one kind of atom can exhibit remarkable properties once the system enters a given temperature range; for example, lead becomes a superconductor below 7.3 kelvin. And that is just an extremely simple system made up of one kind of atom.

    Moving up to more complex soft-matter systems placed within a narrow temperature range gets us into systems that have so many things going on that it is hard to separate them; they all interact and synchronize, producing some very remarkable behaviors. Lower the temperature just a little bit, and everything slows down and changes behavior.

    Temperature is just one hint about the physical phenomena taking place in nervous systems. Tiny amounts of some chemicals can radically disrupt processes in the nervous system. So can intense periodic stimuli such as light or sound or pressure.

    There are dozens of examples of stimuli that can confuse and disrupt the way a nervous system behaves; and such stimuli produce everything from things as harmless as illusions to things as dangerous as complete disorientation and an inability even to think or function.

    I think many in the physics and chemistry community – and, I suspect, also the biological community – get a little impatient with “philosophical” discussions about consciousness that avoid considerations of those real, measurable physical effects that point clearly to the physical nature of consciousness and intelligence as emergent phenomena in complex systems. Once the systems become complex enough to become hierarchical, there can easily be feedback from various levels to other levels.

  44. Lizzie, Neil,

    It sounds like both of you may be confusing “having intentionality” with “having intentions”.

    Let me stress that intentionality, as a philosophical term, refers strictly to ‘aboutness’, as in the case of a representation being ‘about’ the thing represented.

    Here’s the Free Dictionary’s second definition of ‘intentionality’:

    2. Philosophy The property of being about or directed toward a subject, as inherent in conscious states, beliefs, or creations of the mind, such as sentences or books.

  45. keiths:
    Lizzie, Neil,

    It sounds like both of you may be confusing “having intentionality” with “having intentions”.

    That’s why I didn’t use the word.

    Let me stress that intentionality, as a philosophical term, refers strictly to ‘aboutness’, as in the case of a representation being ‘about’ the thing represented.

    Here’s the Free Dictionary’s second definition of ‘intentionality’:

    Sure. But in that case are we restricting the discussion of minds only to the minds of creatures capable of representing things symbolically?

    That would seem rather arbitrary to me.

  46. Sure. But in that case are we restricting the discussion of minds only to the minds of creatures capable of representing things symbolically?

    I think you’re confusing two different levels of representation: the representation of reality by brain states and the conscious use of symbols to represent other things, as in writing.

    A fly is not capable of using symbols, but a fly brain is certainly capable of representing aspects of reality, such as the presence of a potential mate.

  47. Mike Elzinga,

    Hi Mike.
    I agree with your comment: “…measurable physical effects that point clearly to the physical nature of consciousness…”

    Temperature, pH, [Na], [K], [glucose] and so on…

    Then you can add in what happens to nerve conduction in the presence of a local anesthetic like lidocaine. If you block sodium channels, nerves don’t carry a signal. What about a high enough IV dose of sodium pentothal or propofol? If you give enough to make the EEG flat, the neurons are not sending or relaying signals. Inhalation anesthetics probably act by a different mechanism from the IV agents above, but when you give a high enough dose of diethyl ether (50 years ago) or the newer halogenated ethers the EEG will go nearly flat. There is no dreaming, no awareness of time, no out-of-body floating around, no astral projection. Consciousness is a product of the brain, no ghost.

    I always enjoy your posts.
    Thanks to everyone for making this place. Cheers.
