Philosophy of Mind: A Taxonomy

I consider the following to be a “work in progress,” and will make changes as others here contribute corrections and suggestions.

The so-called “mind-body problem”, as bequeathed to us by Descartes, has invited various solutions over the centuries.  In the classical version, the basic positions were dualism, materialism, and idealism — each of which has its sub-varieties.

What is meant by “mind”?  Well, various characteristically mental phenomena have been presented as candidates for what is essential to mindedness: rationality, intentionality, subjectivity, volition, or consciousness.  (That these don’t all overlap can be seen by asking, “are there unconscious mental states or processes?”, “what sorts of minds do non-rational animals have?”, “are there purely qualitative, non-intentional mental states, e.g. pains?”, and so on.)

On the “material” side of the dichotomy, the 17th-century picture was sketched in terms of little bits of matter (Locke’s “corpuscles,” and then “atoms”) that causally interacted according to exceptionless laws.  As the physicists’ bestiary grew in the 19th and 20th centuries, “matter” looked less and less Epicurean — suddenly there was energy, and space-time, and “dark matter” and quanta and fields and forces.  So today we talk of “physicalism” rather than “materialism,” where the ontology of the physicalist is pretty much “whatever our best physics tells us that everything else is made out of”.  Today that would be fermions, bosons, and space-time.

So here’s one way of describing the problem: what’s the relation between consciousness, volition, rationality, subjectivity, or intentionality (on the one hand) and bosons, fermions, and space-time (on the other)?

Dualism, or substance dualism, holds that mental and physical phenomena are just different kinds of basic entities in the overall metaphysics.  Neither is more real or more basic than the other.  (Descartes is classically regarded as the founder of dualism, with good reason — though his arguments for substance dualism are, I think, more subtle than most people realize.)   Property dualism holds that mental and physical phenomena are just different kinds of properties or features that something can have, with the thing itself being neither essentially mental nor essentially physical.  (I would assume that all property dualists must be neutral monists, but I haven’t seen that spelled out.)

Materialism holds that what is most real or basic is physical stuff, which means that we need to (somehow) account for mental phenomena in physical terms, usually in terms of brain-states.  A good materialist slogan is, “the mind is what the brain does.”  There are two important variations here: reductionism and eliminativism.

Reductive physicalism holds that we can (in principle) and will (in practice) explain everything mental in terms of physical stuff.  A reduction is successful when we can re-describe everything in the to-be-reduced vocabulary in terms of the more basic, what-is-being-reduced-to vocabulary.   For example, we can reduce lightning to electron flows by describing everything that’s going on with lightning in terms of how the charged particles are being exchanged.  Or we can reduce rainbows to refracted spectra by describing everything that’s going on with rainbows in terms of how visible light is refracted and reflected as it enters and exits airborne water droplets.   Now, it does seem that some mental phenomena can be explained in these sorts of terms — for example, how the brain processes sensory stimuli.  But it is far from clear that all mental phenomena can be thus explained.  The question of whether consciousness can be explained in physical terms is called “the hard problem” because we don’t even understand how it could be solved.  (A big question, too, is whether non-reductive physicalism is a plausible — or even coherent — position.)

Philosophers of science have noted that successful inter-theoretic reduction is extremely rare in the history of science.  Much more common is that a previous theory is simply eliminated, and we come to recognize that the putative entities posited by the old theory simply don’t exist and never did.  Examples: the four humors of medieval medicine; the luminiferous ether; phlogiston.  Eliminative materialism holds that it is at least possible that some mental phenomena will be eliminated as neuroscience advances.  The most well-known proponents of eliminative materialism — Paul and Patricia Churchland — argue that, in particular, what we call “propositional attitudes” — beliefs and desires — will be eliminated from our vocabulary as neuroscience advances.   (Note: the Churchlands are not eliminativists about consciousness or rationality — that’s a common misunderstanding of their view.)

Finally, there’s idealism, which holds that it’s mental phenomena which are really and ultimately real, and everything physical has to be explained in terms of what is mental.   (Here too there are “reductive idealism” and “eliminative idealism”.  I would consider Leibniz to be a reductive idealist and Berkeley to be an eliminative idealist, though Leibniz and Berkeley have really important differences.)   Generally speaking, I prefer to restrict the term “idealism” to Kant and the post-Kantian German Idealists, but the term is generally used to refer to any view in which the mental is what is ultimately or basically real, and the physical is not.

65 thoughts on “Philosophy of Mind: A Taxonomy”

  1. keiths: I think you’re confusing two different levels of representation: the representation of reality by brain states and the conscious use of symbols to represent other things, as in writing.

    A fly is not capable of using symbols, but a fly brain is certainly capable of representing aspects of reality, such as the presence of a potential mate.

    Ah. Well, that’s certainly one source of confusion. I don’t think myself that brain states “represent” reality. Or rather, I don’t think that’s a very fruitful way of thinking about the relationship between brain states and perception, and may turn out to be at the root of the controversy.

    I think that when a fly encounters a potential mate, it receives various signals from the environment (airborne molecules, reflected light, vibrations) that result in a cascade of brain states, that put the fly (not the brain) into a state of readiness for the actions involved in mating.

    In other words, the organism “represents” reality to itself not as a series of brain states, each “representing” a facet of reality, but as dynamically competing states of readiness for alternative courses of action, full of feedback loops, including states that would be induced were a given course of action to be executed, and new inputs resulting from executed action (i.e., more environmental sampling).

  2. Lizzie,

    By the way, Lizzie, you might be familiar with these papers:

    O’Regan JK, Noë A. (2001). A sensorimotor account of vision and visual consciousness.

    Clark, A. (2012). Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science

    Is that close to your view?

  3. That Clark article looks really good! I’ll read it as soon as I can!

    I’ve thought a bit about different notions of “representation”, particularly in the contexts of philosophy of cognitive science and philosophy of language. There are different kinds of representationalism and anti-representationalism, and many intermediate positions.

  4. Mike,

    It is not clear to me how any discussion of intelligence can omit temperature dependence.

    It’s not because temperature dependence isn’t important. It is, as are lots of other physical factors. It’s just that people take it for granted, because observation makes it obvious. Even a committed substance dualist will acknowledge that high fever causes delirium, for example (though they have a harder time than materialists in explaining why).

    Debates focus on areas of disagreement. The ability of temperature changes to disrupt cognition is not in dispute.

    Once the systems become complex enough to become hierarchical, there can easily be feedback from various levels to other levels.

    As a weak emergentist, I would argue that feedback from one hierarchical level to another can equivalently be viewed as feedback within either of those levels. In other words, the choice to view it as feedback between levels is a matter of descriptive convenience, not of ontology.

    It sounds like you’re leaning toward the idea that feedback between levels is real and irreducible.

  5. I encountered one creationist online several years ago who insisted that, due to an accident or illness (I forget which), he was unable to communicate coherently, although to himself he was quite lucid and could recall what others said to him without distortion.

    He used this as an argument for the radio mind that remains unaffected by physical trauma.

    I didn’t find his argument particularly compelling. Oliver Sacks has a bucket of stories about people who can communicate clearly, but who report perceptions that are affected by brain injury.

    It is sort of interesting to contemplate a disjunction between what someone says and their inner experience. I’m pretty sure physiologists already consider this.

  6. Yeah, the ‘radio mind’ model falls apart pretty quickly. It can’t explain Alzheimer’s, for instance, or why alcohol affects judgment as well as perception.

  7. Lizzie,

    I don’t think myself that brain states “represent” reality. Or rather, I don’t think that’s a very fruitful way of thinking about the relationship between brain states and perception, and may turn out to be at the root of the controversy.

    I think that when a fly encounters a potential mate, it receives various signals from the environment (airborne molecules, reflected light, vibrations) that result in a cascade of brain states, that put the fly (not the brain) into a state of readiness for the actions involved in mating.

    Another example will show why that idea doesn’t work in general. Suppose we run an experiment in which a child is seated at a table, on which lies an inverted bowl.

    a) the child sees the bowl;

    b) out of curiosity, the child picks up the bowl and finds nothing underneath;

    c) the experimenter comes in and, in full view of the child, places an interesting toy under the bowl, then leaves;

    d) the child picks up the bowl and begins playing with the toy.

    The child’s eyes receive the same input between b) and c) as they do between c) and d) — that is, the sight of an inverted bowl on the table. The difference is that after c), the child’s brain states represent the fact (or the belief, at least) that the toy is under the bowl.

    And it’s not merely that the experimenter’s action has put the child into a state where he or she is inclined to look under inverted bowls. The child will look under that inverted bowl, and if you ask why, will express his or her belief that the toy is under it.

    Given these facts, it’s hard for me to see why you would deny that representation is happening, or why you would think that representation isn’t a useful concept in describing how brains work.

  8. keiths:

    As a weak emergentist, I would argue that feedback from one hierarchical level to another can equivalently be viewed as feedback within either of those levels. In other words, the choice to view it as feedback between levels is a matter of descriptive convenience, not of ontology.

    It sounds like you’re leaning toward the idea that feedback between levels is real and irreducible.

    The notions of “strong emergence” and “weak emergence,” as they appear to be used here, are not part of the vocabulary of physics and chemistry as far as I know; I have never heard or seen these terms used in the discussions of emergent phenomena in physics.

    The hierarchical nature of emergent properties is better understood in physics. The sudden onset of a coherent set of properties that emerges from the underlying structures of a system, or that arises because of changes in energy and matter flow, is studied routinely in condensed matter physics.

    What I meant by “feedback between hierarchies” is that emergent phenomena due to complexity can suddenly affect what patterns and organization take place in the simpler parts of a complex system.

    In trying to describe such phenomena to laypersons or in a forum such as this, I always try to find the simplest systems I can think of that illustrate the ideas. The same phenomena that occur in simple systems just occur much more dramatically in more complex systems.

    So a simple example of emergent complexity affecting underlying simpler phenomena could be the emergence of a logjam in a river. The beginnings of the formation of the logjam change the flow of water such that continued formation of the logjam becomes more rapid. This is an example of positive feedback from a higher-level emergent phenomenon onto the underlying level that was responsible for the original flow pattern. All it takes is a small perturbation to start the logjam, and the feedback cascades the process from there. (A toy simulation of this kind of positive feedback appears at the end of this comment.)

    One can also come up with examples of negative feedback between levels that tend to stabilize a system that is developing.

    Even more interesting are examples where systems develop feedback that oscillates between positive and negative. In such systems, periodic or pulsating behavior emerges that causes the emergence of other patterns in the simpler substructure of a system. There are chemical systems that behave like this within certain temperature ranges.

    Density waves propagating through an evolving structure become possible only at a certain level of complexity, but that emergence has huge effects on the evolution of the underlying structure. An example is star formation in galaxies. Density waves become possible at a given level of complexity, but the formation of density waves leads to more rapid condensation and star formation, which, in turn, affects the density waves.

    The complex neural systems in animals are an ideal environment for the emergence of hierarchical phenomena. The recursive nature of memory appears to have something to do with the emergence of “free will.”

    Internal memories of patterns learned from environmental input become nearly equal to current external input in the formation of future behaviors and memories. There is enough complexity and contingency in the environment of a sentient animal that such hierarchies of memory structure become the means by which behavioral responses to the environment can be “evened out” or made “more rational,” thereby increasing the probability of survival.

    Such rational choices depend on an even higher level of complexity that allows the comparison of memories with current input, weighing their relative “merits,” and making decisions about which are the ones to which a response is more appropriate.

    Mechanical behaviors that are learned responses to the environment may work most of the time; but eventually such mechanical responses may turn out to be “wrong” in a particular instance, thereby leading to the death of the organism.
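
    Here is a toy sketch of the logjam example as a little simulation; the numbers and the catch-probability rule are invented purely to show the shape of the effect, not drawn from any hydrodynamic model. The size of the jam (the higher-level structure) feeds back into the probability that each passing log is caught (the lower-level flow dynamics), so growth is slow at first and then cascades.

    import random

    def simulate_logjam(n_logs=500, base_catch_prob=0.01, feedback=0.02, seed=1):
        """Toy positive-feedback model: the bigger the jam, the more it obstructs
        the flow, so the more likely each passing log is to get caught."""
        random.seed(seed)
        jam_size = 0
        history = []
        for _ in range(n_logs):
            catch_prob = min(1.0, base_catch_prob + feedback * jam_size)
            if random.random() < catch_prob:
                jam_size += 1
            history.append(jam_size)
        return history

    # Sampling the jam size every 100 logs shows the cascade: a long slow start,
    # then rapid growth once a small jam has formed.
    history = simulate_logjam()
    print([history[i] for i in range(0, 500, 100)])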

  9. I haven’t looked at the O’Regan/Noë paper, but the Clark paper is definitely not close to Lizzie’s view.

    Clark is discussing brains as “prediction machines” that model reality, evolve the model forward in time in order to make predictions, compare the predictions to sensory input, and adapt the model to reduce the discrepancy between them.

    Representation in the form of modeling is at the heart of the scheme.
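
    To make that predict-compare-adapt loop concrete, here is a minimal sketch of that general kind of scheme; it is not Clark’s actual model, and the one-number “model”, the learning rate, and the noisy signal are invented for illustration. The internal estimate is used to predict the next input, the prediction error is measured, and the estimate is nudged to reduce that error.

    import random

    def run_prediction_loop(observations, learning_rate=0.1):
        """Track a one-dimensional signal by repeatedly predicting it, measuring
        the prediction error, and adapting the internal estimate to shrink it."""
        estimate = 0.0  # the system's internal "model" of the world
        errors = []
        for obs in observations:
            prediction = estimate              # the model's guess about the next input
            error = obs - prediction           # discrepancy between prediction and input
            estimate += learning_rate * error  # adapt the model to reduce the discrepancy
            errors.append(abs(error))
        return estimate, errors

    # A noisy, roughly constant signal: prediction errors shrink as the estimate converges.
    random.seed(0)
    data = [5.0 + random.gauss(0, 0.5) for _ in range(200)]
    estimate, errors = run_prediction_loop(data)
    print(round(estimate, 2), round(sum(errors[:20]) / 20, 2), round(sum(errors[-20:]) / 20, 2))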

  10. keiths,

    Well, of course the mind “represents” the world in a very general sense – how could we function otherwise? Similarly, a genome “represents” the current and past environment in which the organism evolved. What it apparently doesn’t do is any sort of static symbolic mapping, akin to a verbal or visual representation that we use for our, er… external representations.

    Although in the Clark paper there is a comparison to sophisticated compression schemes used to encode multimedia: those are unambiguously “representations.”

  11. SophistiCat,

    You asked if the Clark paper was close to Lizzie’s view. My point is that it can’t be.

    The paper is about the brain’s use of predictive modeling in the process of perception. Modeling is the epitome of representation, so Clark’s paper is all about representation. Lizzie’s view is close to the opposite:

    I don’t think myself that brain states “represent” reality. Or rather, I don’t think that’s a very fruitful way of thinking about the relationship between brain states and perception, and may turn out to be at the root of the [intentionality] controversy.

    She’s right that if there’s no representation, there’s no intentionality, and if there’s no intentionality, there’s no debate over whether it’s ‘original’.

    However, I think she’s throwing the baby out with the bathwater. Many of the neuroscientific advances of the last few decades have had representation at their core, from Hubel and Wiesel onwards. It has been a tremendously useful concept, and Clark’s paper builds on that.

  12. I’m confused as to what is meant by representation. Are we talking about some sort of code mapping?

  13. Code mappings are instances of representation, but the idea is much more general than that.

    A representation is really just anything that ‘stands for’ something else. A portrait is a representation of a person. This juxtaposition of letters — ‘cat’ — represents a particular kind of domestic animal. It also represents the sound of the word. The ‘occupied’ light on an airplane lavatory represents the fact that someone is inside.

    In my inverted bowl example, some aspect of the child’s brain state represents the presence of the toy under the bowl.

  14. keiths:
    Lizzie,

    Another example will show why that idea doesn’t work in general. Suppose we run an experiment in which a child is seated at a table, on which lies an inverted bowl.

    a) the child sees the bowl;

    b) out of curiosity, the child picks up the bowl and finds nothing underneath;

    c) the experimenter comes in and, in full view of the child, places an interesting toy under the bowl, then leaves;

    d) the child picks up the bowl and begins playing with the toy.

    The child’s eyes receive the same input between b) and c) as they do between c) and d) — that is, the sight of an inverted bowl on the table. The difference is that after c), the child’s brain states represent the fact (or the belief, at least) that the toy is under the bowl.

    And it’s not merely that the experimenter’s action has put the child into a state where he or she is inclined to look under inverted bowls. The child will look under that inverted bowl, and if you ask why, will express his or her belief that the toy is under it.

    Given these facts, it’s hard for me to see why you would deny that representation is happening, or why you would think that representation isn’t a useful concept in describing how brains work.

    Because the word “representation” suggests that one agent is presenting a representation of something to another agent (or even reflexively to the same agent). Who are the agents, and what is the representation in your example of the child with the bowl?

  15. Agents need not be involved. The stream of packets flowing from YouTube’s server to your computer represents the video information that will be displayed on your monitor. The grooves on a record represent the sound of the music being played.

    In the case of the child and the bowl, the representation is whatever it is about the child’s brain state that encodes the child’s belief that a toy is under the bowl.
