There’s a nice little discussion going on at Uncommon Descent (see here) about whether concepts are consistent with naturalism (broadly conceived). Here I want to say a bit about which theories of concepts seem to me most promising, and to what extent (if any) they are compatible with naturalism.
The dominant position in philosophy of language treats concepts as representations: I have a concept of *dog* insofar as I am able to correctly represent all dogs as dogs. It is crucial that concepts have the right kind of generality — that I am able to classify all particular dogs as exemplifying the same general property — in order to properly credit me with having the concept. (If I only applied the term “dog” to my dog, it would be right to say that I don’t really have the concept *dog*.)
On the representationalist paradigm, rational thought has a bottom-up structure: terms are applied to particulars, terms are combined to form judgments about particulars, and judgments are combined to form arguments, explanations, and other forms of reasoning.
The contrary position — a fairly minor one, but with prominent and forceful advocates — is an inferentialist semantics: a concept is a node in an inferential nexus. My grasp of the concept *dog* consists in two distinct abilities: (1) being able to make correct inferences: “that is a dog, therefore it is not a cat; that is a dog, therefore it is an animal”, and so on, and (2) being able to correctly apply the concept to sensed particulars. But the conditions of correct application aren’t constitutive of the very meaning of *dog*: it is the inferential role that constitutes the sense of the concept.
One important feature of inferential semantics is that it goes hand-in-hand with normative pragmatics: the criteria of correct inference (and correct application, for empirical concepts) are intersubjective or social. What can tell me whether I’m using the term “dog” correctly? Only my fellow language-users! The layout of reality can certainly constrain how I apply concepts, but the layout of reality cannot itself tell me whether my concepts are correctly applied to it. (Whether this kind of pragmatism is inconsistent with objectivity is hotly contested. My own view is that it is inconsistent, but this is controversial.)
The inferentialist view of concepts has a number of significant and wide-ranging implications. One is that concepts turn out to be “non-relational”, in the following sense: if concepts are nodes in an inferential nexus, then there is no mapping from the concept to the things it picks out. There is no direct relation from words to the world. The inferential nexus as a whole is put into play by being used to do things with objects (properties, relations, etc.).
Another important implication is that the familiar discourse of metaphysics — particulars, universals, generals, etc. — all gets ‘linguistified’ by being treated as metalinguistic sortals. They make explicit what we are doing when we classify inferences as good or bad. The categorical structure of language does not represent the categorical structure of reality, if reality has a categorical structure at all.
There is a further question lurking in the background here concerning non-linguistic concepts and non-discursive thoughts. What I’ve sketched here is a theory of language, and it is an open question what kinds of concepts are found in non-language-using animals, and also what role non-linguistic concepts play in our own rational cognition. It is almost certainly the case that focusing solely on language will not give us a firm grip on what it is for something to count as a mind, and that we should build up to an account of a rational or sapient mind from an account of non-sapient, merely sentient minds. (For more on this, see “Thinking about the mind: an anti-linguistic turn”.)
Eventually, I suspect, we will need to abandon the belief that all concepts are of one kind. Instead we will need at least two different kinds of concepts: “simple concepts,” at work in the thoughts, beliefs, and desires of animals and infants, and “complex concepts,” at work in language.
Finally, as this bears on “naturalism”: I am not interested in defending naturalism. (Recently I’ve begun to have grave doubts as to whether naturalism is defensible.) I’m interested in figuring out the best account of concepts, and on the account presented here, “simple concepts” depend upon the properly functioning brains of living animals, and “complex concepts” depend both upon the properly functioning brains of living animals and upon those animals’ membership in a linguistic community. So I don’t think there is any route from our best theory of concepts to any ontological commitments to abstract objects or immaterial minds.