Philosophy In An Age of Cognitive Science

Since the publication of The Embodied Mind (1991), the cognitive sciences have been turning away from the mind-as-program analogy that dominated early cognitivism and towards a conception of cognitive functioning as embodied in a living organism and embedded in an environment. In the past few years, important contributions to embodied-embedded cognitive science can be found in Noe (Action in Perception), Chemero (Radical Embodied Cognitive Science), Thompson (Mind in Life), Clark (Being There and Surfing Uncertainty), and Wheeler (Reconstructing the Cognitive World).

[A note on terminology: the new cognitive science was initially called “enactivism” because of how the cognitive functions of an organism enact or call forth its world-for-it. This led to the rise of “4E” cognitive science — cognition as extended, embedded, embodied, and enacted. At present the debate hinges on whether embodied-embedded cognitive science should dispense with the concept of representation in explaining cognitive function. Wheeler and Clark drop “enaction” because they retain an explanatory role for representation, even though their representations are action-oriented and context-sensitive.]

The deeper philosophical background to “the new cognitive sciences” includes Hubert Dreyfus, Merleau-Ponty, Heidegger, Dewey, Wittgenstein, and J. J. Gibson (who was taught by one of William James’s students). It is a striking fact that embodied-embedded cognitive science promises to put an anti-Cartesian, anti-Kantian critique of intellectualism on a scientific (empirical and naturalistic) basis. Embodied-embedded cognitive science is a fruitful place where contemporary cognitive science meets with the best (in my view) of 19th- and 20th-century Eurocentric philosophy.

That’s important for anyone who thinks, with Peirce, that science has some uniquely epistemic position because scientific practices allow the world to get a vote in what we say about it (Peirce contra Rorty).

The philosophical implications of embodied-embedded cognitive science are quite fascinating and complicated. Here’s one I’ve been thinking about the past few days: embodied-embedded cognitive science can strengthen Kant’s critique of both rationalist metaphysics and empiricist epistemology.

Kant argues that objectively valid judgments (statements that can have a truth-value in some but not all possible worlds) require that concepts (rules of possible judgment) be combined with items in a spatio-temporal framework. But Kant was never able to explain how this “combination” happened; as a result, subsequent philosophers were tempted either to reduce concepts to intuitions (as in Mill’s psychologistic treatment of logic) or to reduce intuitions to concepts (as in the absolute idealism of Fichte and Hegel). As C. I. Lewis and Sellars rightly saw, however, neither Mill nor Hegel could be right. Receptivity and spontaneity are both required, and they must somehow be combined (at least to some degree). But how?

Andy Clark’s “predictive processing” model of cognition (in Surfing Uncertainty) offers a promising option. According to Clark, we should not think of the senses as passively transmitting information to the brain; rather, the brain is constantly signaling to the senses what to expect from the play of energies across receptors (including not only exteroceptive but also interoceptive and proprioceptive receptors). The task of the senses is to convey prediction errors — to indicate how off the predictions were so that the predictions can be updated.

And this bidirectional flow of information takes place across the different levels of neuronal organization — there’s top-down and sideways propagation from the ‘higher’ neuronal levels and also bottom-up propagation from the ‘lower’ neuronal levels (including, most distally, the receptors themselves).
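To make the bidirectional picture concrete, here is a deliberately toy sketch: a two-level chain in which each level predicts the activity of the level below it and is corrected by the prediction error flowing back up. The linear update rule, the learning rate, and the function name are all invented for illustration; they stand in for the far richer generative models Clark describes, not for anything in the book itself.

```python
# Toy two-level predictive-coding loop (illustrative only).
# The higher level predicts the lower level's activity; the lower
# level predicts the sensory input; only prediction errors flow upward.
# Each update is a gradient-descent step on total squared prediction error.

def predictive_coding(sensory_stream, lr=0.1):
    mu_low = 0.0    # lower-level expectation about the senses
    mu_high = 0.0   # higher-level expectation about the lower level
    for s in sensory_stream:
        err_low = s - mu_low          # bottom-up error at the receptors
        err_high = mu_low - mu_high   # error between the two internal levels
        # The lower level is pulled both by the data below and the prediction above:
        mu_low += lr * (err_low - err_high)
        # The higher level updates only on the error it receives from below:
        mu_high += lr * err_high
    return mu_low, mu_high

# On a steady sensory signal, both levels should settle near the signal's
# value: top-down predictions come to cancel the bottom-up errors.
low, high = predictive_coding([5.0] * 500)
```

The point of the caricature is only the direction of the traffic: predictions flow down, and what flows up is not the signal itself but the residue the predictions failed to anticipate.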

Now, here’s the key move: the bidirectional multilevel hierarchy of neuronal assemblies matches (but also replaces) the Kantian distinction between the understanding (concepts) and the sensibility (intuitions). And it explains the one major thing that Kant couldn’t explain: how concepts and intuitions can be combined in judgment. They are combinable in judgment (at the personal level) because they have as their neurocomputational correlates different directions of signal propagation (at the subpersonal level).

But if embodied-embedded cognitive science allows us to see what was right in Kant’s high-altitude sketch of our cognitive capacities, and also allows us to vindicate that sketch in terms of empirical, naturalistic science, it thereby strengthens both Kant’s critique of empiricism (because top-down signal propagation is necessary for sense receptors to extract any usable information about causal structure from energetic flux) and his critique of rationalism (because the proper functioning of top-down signal propagation is geared towards successful action, and our only source of information about whether our predictions are correct or not is the bottom-up prediction errors).

And because we can understand, now, both spontaneity and receptivity in neurocomputational terms as two directions of information flow across a multilevel hierarchy, we can see that Kant, C. I. Lewis, and Sellars were correct to insist on a distinction between spontaneity and receptivity, but wrong about how to understand that distinction — and we can also see that Hegel and neo-Hegelians like Brandom and McDowell are wrong to deny that distinction.


324 thoughts on “Philosophy In An Age of Cognitive Science”

  1. Evolved systems are just different from human-designed systems. That’s a clue that evolved systems, including brains and genetic codes, are not designed.

    Main clues:
    1. We don’t understand how they work (another way of saying we can’t design them from scratch and can’t predict from first principles how they will react to modifications).
    2. We can’t design them. See clue number one.
    3. They can’t be emulated (yet) outside their native substrate. The substrate itself seems to be an integral aspect of function.

    I think we will have self-driving cars in a couple of decades. That means to me that we will have mastered a small percentage of insect behavior.

  2. Bruce,

    But could you create your private names without
    (1) being a member of a species which has evolved with skills for social living which include communicating with others according to agreed norms…

    That ability, provided by natural selection, is not itself a norm.

    …and
    (2) having learned a language by being a member of a linguistic community which trains its members in the norms of that language?

    If intentionality depended on having an example to follow, it never could have gotten off the ground. Someone had to go first!

    Of course, that’s related to Wittgenstein’s PLA.

    The private language argument is about intrinsically private languages, incapable of being understood by others. You now know that ‘Scroopy’ refers to my cat, so my usage is not private in Wittgenstein’s sense.

    Here’s the man himself:

    243. A human being can encourage himself, give himself orders, obey, blame and punish himself; he can ask himself a question and answer it. We could even imagine human beings who spoke only in monologue; who accompanied their activities by talking to themselves. —An explorer who watched them and listened to their talk might succeed in translating their language into ours. (This would enable him to predict these people’s actions correctly, for he also hears them making resolutions and decisions.)

    But could we also imagine a language in which a person could write down or give vocal expression to his inner experiences—his feelings, moods, and the rest—for his private use?——Well, can’t we do so in our ordinary language?—But that is not what I mean. The individual words of this language are to refer to what can only be known to the person speaking; to his immediate private sensations. So another person cannot understand the language.

    keiths:

    If I envision a boulder tumbling down a hillside, my thought has intentionality, but what norm(s) does it depend on?

    Bruce:

    As I understand it, according to philosophers, the very concept of intentionality as it applies specifically to mental representation involves norms. So just by that definition, your thought does depend on norms. Now your example would seem to also involve a fictional boulder, as opposed to seeing a real one, so that combines the two puzzles of non-existent contents and the ability to misrepresent.

    I think there is an analogy to rule obeying versus rule following.

    Planets obey the rules of physics when moving in their orbits. But people follow rules in using physics to calculate the orbits. Part of the difference between obeying and following a rule is that planets cannot make mistakes but people can.

    But I’m not calculating anything. I’m just imagining a boulder rolling downhill. What specific social norms does this mental image depend on?

    So if you want to have a causal (syntactic) explanation of rule following, you have to do more work than you have to do for providing a causal explanation of rule obeying. Namely, you have to explain how mistakes happen. Mistakes involve norms for correctness.

    Mistakes don’t exist at the syntactic level, because meaning and rule-following don’t exist at that level. It’s just physical systems evolving according to the laws of physics.

    At the “as if” semantic level, mistakes are easy to explain. Since the underlying engine is purely syntactic, it isn’t surprising that its behavior diverges from that of an ideal, true semantic engine.

  3. petrushka:
    Evolved systems are just different from human-designed systems. That’s a clue that evolved systems, including brains and genetic codes, are not designed.

    Main clues:
    1. We don’t understand how they work (another way of saying we can’t design them from scratch and can’t predict from first principles how they will react to modifications).
    2. We can’t design them. See clue number one.
    3. They can’t be emulated (yet) outside their native substrate. The substrate itself seems to be an integral aspect of function.

    I think we will have self-driving cars in a couple of decades. That means to me that we will have mastered a small percentage of insect behavior.

    I did not say we’d be able to design brains or simulate them, only that we could read them via their neural patterns, and that only after having spent some time reading one and relating the readings to behavior in very controlled circumstances in order to learn about its individuality.

    On the other hand, forever is a long time. If we are around for that long, or even just 1000 more years, who knows what we will be able to do. Any opinion on that, even an informed one, is idle chit chat. Not that I am criticizing idle chit chat, especially if it lets me practice my philosophy, such as it is.

    I see you already knew about the mathbabe article.

  4. KN,

    I’ve been absent from this conversation as the week’s teaching and writing wore away at my time, but I see that BruceS has already said most of the things I would have said, anyway.

    Bruce passed on one of my key questions, which was:

    Original intentionality is intrinsic intentionality, and intrinsic intentionality involves intrinsic meaning. Neurons operate syntactically, so their firings lack intrinsic meaning. Thus the operation of the brain and nervous system can be described without reference to meanings.

    If so, then what causal role does meaning play? Where does semantics enter the causal picture?

  5. petrushka,

    You’re making a huge and unjustified leap from “our current knowledge of the brain is poor” to “there is nothing in a brain that could be successfully downloaded or uploaded except by the usual means of speaking, hearing, seeing, etc.”

    The latter is a categorical statement about in-principle possibility. What justifies it?

  6. walto: Cool.

    I’ll tell you something else you could consider doing. For my birthday, earlier this week, in celebration/recognition of a “late-life crisis” I got a tattoo(!) The artist told me I was the third college professor he’d inked in a week.

    Funny you should mention that, because with all the reading about predictive processing I’ve been doing, I was thinking of having Bayes theorem tattooed on my posterior. However, it is not a high priority for me, so not much likelihood for now.

  7. keiths:
    Since the underlying engine is purely syntactic, it isn’t surprising that its behavior diverges from that of an ideal, true semantic engine.

    Thanks for the reply Keith. I don’t have anything more to add to what I’ve said and the details in the articles I’ve linked.

    The article KN mentioned on animal intentionality and error is available for download through JSTOR if you have access (my local library does give such access), so I think I will spend some time with that at some point.

    ETA: appealing to an “ideal” semantic engine (which we define with our norms) seems to be circular if you are trying to provide a causal explanation without hidden use of norms. But I have not thought about it in detail.

  8. petrushka:

    Bruce: Two words that might make you reconsider: neural CODE

    Petrushka: Not even close to making me reconsider.

    By the way, my comment about code was supposed to be ironically humorous, given all the traffic Mung’s posts have generated about that other so-called code.

    I’ve noticed Moran employs that type of humor at times as well. Probably another one of those Canadian bad habits.

    And when I say “bad”, I mean “Un-American”, of course.

  9. Bruce,

    The article KN mentioned on animal intentionality and error is available for download through JSTOR if you have access (my local library does give such access), so I think I will spend some time with that at some point.

    Those who don’t have library access can sign up for a free MyJSTOR account which gives access to three free articles every two weeks.

    ETA: Beisecker has a PDF of the paper on his UNLV webpage.

    ETA: appealing to an “ideal” semantic engine (which we define with our norms) seems to be circular.

    I’m arguing that such an engine can’t exist, rather than appealing to it.

  10. keiths: Original intentionality is intrinsic intentionality, and intrinsic intentionality involves intrinsic meaning. Neurons operate syntactically, so their firings lack intrinsic meaning. Thus the operation of the brain and nervous system can be described without reference to meanings.

    If so, then what causal role does meaning play? Where does semantics enter the causal picture?

    We’re getting closer to discovering the point of our disagreement now!

    On my view, semantics does not “enter the causal picture”. Meaning or intentionality are not elements of the scientific image. They are elements of the manifest image. Semantical concepts — meaning, reference, ‘aboutness’ — are ‘at home’ in describing and interpreting what we do and what we say. They are central to how we experience ourselves, each other, and many ‘higher’ animals.

    When we take up an objective or third-person standpoint in natural (and social?) science, we step outside of the hermeneutic circle through which meanings are attributed and evaluated. We are then playing a different game, and (I think) one with a better grip on objective reality.

    We’re not going to find the manifest image concepts lying in wait when we do neuroscience or physics. But what we can do, with both good neuroscience and good phenomenology, is shrink the descriptive gap between the “levels” or “images” of reality-as-experienced. That’s why I said, in the OP, that I want to find subpersonal neurocomputational correlates of concepts grounded in transcendental reflection at the person-level. We’re not going to find meanings in the brain. That’s not the point.

  11. KN,

    On my view, semantics does not “enter the causal picture”. Meaning or intentionality are not elements of the scientific image. They are elements of the manifest image.

    Causality is assuredly a part of the manifest image, so how can you claim that semantics doesn’t enter the causal picture?

  12. Kantian Naturalist:

    BruceS, what was that paper on mental representations and fictionalism? I missed that. Sounds really interesting.

    It’s Sprevak (2013), which Clark references in his brief discussion of whether brains use representations. You can find it at the top of this page.

    One question I’m still troubled by is whether computational neuroscience rests on a misguided metaphor. Churchland, Clark, and a few others seem happy to say that what brains do is compute. But one might worry that brains aren’t computers any more than tornadoes are. Just because we can build a simulation of both doesn’t mean that either is a computer.

    Piccinini confronts this issue by defining concrete computation to involve
    – a mechanism (in the Bechtel sense) which
    – follows rules to manipulate
    – vehicles which possess sufficient degrees of freedom so that
    – the rules can be sensitive only to differences between vehicles along certain degrees of variation, and the rules can hence be
    – medium independent by being abstracted from any specific medium.

    So you need a mechanism which follows rules (meaning the mechanism must be capable of malfunctioning) to manipulate vehicles in a manner which can be abstracted from the physical realization.

    I would surmise that there’s some neurophysiological difference between human brains and chimp brains that correlates with our ability to imitate and cooperate. But I have no idea how to go about testing for it. No doubt there’s already a large body of research on this question.

    Based on a Coursera course I watched, I believe neuroeconomics includes this type of research; that is, looking at animal and human brains while their owners engage in various cooperative games.

  13. keiths:
    Bruce,

    I’m arguing that such an engine can’t exist, rather than appealing to it.

    I understood the linked sentence “Since the underlying engine is purely syntactic, it isn’t surprising that its behavior diverges from that of an ideal, true semantic engine.” as an appeal to our norms in order to define the meaning of “diverges from an ideal”.

    How do we know what an ideal semantic engine is and when a syntactic engine diverges from it without norms?

    Sorry if I misunderstood your point, (ETA) which I took to be an explanation of how a syntactic engine could be seen to make errors without appealing to norms.

  14. Kantian Naturalist: On my view, semantics does not “enter the causal picture”. Meaning or intentionality are not elements of the scientific image. They are elements of the manifest image.

    Could you relate that to your views on biosemiosis?

  15. If we’re operating with the ordinary-language explication of everyday experience, the relation between semantics and causation is not mysterious.

    A: “Why didn’t you want to talk with her?”
    B: “Because I’m still angry with her from our last conversation.”

    Here’s a case where the propositional contents, in their normative contexts, explain why someone does what they do. B’s beliefs and desires cause him to act as he does, and his beliefs have semantic content. We understand his actions in light of the beliefs that cause him to act as he does.

    When we turn to a neuroscientific explanation of B’s actions, we will not find beliefs or desires inside his brain, but we might find complex dynamics of neuronal activity that we philosophical cognitive scientists can correlate with the beliefs and desires that we attribute to B as a person.

  16. KN,

    B’s beliefs and desires cause him to act as he does, and his beliefs have semantic content. We understand his actions in light of the beliefs that cause him to act as he does.

    Which contradicts your earlier assertion that

    On my view, semantics does not “enter the causal picture”.

    KN:

    When we turn to a neuroscientific explanation of B’s actions, we will not find beliefs or desires inside his brain, but we might find complex dynamics of neuronal activity that we philosophical cognitive scientists can correlate with the beliefs and desires that we attribute to B as a person.

    The neuronal activity proceeds purely syntactically, but the system is arranged such that its purely syntactic evolution approximates what you would expect if it were semantic — that is, if it were actually sensitive to meanings.

    It’s “as if” intentionality.

  17. keiths: Bruce,

    But could you create your private names without
    (1) being a member of a species which has evolved with skills for social living which include communicating with others according to agreed norms…

    That ability, provided by natural selection, is not itself a norm.

    …and
    (2) having learned a language by being a member of a linguistic community which trains its members in the norms of that language?

    If intentionality depended on having an example to follow, it never could have gotten off the ground. Someone had to go first!

    Of course, that’s related to Wittgenstein’s PLA.

    The private language argument is about intrinsically private languages, incapable of being understood by others. You now know that ‘Scroopy’ refers to my cat, so my usage is not private in Wittgenstein’s sense.

    Here’s the man himself:

    243. A human being can encourage himself, give himself orders, obey, blame and punish himself; he can ask himself a question and answer it. We could even imagine human beings who spoke only in monologue; who accompanied their activities by talking to themselves. —An explorer who watched them and listened to their talk might succeed in translating their language into ours. (This would enable him to predict these people’s actions correctly, for he also hears them making resolutions and decisions.)

    But could we also imagine a language in which a person could write down or give vocal expression to his inner experiences—his feelings, moods, and the rest—for his private use?——Well, can’t we do so in our ordinary language?—But that is not what I mean. The individual words of this language are to refer to what can only be known to the person speaking; to his immediate private sensations. So another person cannot understand the language.

    keiths:

    If I envision a boulder tumbling down a hillside, my thought has intentionality, but what norm(s) does it depend on?

    Bruce:

    As I understand it, according to philosophers, the very concept of intentionality as it applies specifically to mental representation involves norms. So just by that definition, your thought does depend on norms. Now your example would seem to also involve a fictional boulder, as opposed to seeing a real one, so that combines the two puzzles of non-existent contents and the ability to misrepresent.

    I think there is an analogy to rule obeying versus rule following.

    Planets obey the rules of physics when moving in their orbits. But people follow rules in using physics to calculate the orbits. Part of the difference between obeying and following a rule is that planets cannot make mistakes but people can.

    But I’m not calculating anything. I’m just imagining a boulder rolling downhill. What specific social norms does this mental image depend on?

    So if you want to have a causal (syntactic) explanation of rule following, you have to do more work than you have to do for providing a causal explanation of rule obeying. Namely, you have to explain how mistakes happen. Mistakes involve norms for correctness.

    Mistakes don’t exist at the syntactic level, because meaning and rule-following don’t exist at that level. It’s just physical systems evolving according to the laws of physics.

    At the “as if” semantic level, mistakes are easy to explain. Since the underlying engine is purely syntactic, it isn’t surprising that its behavior diverges from that of an ideal, true semantic engine.

    What a great dialogue you two have going here. Thanks!

  18. BruceS: By the way, my comment about code was supposed to be ironically humorous, given all the traffic Mung’s posts have generated about that other so-called code.

    It was amusing, while it lasted, watching you pretend you weren’t a code denialist.

    I don’t suppose we can say it was just pure coincidence:

    Spikes: Exploring the Neural Code

    But there’s no code there. Not really.

  19. Mung: It was amusing, while it lasted, watching you pretend you weren’t a code denialist.

    Glad I was able to keep you amused.

    But I am not interested in responding to such baiting posts, other than to say you have not understood my posts if you think that is my position.

    No doubt part of the reason for that is my Canadian sense of irony and my refusal to use emoticons to give clues as to when I am trying to be ironic, which I admit probably leaves me open to charges of obscurity.

    But internet forums are partly for fun, too, so, so be it.

  20. Mung:

    Spikes: Exploring the Neural Code

    But there’s no code there. Not really.

    I did want to point out that, if you take the neural code as real, then it provides a clear counter-example to anyone who claims real codes cannot be created by purely natural processes. (I made this point in a very early post to one of your threads).

    Note that the coding in any individual brain will be unique and will reflect that brain’s development and interaction with its body and the world.

  21. BruceS: Now I am having trouble deciding what is ironic!

    I wasn’t being ironic. A lot of interesting stuff on this thread. I particularly liked the stuff on rule following, obeying, mistakes, and the causal world.

  22. walto: I wasn’t being ironic. A lot of interesting stuff on this thread. I particularly liked the stuff on rule following, obeying, mistakes, and the causal world.

    It’s probably just me, but I sometimes feel Keith and I are talking at cross purposes, hence I wondered if you were using “dialog” ironically.

    Specifically, when I wrote about why some philosophers think social norms are needed to create the type of private language Keith outlined in a previous post, I linked an IEP article which discussed both Wittgenstein’s PLA and rule following, and then how philosophers have taken those ideas as a starting point to discuss the possibility of private languages.

    So I did not understand why Keith responded to that, as well as the two points in my post on evolution and language learning, mainly by a long quote from Wittgenstein on the PLA as an argument about private sensations.

    Probably my fault for mentioning in passing that the points were related to the PLA. At a minimum, I should have also mentioned Kripke’s reading of Wittgenstein on rule following as well.

  23. I’ve finished reading Surfing Uncertainty. It’s a dense book and I don’t know enough neuroscience to evaluate it fully. I’ll look at the discussion in BBS that BruceS pointed out and see what criticisms have been raised.

    I’ve been trying to feel my way through keiths’ criticism of original intentionality, in part because he and I have very different pictures of what original intentionality would even look like. His criticism relies on the thought that physics is purely syntactical, and I wonder whether that is even true. I worry that there’s an implicit commitment to extensionalist semantics as having privileged ontological status at work here.

    But even if physics is purely syntactical, it doesn’t follow that biology is purely syntactical. That would follow only if biology were in-principle reducible to physics, and I think we simply don’t know whether it is or isn’t, though the considerations raised against reduction are formidable.

    Keiths has also urged that neuroscience is purely syntactical. I don’t even know what that means, though I do accept that some kinds of neural networks are good models for how brains work. Is a neural network purely syntactical? I don’t know how to think through that question. A formal system might be purely syntactical, though here we run into very hard issues in the philosophy of logic, and some logicians will insist that even formal systems are semantical.

    Nevertheless, the prospects strike me as actually rather good for thinking of semantics as part of animal biology, in that embodied cognitive systems do find their environments structured into regions of significance and insignificance. Semantics could be quite central to a good theory of cognitive ethology!

    In that light, the distinction between syntactical engines and semantic engines seems to turn on whether we are doing neuroscience or cognitive ethology. But then, to say that there is no real intentionality, and all intentionality is mere “as if” intentionality, seems to require an implicit claim that neuroscience cleaves closer to the nature of reality than cognitive ethology does. And when we make that claim explicit, I really don’t see what could justify it.

  24. Kantian Naturalist: His criticism relies on the thought that physics is purely syntactical, and I wonder if that is even true.

    I don’t think it is true.

    From my perspective, syntactic pretty much implies mathematical. So a mathematical platonist should see “syntactic” as implying something platonist, while as a fictionalist, I see it as implying useful fictions.

    But physicists study observable reality rather than the world of platonic forms.

  25. BruceS: I did want to point out that, if you take the neural code as real, then it provides a clear counter-example to anyone who claims real codes cannot be created by purely natural processes.

    Have you figured out yet how little that concerns me? Maybe if more people caught on we could actually discuss the codes that are there rather than argue over whether or not they exist in the first place.

  26. Mung,

    Maybe if more people caught on we could actually discuss the codes that are there rather than argue over whether or not they exist in the first place.

    Why are you more interested in discussing definitions than the actual biochemistry? Unless you’re going to make an argument that depends on equivocation, which you’ve said you are not, whether it’s called a code or not doesn’t really matter. All that’s required is that we agree on the vocabulary necessary to discuss the topic.

    What is it you really want to say about DNA transcription?

    ETA: Agreeing on definitions is important. The specific words used are not.

  27. Patrick: Why are you more interested in discussing definitions than the actual biochemistry?

    I think I have a simple answer to that.

    If I say the genetic code is not the only biological code, and people don’t even think the genetic code is a code, how far am I likely to get?

    And then if I want to demonstrate the existence of other biological codes, how would I do that when people can’t even agree on a definition of what a code is?

    IOW, if we want to resolve the question, are there in fact other biological codes, other than the genetic code, how do we go about addressing that question if no one knows what a code is?

    Does that help? Does it make sense?

  28. Mung: If I say the genetic code is not the only biological code, and people don’t even think the genetic code is a code, how far am i likely to get?

    If someone says that the genetic code is not the only biological code, then I know what they mean even though I don’t actually consider the genetic code to be a code. My fussiness over what I mean by “code” doesn’t prevent me from being able to try to understand what others are saying.

  29. Mung: Have you figured out yet how little that concerns me? Maybe if more people caught on we could actually discuss the codes that are there rather than argue over whether or not they exist in the first place.

    Yes, Mung, I think you’ve made that clear.

    Hence the conditional “if you take” at the start of that sentence. I simply thought it made an interesting point.

    Have you figured out that I am more interested in trying to make reasoned arguments than in simply arguing with you or anyone else just because we differ in religious beliefs?

    (Although perhaps that last sentence strays from my professed goal).

  30. Kantian Naturalist:
    Beisecker has an intriguing article on this, “The Importance of Being Erroneous: Prospects for Animal Intentionality” that I’ll read this weekend.

    I read that but it did not convince me.

    Unless I missed something, his argument against Millikan misses her concept of derived proper function entirely. This is a key part of how Millikan tries to explain the workings underlying, e.g., belief and expectation, which Beisecker thinks needs something new. So he was arguing against a strawman, I think.

    Millikan’s explanation seems better worked out than Beisecker’s approach, which I thought involved some hand-waving, e.g. in claiming it is achievable by evolution.

  31. Kantian Naturalist: But I don’t think that the Cartesian picture of mind could possibly be right,

    I read Grush’s In Defense of Some Cartesian Assumptions Concerning the Brain (pdf) that I linked earlier in the thread.

    I think you’d enjoy it.

    He takes on two claims of the radical enactivists: (1) world/body/brain cannot be split into subsystems for modelling and (2) the brain does not use representations.

    He takes these claims on from the position of control theory, which is itself a type of DST, and which uses representations similar in spirit to those claimed by Clark for PP.

    But after making his counterarguments for subsystems and representations, he notes in the end that he in fact has five key things in common with the radical enactivists. He is definitely not a GOFAI type of guy.

    Section 4 of the paper is somewhat technical but can be skipped if you are comfortable with the efferent copy approach to motor control that Clark describes.

    My understanding is that the control theory model is mathematically equivalent to some formulations of Bayesian models used in PP (e.g., I think the equivalence depends on the choice of prior distributions).

    Grush does call out Clark for some of the things Clark says in a 1989 paper, but I read Clark’s position in Surfing as close to Grush’s in this paper.
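    To make that equivalence concrete, here is a minimal one-dimensional sketch (my own toy construction, not from Grush’s paper or Clark’s book): a single Kalman-filter measurement update and a conjugate-Gaussian Bayesian update produce the same posterior mean and variance.

```python
# A 1-D illustration (a hypothetical toy example, not taken from Grush or
# Clark) of the claim that a Kalman-style update and a Bayesian update with
# Gaussian prior and likelihood coincide.

def kalman_update(mu, var, z, r2):
    """One Kalman measurement update: prior estimate (mu, var), observation z
    with measurement-noise variance r2."""
    K = var / (var + r2)                     # Kalman gain
    return mu + K * (z - mu), (1 - K) * var

def bayes_update(mu, var, z, r2):
    """Gaussian prior N(mu, var) times Gaussian likelihood N(z; x, r2),
    renormalized: combine precisions, precision-weight the means."""
    prec = 1 / var + 1 / r2                  # posterior precision
    return (mu / var + z / r2) / prec, 1 / prec

mu_k, var_k = kalman_update(0.0, 4.0, 1.0, 1.0)
mu_b, var_b = bayes_update(0.0, 4.0, 1.0, 1.0)
assert abs(mu_k - mu_b) < 1e-12 and abs(var_k - var_b) < 1e-12
```

    The two formulas are algebraic rearrangements of one another, which is one concrete sense in which control-theoretic estimators and Bayesian inference can describe the same computation.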

  32. Mung,

    If I say the genetic code is not the only biological code, and people don’t even think the genetic code is a code, how far am I likely to get?

    And then if I want to demonstrate the existence of other biological codes, how would I do that when people can’t even agree on a definition of what a code is?

    IOW, if we want to resolve the question, are there in fact other biological codes, other than the genetic code, how do we go about addressing that question if no one knows what a code is?

    Does that help? Does it make sense?

    Yes and yes.

    If I may make a suggestion it would be for you to write down the definition of “code” that you are using, as clearly and unambiguously as possible, then list the biological artifacts that you think can be ascribed that definition. That would give us actual referents for discussion rather than endless definitional games.

  33. Kantian Naturalist:
    I’ve finished reading Surfing Uncertainty. It’s a dense book and I don’t know enough neuroscience to evaluate it fully. I’ll look at the discussion in BBS that BruceS pointed out and see what criticisms have been raised.

    I still am working my way slowly through sections 8 and 9.

    I find the writing style hard to spend a lot of time with. Too many small sections which try to show how PP could help explain some aspect of human behavior. But, because there are so many, there is not a lot of depth in each.

    Instead of detailed analysis of each topic, Clark does provide many references, which is good, except that he has chosen to spell these out in the text rather than footnote them, which I also find distracting when reading.

    I wonder what Dr Liddle would think of the book, or even the BBS article. She has linked to some work of hers that involved a bit of Bayesian logic at one point, but I don’t think she has commented on the approach as central for modelling the brain at a computational mid-level.

  34. Kantian Naturalist:

    physics is purely syntactical, and I wonder if that is even true.
    […]
    But even if physics is purely syntactical, it doesn’t follow that biology is purely syntactical.

    In Intuition Pumps (IP), I understand Dennett to say that by “syntactical” he means caused by physicochemical forces, which would include biochemistry, I think (p. 178).

    In that light, the distinction between syntactical engines and semantic engines seems to turn on whether we are doing neuroscience or cognitive ethology. But then, to say that there is no real intentionality, and all intentionality is mere “as if” intentionality, seems to require an implicit claim that neuroscience cleaves closer to the nature of reality than cognitive ethology does. And when we make that claim explicit, I really don’t see what could justify it.

    I don’t recall seeing Dennett use the phrase “as if intentionality”, but I think he might otherwise be read as being somewhat in agreement with that paragraph, if we include taking the intentional stance as part of ethology.

    In IP, he makes it clear he is not an eliminativist about meaning. For example, he says: “semantic properties — such as truth and meaning and reference — play an ineliminable role in some causal processes” (p. 179). He also creates an example — two black boxes — to illustrate this point. Of its design, he says (p. 193):

    When two different syntactic systems, A and B, have been designed to mirror the same semantic engine, the only way of accounting for the remarkable regularity they reveal is to ascend to the semantic-engine level…

    And in the same two-black-boxes example, he says that his account of content in Real Patterns is not meant as epiphenomenalism about content, and points to the fact that the micro-causal path which explains the content can be traced out (p. 191).

    From the above, I think you can read Dennett as saying that
    – a semantic engine emerges from the syntactic engine that realizes it;
    – the emergent semantic engine has real causal powers;
    – the reality of those powers is separable from the microcausal powers of the realizers, because the semantic engine is multiply realizable on different syntactic engines.
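    A toy way to see the multiple-realizability point (my own construction, not Dennett’s actual black-boxes example): two mechanically different “syntactic engines” that compute the same function. The regularity that they always agree is explained at the semantic level, not by any shared micro-level mechanism.

```python
# Two different "syntactic engines" realizing the same "semantic engine":
# both compute the parity of an integer, by entirely different steps.
# (A hypothetical illustration of multiple realizability, not Dennett's own example.)

def parity_by_arithmetic(n):
    """Engine A: remainder after division by 2."""
    return "even" if n % 2 == 0 else "odd"

def parity_by_counting(n):
    """Engine B: peel off units one at a time, toggling a flag."""
    even = True
    for _ in range(abs(n)):
        even = not even
    return "even" if even else "odd"

# The regularity that A and B always agree is stated and explained at the
# semantic level ("both compute parity"), even though their inner workings differ.
assert all(parity_by_arithmetic(n) == parity_by_counting(n) for n in range(-10, 11))
```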

  35. Patrick: If I may make a suggestion it would be for you to write down the definition of “code” that you are using, as clearly and unambiguously as possible, then list the biological artifacts that you think can be ascribed that definition.

    I’ve already done this. I don’t for the life of me know why people keep asking me to do it after I’ve already done so. Do you?

  36. What’s the harm in providing that definition again, mung? It’s not that much trouble really, is it? I have been unable to find it, anyhow. I’d think you’d want to put it on every page, so people would stop saying that you refuse to define your terms.

    And while you’re at it, define “denialist.” Thanks.

  37. Mung,

    If I may make a suggestion it would be for you to write down the definition of “code” that you are using, as clearly and unambiguously as possible, then list the biological artifacts that you think can be ascribed that definition.

    I’ve already done this. I don’t for the life of me know why people keep asking me to do it after I’ve already done so. Do you?

    If you’ve done it, just provide a link when people ask for it.

  38. Mung: I’ve already done this. I don’t for the life of me know why people keep asking me to do it after I’ve already done so. Do you?

    Mung: I’m happy with the definition of code you provide at the bottom of your OP on Code denialism Part 3 where you quote a definition of a code as a mathematical mapping between two sets of symbols.

    But I agree with Walt that it would be interesting to understand what you mean by “denialism”.

    I don’t think you can mean that the phrase “genetic code” is meaningless.

    So I think you might mean that denialists deny that the genetic code is metaphysically real, that is, they deny it is “ontologically independent of our conceptual schemes, perceptions, linguistic practices, beliefs, etc”.

    Instead, denialists might believe the code is fictional (in the philosophical sense); that is, although such things as the genetic code ‘appear to be descriptions of the world, [they] should not be construed as such, but should instead be understood as cases of “make believe”, of pretending to treat something as literally true (a “useful fiction”).’

    If that is your understanding of denialism, then, when scientists say “the genetic code is real” are they making a scientific claim or a philosophical claim?

    My position on these issues has always been that denialism is a claim about realism versus fictionalism and hence must be evaluated philosophically. I’ve tried not to take a position on the answer, but it is true that some of my posts could be read as leaning towards fictionalism; however, this is not the position I mean to take.

  39. Mung: I’ve already done this. I don’t for the life of me know why people keep asking me to do it after I’ve already done so. Do you?

    If a code is a mathematical mapping between symbol sets, then I would agree it is not a natural language (a point you have made in at least some of your posts as I recall), since natural languages are about referring to the world, not mapping to a set of abstract symbols.

    It might be a very simple formal language with a model in biochemistry.

  40. BruceS: Mung: I’m happy with the definition of code you provide at the bottom of your OP on Code denialism Part 3 where you quote a definition of a code as a mathematical mapping between two sets of symbols.

    So you’re saying mung endorses that 1963 Abramson definition himself? (I’m not sure how you can tell from that post myself. Doesn’t ever say anything like “That sounds right to me,” does he?) Anyhow, can you confirm this, mung? And do you mean by “denialist” anyone who believes that DNA connection patterns (if that’s the right term) are not codes according to that definition?

    These are two yes or no questions–not a ton of time or trouble required to answer them. And I think it would be helpful. Thanks.

  41. BruceS: If a code is a mathematical mapping between symbol sets, then I would agree it is not a natural language

    It’s not an artificial language either. Is it a language at all?

  42. walto: It’s not an artificial language either. Is it a language at all?

    Perhaps a formal language, as defined this way (dredging up some old memories here, so likely some errors):

    <letter> ::= U | C | A | G
    <codon> ::= <letter><letter><letter>
    <aminoacid> ::= Phenylalanine | … | Glycine
    <verb> ::= maps-to
    <sentence> ::= <codon> <verb> <aminoacid>

    In English, what I think that says is:
    1. A <letter> is one of the listed four (i.e., the one-letter names for the bases).
    2. A <codon> is a triplet of bases.
    3. An <aminoacid> is one of the amino acids listed (I omitted some, indicated by the …).
    4. A <sentence> is “codon maps-to aminoacid”.

    Then an interpretation of the language would be the biochemistry of translation (protein synthesis), with the true sentences corresponding to the codon-to-amino-acid pairings that actually occur biochemically, and the false ones not doing so.
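    The mapping the grammar describes can also be written out directly as a lookup table. A minimal sketch (only a handful of the 64 codons shown; the assignments listed are the standard genetic code):

```python
# A small slice of the standard genetic code, treated as a mapping between
# two symbol sets: RNA codons (triplets over {U, C, A, G}) and amino-acid names.
# Only a few of the 64 codons are shown; UAA is one of the three stop codons.
GENETIC_CODE = {
    "UUU": "Phenylalanine",
    "UUC": "Phenylalanine",
    "AUG": "Methionine",   # also the usual start codon
    "GGU": "Glycine",
    "GGC": "Glycine",
    "UAA": "Stop",         # a stop signal, not an amino acid
}

def translate(rna):
    """Read an RNA string three letters at a time, stopping at a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = GENETIC_CODE[rna[i:i + 3]]
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGUUAA"))  # → ['Methionine', 'Phenylalanine', 'Glycine']
```

    In the terms above, each key-value pair is a true <sentence> of the formal language; a pair like “UUU maps-to Glycine” would be a well-formed but false one.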

  43. BruceS: A sentence is “codon maps-to aminoacid”

    Leaving only the small matter of mapping to phenotype and then to reproductive success.

  44. walto: So you’re saying mung endorses that 1963 Abramson definition himself? (I’m not sure how you can tell from that post myself. Doesn’t ever say anything like “That sounds right to me,” does he?) Anyhow, can you confirm this, mung? And do you mean by “denialist” anyone who believes that DNA connection patterns (if that’s the right term) are not codes according to that definition?

    These are two yes or no questions–not a ton of time or trouble required to answer them. And I think it would be helpful. Thanks.

    Yes to first sentence, since the title of the section in which it appears is “What a Code Is”.

    I also think you have to be careful to distinguish the genetic code from its implementation in biochemistry: “DNA connection patterns” might be ambiguous about that distinction.

    A lot of the posts in the other threads don’t make that distinction although I believe Mung and Frankie have done so.

    Small quibble: please don’t encourage Mung to just give simple yes/no answers. I find many of his posts enigmatic, to say the least, because he settles for a quick sentence or even just a quotation, rather than explaining in more detail what he means.

    Of course, the “driving traffic to the site” theory does explain that tendency of his, since it encourages followup questions from other posters. But I don’t really think that that conspiracy theory is true. Or at least, on balance, I think it is improbable.

  45. petrushka: Leaving only the small matter of mapping to phenotype and then to reproductive success.

    Yes, but that is out of scope for what I was doing as an interpretation of Mung.

    Note: “out-of-scope”, not “unimportant to understanding evolution”.

  46. BruceS: Yes, but that is out of scope for what I was doing as an interpretation of Mung.
    Note: “out-of-scope”, not “unimportant to understanding evolution”.

    Elsewhere Mung implies that the code and translator system must exist together to make evolution possible. For the putative designer (not Mung’s of course) there is a problem of mapping sequences to biological function.

    The code itself is just chemistry. The interesting thing is the sequences that make the assemblage alive. It’s a nice chicken and egg problem. I wonder what discipline is most likely to find a solution. Biochemistry, or scholasticism?

Leave a Reply