Philosophy In An Age of Cognitive Science

Since the publication of The Embodied Mind (1991), the cognitive sciences have been turning away from the mind-as-program analogy that dominated early cognitivism towards a conception of cognitive functioning as embodied in a living organism and embedded in an environment. In the past few years, important contributions to embodied-embedded cognitive science can be found in Noe (Action in Perception), Chemero (Radical Embodied Cognitive Science), Thompson (Mind in Life), Clark (Being There and Surfing Uncertainty), and Wheeler (Reconstructing the Cognitive World).

[A note on terminology: the new cognitive science was initially called “enactivism” because of how the cognitive functions of an organism enact or call forth its world-for-it. This led to the rise of “4E” cognitive science — cognition as extended, embedded, embodied, and enacted. At present the debate hinges on whether embodied-embedded cognitive science should dispense with the concept of representation in explaining cognitive function. Wheeler and Clark drop “enaction” because they retain an explanatory role for representation, even though representations are action-oriented and context-sensitive.]

The deeper philosophical background to “the new cognitive sciences” includes Hubert Dreyfus, Merleau-Ponty, Heidegger, Dewey, Wittgenstein, and J. J. Gibson (who was taught by one of William James’s students). It is a striking fact that embodied-embedded cognitive science promises to put an anti-Cartesian, anti-Kantian critique of intellectualism on a scientific (empirical and naturalistic) basis. Embodied-embedded cognitive science is a fruitful place where contemporary cognitive science meets with the best (in my view) of 19th- and 20th-century Eurocentric philosophy.

That’s important for anyone who thinks, with Peirce, that science occupies a unique epistemic position because scientific practices allow the world to get a vote in what we say about it (Peirce contra Rorty).

The philosophical implications of embodied-embedded cognitive science are quite fascinating and complicated. Here’s one I’ve been thinking about the past few days: embodied-embedded cognitive science can strengthen Kant’s critique of both rationalist metaphysics and empiricist epistemology.

Kant argues that objectively valid judgments (statements that can have a truth-value in some but not all possible worlds) require that concepts (rules of possible judgment) be combined with items in a spatio-temporal framework. But Kant was never able to explain how this “combination” happened; and as a result subsequent philosophers were tempted to either reduce concepts to intuitions (as in Mill’s psychologistic treatment of logic) or reduce intuitions to concepts (as in the absolute idealism of Fichte and Hegel). As C. I. Lewis and Sellars rightly saw, however, neither Mill nor Hegel could be right. Somehow, receptivity and spontaneity are both required and they must somehow be combined (at least to some degree). But how?

Andy Clark’s “predictive processing” model of cognition (in Surfing Uncertainty) offers a promising option. According to Clark, we should not think of the senses as passively transmitting information to the brain; rather, the brain is constantly signaling to the senses what to expect from the play of energies across receptors (including not only exteroceptive but also interoceptive and proprioceptive receptors). The task of the senses is to convey prediction errors — to indicate how far off the predictions were so that the predictions can be updated.

And this bidirectional flow of information takes place across the different levels of neuronal organization — there’s top-down and sideways propagation from the ‘higher’ neuronal levels and also bottom-up propagation from the ‘lower’ neuronal levels (including, most distally, the receptors themselves).
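
Here is a minimal toy version of the scheme (purely illustrative: one estimator tracking one hidden cause, with made-up numbers, not Clark’s actual formalism). The higher level sends down a prediction; the periphery sends back up only the residual error; and the prediction is updated by that error.

    import random

    # Toy prediction-error loop. The "higher" level maintains a hypothesis
    # about a hidden cause; the receptors report only how wrong the
    # top-down prediction was, and the hypothesis is nudged by that error.
    true_cause = 4.2      # hidden state of the world
    estimate = 0.0        # the higher level's current hypothesis
    learning_rate = 0.1

    for _ in range(100):
        sensed = true_cause + random.gauss(0, 0.2)    # noisy receptor activity
        prediction_error = sensed - estimate          # the only bottom-up signal
        estimate += learning_rate * prediction_error  # top-down model update

    print(round(estimate, 1))  # ~4.2: the model has come to predict its input

Stack several such loops, with each level’s estimate serving as the ‘sensed’ input of the level above it, and you have a crude picture of the bidirectional multilevel hierarchy just described.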

Now, here’s the key move: the bidirectional multilevel hierarchy of neuronal assemblies matches (but also replaces) the Kantian distinction between the understanding (concepts) and the sensibility (intuitions). And it explains the one major thing that Kant couldn’t explain: how concepts and intuitions can be combined in judgment. They are combinable in judgment (at the personal level) because they have as their neurocomputational correlates different directions of signal propagation (at the subpersonal level).

But if embodied-embedded cognitive science allows us to see what was right in Kant’s high-altitude sketch of our cognitive capacities, and also allows us to vindicate that sketch in terms of empirical, naturalistic science, it also thereby strengthens both Kant’s critique of empiricism (because top-down signal propagation is necessary for sense receptors to extract any usable information about causal structure from energetic flux) and his critique of rationalism (because the proper functioning of top-down signal propagation is geared towards successful action, and our only source of information about whether or not our predictions are correct is the bottom-up prediction errors).

And because we can understand, now, both spontaneity and receptivity in neurocomputational terms as two directions of information flow across a multilevel hierarchy, we can see that Kant, C. I. Lewis, and Sellars were correct to insist on a distinction between spontaneity and receptivity, but wrong about how to understand that distinction — and we can also see that Hegel and neo-Hegelians like Brandom and McDowell are wrong to deny that distinction.


324 thoughts on “Philosophy In An Age of Cognitive Science”

  1. Kantian Naturalist: I finished Retrieving Realism two weeks ago. I found it deeply disappointing. But I’d be willing to write a (very critical!) review of it for us here.

    Please don’t do it on my account though. I have other things right now taking up my time. 🙂

  2. KN,

    KN:

    I think we have to be a bit careful here about why neuroscientists and philosophers have been tempted by the idea that the brain is merely syntactical. I suspect that it lies in the successes of computational neuroscience, machine learning, evolutionary robotics and so forth.

    keiths:

    Those spectacular successes are just half of it. The other half is the lack of a plausible theory of original intentionality. The laws of physics don’t take semantics into account, as far as we can tell. Unless we’re mistaken about that, any physical system must be fundamentally syntactic.

    I don’t think that makes semantics illusory, but I do think it means that original intentionality doesn’t exist and that actual intentionality is “as if” intentionality, built on top of syntax.

    KN:

    Those successes make it seem as if the relation between neurons is merely syntactical — the firing of one neuron modulates the firing of another, just as the activation of one gate in a microchip affects the activation of another — so if the latter is purely syntactical, then so too is the former.

    keiths:

    Indeed, I think that conclusion is inescapable.

    KN:

    The idea that brains are merely syntactical engines could be undermined if either (a) computers are not syntactical engines either or (b) brains are not computers, despite the success of computational neuroscience in modeling them as such.

    keiths:

    It would take much more than that, because “syntactic” in this context refers not only to computers but to any system whose fundamental operations are insensitive to semantics. That would include things like can openers and rocks.

    To undermine the idea that brains are syntactic engines, you’d need to show that physics is somehow sensitive to semantics, at least in some relevant contexts.

  3. Neil Rickert: Predictive coding and Bayesian learning cannot get started before there is data. So you first need to understand perception well enough to understand how data is acquired.

    I’m afraid that the best I’ve been able to come up with by your use of “data” is that it means this: “something I, Neil, understand about how perception works but no other philosopher or scientist has recognized”.

    I don’t want to ask you about it directly again, because I just have not been able to follow your explanations. But is my characterization fair? If not, is there a scientist who does understand what you mean about data and perception (of course, I am sure no philosopher does)? If so, who are they, what do they understand, and what has everyone else missed?

  4. keiths: The laws of physics don’t take semantics into account, as far as we can tell.

    Mathematically, V=IR (Ohm’s law) and f=ma (Newton’s second law) are identical in form.

    We derive Kirchhoff’s laws from Ohm’s law. We do not derive anything like that from Newton’s laws, though we do derive laws about moments, angular momentum, etc.

    It seems to me that semantics is very much involved in the ways that we use the laws of physics.
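
    To make the structural point explicit (one way to cash out “identical”):

        V = IR,    F = ma,    both of the form y = kx.

    The mathematics is indifferent to whether k names a resistance or a mass; which quantities play the roles of y, k, and x, and what they mean, is something we supply in using the law.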

  5. keiths: despite the success of computational neuroscience in modeling them as such.

    I’m not convinced we have modeled brains, except in a cargo cult way.

    I am not surprised that we have been able to mimic many of the things that brains do. After all, we see the outward behavior, and we see the consequences that reinforce and shape behavior, but I don’t think we are modeling the behavior of brains.

  6. BruceS: I’m afraid that the best I’ve been able to come up with by your use of “data” is that it means this: “something I, Neil, understand about how perception works but no other philosopher or scientist has recognized”.

    Perhaps so.

    I’m not talking about anything deep and sophisticated. It is something that I see as almost trivially obvious. I’m not sure why philosophers find this hard, except that their traditions commit them to a different view.

    For me, it started when I was thinking about the question of how we can learn about the world. And it sure seemed to me that if I look at the firings of a neural sensor, that should look like noise. William James seemed to recognize this when he suggested that it was a “blooming, buzzing confusion.”

    As background, I had read a lot of the history of science (as a teenager). And what came across is that data is not easy. A particularly good example of this was in the study of electricity. The scientists went through gold-leaf electroscopes, Leyden jars, twitching frog legs, and a number of other ideas, before they came up with our ideas about voltage, current and resistance.

    The world is full of signals but, by themselves, they are useless. Finding ways of getting useful data seems to require invention. The useful data is already intentional (meaningful, useful) because of the way that it is acquired. Data is theory laden, because the scientific theory carefully defines the data that it uses. Scientific laws are often definitions of data, rather than inductions from observed data. And if perception can work, then it has to be solving similar kinds of problems.

    Philosophers talk of “the myth of the given.” But they need to look into “the myth of data.” That’s the real myth on which philosophy relies.

  7. keiths:

    The laws of physics don’t take semantics into account, as far as we can tell.

    Neil:

    It seems to me that semantics is very much involved in the ways that we use the laws of physics.

    Sure, but that’s irrelevant. You’re confusing statements of the laws of physics with the laws themselves. In the statement of Ohm’s Law, V, I, and R all have meanings. The statement depends on those semantics. If you take V to represent current and I to represent voltage, the statement is no longer true.

    Physics doesn’t care about our equations, our symbols, or how we assign meaning to them.

  8. keiths: Physics doesn’t care about our equations, our symbols, or how we assign meaning to them.

    Physics doesn’t seem to care that such laws are just useful approximations.

  9. petrushka:

    keiths: despite the success of computational neuroscience in modeling them as such.

    I’m not convinced we have modeled brains, except in a cargo cult way.

    That was KN’s phrase, not mine. However, he’s right that computational neuroscience takes a fundamentally syntactic approach. You can model neural networks without modeling the meaning of the inputs, outputs, or intermediate signals.

  10. Bruce, to Neil:

    I’m afraid that the best I’ve been able to come up with by your use of “data” is that it means this: “something I, Neil, understand about how perception works but no other philosopher or scientist has recognized”.

    Replace ‘understand’ with ‘assert’ and I would agree.

  11. keiths: The other half is the lack of a plausible theory of original intentionality. The laws of physics don’t take semantics into account, as far as we can tell. Unless we’re mistaken about that, any physical system must be fundamentally syntactic.

    I see the point quite clearly, but I do question whether it is correct. We would need to know either that (a) there are no emergent properties or (b) there are emergent properties, but original intentionality can’t be an emergent property. However, if there are emergent properties and if original intentionality is an emergent property, then we can have original intentionality constrained by the laws of physics.

    On my view, original intentionality is a property of living animals; it is biological, not (reductively) physical. But it is a property of whole animals, not of any one of their parts; so it is no objection to my thesis to point out that brains per se are only syntactical.

    I don’t think that makes semantics illusory, but I do think it means that original intentionality doesn’t exist and that actual intentionality is “as if” intentionality, built on top of syntax.

    Only if one does not think that original intentionality can be an emergent property.

  12. At this point in the development of Eurocentric philosophy, I think we have better reasons for accepting the reality of original intentionality in human experience than we have for assuming that it will one day be possible to reduce all of the natural sciences to fundamental physics.

    By “original intentionality” I do not mean anything Cartesian (private, transparent to introspection) but rather the fully public character of mind and culture, including in the latter not only language, art, religion, music, cooking, dance, etc. but also science itself. Seen thus, intentionality is not only public but also historical, contingent, and dynamic.

    Denying the human reality of language and culture in order to salvage the positivist dream of the unity of science (i.e. the reduction of all empirical knowledge to fundamental physics) seems quite foolhardy, to say the least.

    That is not to say that we should not pursue embodied-embedded computational neuroscience as a research program for understanding the subpersonal correlates of intentionality, or that we should not pursue paleoanthropology within an extended evolutionary synthesis as a research program for understanding the natural-historical origins of intentionality. It is only to say that intentionality (mind, language, culture, etc.) should be explained, not ‘explained away’.

  13. keiths:

    The other half is the lack of a plausible theory of original intentionality. The laws of physics don’t take semantics into account, as far as we can tell. Unless we’re mistaken about that, any physical system must be fundamentally syntactic.

    KN:

    I see the point quite clearly, but I do question whether it is correct. We would need to know either that (a) there are no emergent properties or (b) there are emergent properties, but original intentionality can’t be an emergent property.

    No, the only way my claim would be undermined would be if original intentionality existed and if it were strongly emergent, so that meaning could influence the behavior of the physical system via downward causation. There’s no evidence at all for that, and the very notion seems magical.

    However, if there are emergent properties and if original intentionality is an emergent property, then we can have original intentionality constrained by the laws of physics.

    If the behavior of the system is constrained by the (syntactic) laws of physics, then where does the original intentionality come from? Saying it’s emergent is not an explanation unless you can explain how it emerges. “As if” intentionality, on the other hand, can be built on top of syntax, but then the system remains fundamentally syntactic. There is no downward semantic causation.

    On my view, original intentionality is a property of living animals; it is biological, not (reductively) physical. But it is a property of whole animals, not of any one of their parts; so it is no objection to my thesis to point out that brains per se are only syntactical.

    The same syntactic laws of physics govern the behavior of the brain, the rest of the body, and the environment. Where does semantics enter into the causal picture?

  14. KN,

    At this point in the development of Eurocentric philosophy, I think we have better reasons for accepting the reality of original intentionality in human experience than we have for assuming that it will one day be possible to reduce all of the natural sciences to fundamental physics.

    The real choice is between affirming or rejecting original intentionality. Original intentionality is a magical notion, depending as it does on some yet undiscovered principle of physics by which meaning can directly influence matter. “As if” intentionality is built on well-understood, non-magical syntactic principles. What does original intentionality explain that “as if” intentionality does not? Why prefer a magical explanation to a down-to-earth one?

    Denying the human reality of language and culture in order to salvage the positivist dream of the unity of science (i.e. the reduction of all empirical knowledge to fundamental physics) seems quite foolhardy, to say the least.

    Straw man. To say that semantics is reducible to syntax is not to deny the reality of semantics. Chemistry remains real even if it is reducible to physics.

  15. BruceS: I’m afraid that the best I’ve been able to come up with by your use of “data” is that it means this: “something I, Neil, understand about how perception works but no other philosopher or scientist has recognized”.

    Let me try answering this in a different way.

    In my view, what the perceptual system is doing, is something like measurement. When a particular neuron fires, that is similar to the mercury column in a thermometer crossing a particular calibration mark.

    From this viewpoint, Hebbian learning is like calibrating measuring instruments.

    I see measurement as a semantic operation, not as a syntactic operation. That’s why I say that the brain is a semantic engine rather than a syntactic engine.

    It could be that your brain is measuring with metric units, and my brain is measuring with British units. For the measurement that is done internally, there is no need that we use the same units. So let’s say that my perceptual system measures in rickert units, and yours measures in bruces units. So there is no truth requirement for perception, because truth is conformance to a public standard. But here we each have our own private standards. There is, however, an internal consistency requirement. If I can make the same measurement with my left eye or with my right eye, then the two should agree. And Hebbian learning has to recalibrate periodically, so as to maintain that internal consistency.
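
    Here is a toy sketch of the calibration idea in code (my own construction; the error-correcting update below is a simple delta-style stand-in for whatever Hebbian mechanism actually does the work). Two “sensors” measure the same stimulus in different private units, and periodic recalibration nudges their gains toward mutual agreement, without ever consulting a public standard.

        import random

        # Two private "sensors" (say, left eye and right eye) report the
        # same stimulus, each in its own arbitrary internal units (gains).
        left_gain, right_gain = 1.0, 3.7
        eta = 0.01  # recalibration rate

        for _ in range(5000):
            stimulus = random.uniform(0.0, 10.0)                 # one world event, two readings
            left = left_gain * stimulus + random.gauss(0, 0.05)  # noisy measurement
            right = right_gain * stimulus + random.gauss(0, 0.05)

            # Recalibration: nudge each gain toward agreement with the other.
            # The only norm enforced is internal consistency, not public truth.
            error = left - right
            step = eta * error * stimulus / (1.0 + stimulus ** 2)  # damped for stability
            left_gain -= step
            right_gain += step

        # The two private unit systems end up agreeing with each other,
        # though neither matches any external standard.
        print(round(left_gain, 2), round(right_gain, 2))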

    When we use language to express what we have perceived, we, in effect, translate our own private standards into a public standard. That’s where truth shows up. That connects truth with language, rather than with perception.

  16. I find myself in much agreement with Neil in these matters, except for his presuppositions. He seems to think that philosophers would oppose his ideas. In fact he is pretty much in line with the philosophy of language I am familiar with.

    And there are things that would be analysed the other way round. For example,

    Neil Rickert: I see syntax as emerging from semantics. Roughly speaking, an attempt to give a semantic analysis of the mechanism of speech divides the speech into parts (sentences, words, phonemes). Those are really semantic units in the analysis of speech as a mechanism (ignoring the message being conveyed by speech). Our notion of syntax comes from an idealization of this analysis. The idealization doesn’t actually fit language all that well, in my opinion.

    If language is taken to be a formal system, i.e. a tool for something else, then it’s the tool for conveying meaning, i.e. everything in language (phonology, vocabulary, and syntax) serves the purpose of semantics. Syntax doesn’t “emerge” from semantics, but rather can be seen as the condensation or manifestation of semantics, even though this is not strictly true either. The relationship between the elements of syntax and semantics is arbitrary; there’s no necessary connection, but it can be seen as fixed and meaningful (purposeful) as long as it’s systematic.

    It’s like the relationship between utensils and eating. Fork and spoon help to eat, but they don’t emerge from eating. They are not even strictly necessary for eating. You can have a wooden spoon, you can have sticks, or use hands for eating, and similarly you can devise whatever syntax to suit your semantic purposes, but to convey a meaning from yourself to the next person, you have to share the syntax, the code.

  17. Neil Rickert: Let me try answering this in a different way.

    In my view, what the perceptual system is doing, is something like measurement. When a particular neuron fires, that is similar to the mercury column in a thermometer crossing a particular calibration mark.

    Neil: Thanks for taking the time to reply (twice!) to my somewhat snarky post.

    You did not choose to respond to my question about how your view fits with scientists or philosophers involved in current cognitive science. That is your prerogative.

    But I believe that in order to explain one’s ideas to others, it is necessary that one goes through the exercise of engaging with what other domain experts have proposed and then comparing one’s ideas to theirs.

    To my best understanding, your ideas are consistent with what some philosophers and some cognitive scientists claim. But only you are in a position to assess that in detail.

    Of course, whether you choose to make such an effort is a different issue and entirely up to you.

  18. keiths:

    To say that semantics is reducible to syntax is not to deny the reality of semantics. Chemistry remains real even if it is reducible to physics.

    What makes something real in your definition, Keith?

    Based on his Real Patterns, and on applying the Intentional Stance at the agent level, I think Dennett might agree that organisms have “real” intentionality under the “Real Patterns” conception of reality (without using “original” as an adjective for intentionality).

    And he’d agree that agent-level intentionality depends on the culture in which the agent acts.

    (But I’d guess that the details of explanation after that likely differ from what KN might say.)

  19. Kantian Naturalist:
    BruceS,

    The piece by Rockwell I had in mind was “The Hard Problem is Dead; Long Live the Hard Problem“.

    I read this article but I did not see any knockdown argument against Dennett’s thoughts on qualia. Rockwell himself describes how his approach could be countered by Dennett.

    I read Rockwell as claiming that Dennett extends Sellars’ view that “all awareness is linguistic” too far; that in fact, Sellars was not consistent on this, sometimes writing as if there were three things: (1) thinking linguistically, (2) non-linguistic thinking in a sensory mode (eg thinking in sound by musicians), and (3) sensations, eg auditory sensations. Rockwell also claims that Dennett’s linguistic approach would deny awareness to animals (and babies, for that matter).

    In support of (2) and (3) in this reading of Sellars, Rockwell proposes that consciousness is an emergent property of two kinds of awareness: one linguistic (for generating/participating in the space of reasons) and one for discriminative signal processing, which we share with many animals.

    Rockwell further speculates that discrimination is implemented in connectionist networks. It is this which provides the qualitative background, according to Rockwell. Language enables us to have higher-order thoughts. Language enriches and deepens consciousness but does not enable us to have it.

    Under this conception, experience could not be completely rendered into language and hence would defy explanation. That’s the hard problem as Rockwell thinks it should be approached, as I understand him.

    But then he admits that Dennett could reply that the connectionist mechanism exists but is unconscious. I think Dennett would go further and say that Rockwell gets a basic point about Dennett wrong. Dennett does not say the micro-judgings that underlie experience are linguistic; I read him as saying they are neural transformations (which I suspect he would now say can be modelled by PP).
    Language is then a virtual machine that runs on this connectionist architecture (in some non-IT sense) and that has transformed the “user illusions” that the brain creates and which we experience as qualia. This transformation acts on the micro-judgments that become conscious because of something like the Global Workspace Model.

    In addition, Rockwell does not engage with Dennett’s thought experiments which try to show that these micro-judgings are the precursor to phenomenality, not the results of applying some qualitative dimension when discriminating.

    Under Dennett’s model, animals and babies could have phenomenal experience; it simply would be unlike that of people who use language. Dennett says that which animals have such experience is an empirical issue.

  20. Here is a recent Brains blog post providing empirical evidence about color experience and color discrimination and how these two are affected by which language an infant learns. But I’ll need to read it a few more times to understand to what extent it supports Dennett’s views, or even if it does at all.

    Here’s part of the conclusion as a teaser:

    The phenomenal effects of categorical colour properties are more likely a consequence than a cause of knowledge of categorical colour properties, and there are most likely no visual appearances of red or other categorical colour properties.

    This is why we need a different story about how humans first come to know about the categorical colour properties of things. The evidence we’ve been considering suggests that the true story will involve pre-linguistic visual discrimination plus learning colour words through communicative interactions.

  21. Bruce,

    My view on the reality of semantics is similar to my view on the reality of arithmetic. Humans and computers can both add up a column of numbers and get the right answer. To argue, as some (such as vjtorley) have, that computers aren’t really doing arithmetic seems unsupportable to me. What justifiable criterion would distinguish the “real” arithmetic done by a human from the “as if” arithmetic done by a computer?

    I think that both humans and computers do real arithmetic, and that in both cases the operations are carried out by physical systems whose fundamental components operate purely syntactically.

    Substitute semantics for arithmetic and my argument remains basically the same. Arithmetic is reducible to syntax, and so is semantics, but arithmetic and semantics don’t thereby become illusory.
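
    To make that concrete, here is a sketch of the kind of thing I have in mind (illustrative only): addition built from nothing but bit-level gate operations. None of the primitives is sensitive to what the bit strings denote, yet the semantically correct sum falls out.

        # Addition from pure syntax: the only primitives are gate operations
        # (XOR, AND, OR) that shuffle 0s and 1s with no regard for meaning.

        def full_adder(a, b, carry):
            s = a ^ b ^ carry                        # sum bit
            carry_out = (a & b) | (carry & (a ^ b))  # carry bit
            return s, carry_out

        def add(x_bits, y_bits):
            # Ripple-carry addition over little-endian bit lists.
            out, carry = [], 0
            for a, b in zip(x_bits, y_bits):
                s, carry = full_adder(a, b, carry)
                out.append(s)
            out.append(carry)
            return out

        def to_bits(n, width=8):
            return [(n >> i) & 1 for i in range(width)]

        def from_bits(bits):
            return sum(b << i for i, b in enumerate(bits))

        # The gates never "know" that these strings denote 57 and 85,
        # yet the answer really is 142: real arithmetic from blind syntax.
        print(from_bits(add(to_bits(57), to_bits(85))))  # 142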

  22. keiths: Original intentionality is a magical notion, depending as it does on some yet undiscovered principle of physics by which meaning can directly influence matter. “As if” intentionality is built on well-understood, non-magical syntactic principles. What does original intentionality explain that “as if” intentionality does not? Why prefer a magical explanation to a down-to-earth one?

    I’m deeply puzzled by this assumption that original intentionality requires downward causation.

    I reject downward causation, because “downward causation” requires that there are “levels” of causation, and I don’t think there are any “levels” of causation — neither “upward” nor “downward”. The idea of “levels” of causation arises from a hopeful but ultimately misguided attempt to unify the sciences.

    Does that mean that I should reject original intentionality as well? Does original intentionality only make sense (conceptually) if there is downward causation?

    I’d like to see an argument for this claim.

    Keiths suggested that it would be “magical” to say that “meaning can directly affect matter”. I’m not entirely sure if my picture would count as “magical” by his criteria. My picture, in brief, is that the following are all true:

    (1) propositional content is constituted by inferential relations;
    (2) inferential relations are instituted by norm-governed social practices;
    (3) norm-governed social practices are a special kind of embodied coping;
    (4) embodied coping is ‘geared’ into motivationally salient ambient affordances of the physical and social environments;
    (5) embodied coping (including discursive practices) is causally implemented by the balance between center-to-periphery predictions and periphery-to-center prediction errors in the central nervous system of a living animal.

    I don’t know if this counts as “magical” or not, but I do see it as allowing plenty of room for “original intentionality.”

  23. keiths:
    Bruce,

    Substitute semantics for arithmetic and my argument remains basically the same. Arithmetic is reducible to syntax, and so is semantics, but arithmetic and semantics don’t thereby become illusory.

    So you believe intentionality/semantics IS real.

    It’s the label “original” that you don’t like.

    Have I got that right?

  24. Bruce,

    So you believe intentionality/semantics IS real.

    It’s the label “original” that you don’t like.

    Have I got that right?

    Yes. Our thoughts really are about things, and that is essential to their usefulness.

    What I doubt is the idea that thoughts are somehow metaphysically tethered to their referents, which is what I see proponents of original intentionality as claiming.

    For example, it’s obvious that the words “Donald Trump” on paper aren’t intrinsically about the man running for president. They could be about another man named Donald Trump (pity the fellow), or they could mean “man who wets himself” in some undiscovered tribal language of the New Guinea highlands.

    Any link to the man running for president depends on (variable) human convention. It’s an example of “derived intentionality”; the words “Donald Trump” aren’t intrinsically about the man, but they derive their aboutness from the fact that we associate them with him.

    Advocates of original intentionality would say that the buck stops with us. The aboutness of the words “Donald Trump” depends on something external — us — but our own thoughts about Donald Trump don’t derive their intentionality in the same way. They are intrinsically about the man running for president.

    I’m skeptical of this view because I think it suggests a magic, metaphysical connection between our thoughts and their referents.

  25. keiths,

    Would it be fair to say that, on your view, original intentionality means that embodiment, language, society, and culture (all of which are historical and contingent) play no constitutive role in determining semantic content and reference?

    I put the question that way because, on my view, embodiment, language and culture show us what original intentionality really is.

  26. keiths: What I doubt is the idea that thoughts are somehow metaphysically tethered to their referents, which is what I see proponents of original intentionality as claiming.

    I doubt that too. Perhaps some proponents of original intentionality are claiming that. But I don’t see that as a necessary part of original intentionality.

    From my point of view, the ability to refer to things in the world is learned. But it is not learned from the culture. Rather, it is learned individually through experience interacting with the world.

  27. Kantian Naturalist:

    (1) propositional content is constituted by inferential relations;
    (2) inferential relations are instituted by norm-governed social practices;
    (3) norm-governed social practices are a special kind of embodied coping;
    (4) embodied coping is ‘geared’ into motivationally salient ambient affordances of the physical and social environments;
    (5) embodied coping (including discursive practices) is causally implemented by the balance between center-to-periphery predictions and periphery-to-center prediction errors in the central nervous system of a living animal.

    How does the original intentionality of animals work with this explanation?

    Perhaps you mean it to be derived from us ascribing it to them through the intentional stance (which is Brandom’s view according to my very limited understanding). But you called animal intentionality original in another post, so that does not seem to work. Are you still appealing to a different type of intentionality for animals with its own explanation?

    Also, I guess “coping” is a term of art in some philosophical community or other, but I have not come across it except in the Kukla paper you cited at one point.

    Does it mean something like performing successful actions in the service of a successful life, where “successful” means something like in accordance with the organism’s role in its ecological niche?

  28. keiths:

    Yes. Our thoughts really are about things, and that is essential to their usefulness.

    My suspicion is that you and KN mean different things by “original”.

    But I’ve been down the “you guys are just disagreeing about the definition of words” road before (viz, “direct” perception).

    So I will butt out now.

  29. Bruce,

    My suspicion is that you and KN mean different things by “original”.

    Could be. I’m using “original intentionality” in what I think is the standard way: as the complement of “derived intentionality”. Derived intentionality depends on external factors, while original intentionality is intrinsic and self-contained.

    As I put it above:

    Advocates of original intentionality would say that the buck stops with us. The aboutness of the words “Donald Trump” depends on something external — us — but our own thoughts about Donald Trump don’t derive their intentionality in the same way. They are intrinsically about the man running for president.

    When KN invokes “norm-governed social practices” and “embodied coping” in his description, it sounds more like derived intentionality than original intentionality to me.

  30. KN,

    Would it be fair to say that, on your view, original intentionality means that embodiment, language, society, and culture (all of which are historical and contingent) play no constitutive role in determining semantic content and reference?

    I put the question that way because, on my view, embodiment, language and culture show us what original intentionality really is.

    I don’t think original intentionality is possible to begin with, so I haven’t thought much about the necessary preconditions beyond the need for some magical means by which meaning could influence physics.

    I’m deeply puzzled by this assumption that original intentionality requires downward causation.

    I reject downward causation, because “downward causation” requires that there are “levels” of causation, and I don’t think there are any “levels” of causation — neither “upward” nor “downward”.

    I don’t either. I think there are different levels of description, but that the causal processes are the same regardless of the level of description.

    Does that mean that I should reject original intentionality as well? Does original intentionality only make sense (conceptually) if there is downward causation?

    Yes, in my opinion. Original intentionality is intrinsic intentionality, and intrinsic intentionality involves intrinsic meaning. Neurons operate syntactically, so their firings lack intrinsic meaning. Thus the operation of the brain and nervous system can be described without reference to meanings.

    If so, then what causal role does meaning play? Where does semantics enter the causal picture?

  31. keiths:

    When KN invokes “norm-governed social practices” and “embodied coping” in his description, it sounds more like derived intentionality than original intentionality to me.

    I’ll be interested to see how the Bayesian stuff plays out in that. The operation of the neurons that Bayes and PP are modelling is syntactic in your sense, I believe. But there is still the need to add the norms to that causal process somehow.

    I don’t recall KN using Bayes in his book; I understood him as relying on the phenomenological ideas of M-P and others to justify original intentionality for whole organisms. He has mentioned elsewhere he is rethinking ideas in his book.

    The role of norms versus causal processes in Dennett’s thinking versus inferential semantics as in Brandom has come up before at TSZ. I think Dennett’s summary of their differences was linked then, but here it is again (pdf) in case you are interested.

    He does agree with a lot of Brandom, but sees a difference in how they reduce the norms to remove any regress, and in particular how to make sure the analysis goes far enough to ensure naturalism.

    He uses the terms “original intentionality” and “derived intentionality” freely at one point in his explanation, but of course goes on to say that in the full analysis he sees no difference between the two.

  32. BruceS: Does it [coping] mean something like performing successful actions in the service of a successful life, where “successful” means something like in accordance with the organism’s role in its ecological niche?

    Yes. And the stress here is on “good enough” actions that satisfy the organism’s goals.

    BruceS: Are you still appealing to a different type of intentionality for animals with its own explanation?

    Possibly. One of the big issues I don’t have a firm view on yet is whether animals have any intentionality at all — and if so, which ones. What are the material conditions of actualization for intentionality? I’m inclined to think that most (if not all) mammals and birds are capable of intentional action and object-directed thoughts with a sparse conceptual structure. But I simply don’t know what I want to say about lizards or goldfish.

    keiths: When KN invokes “norm-governed social practices” and “embodied coping” in his description, it sounds more like derived intentionality than original intentionality to me.

    I think that would be the case if original intentionality meant something like a Cartesian mind in which intentional thought is ‘at home’, and then practices and coping just borrowed their intentionality from that of the Cartesian mind. On this picture, the relation between embodiment and thought is like that between writing and thought: just as a sentence only has semantic content by virtue of how it is used by a linguistic community, so too embodied purposive actions only convey semantic content by virtue of how the mind directs the body.

    But I don’t think that the Cartesian picture of mind could possibly be right, so the question is whether there is a non-Cartesian account of original intentionality. I think there is: original intentionality is the intentionality of animal coping and of human discourse. It counts as original simply by virtue of not being derived: it doesn’t stand in relation to something else in the same way that a stop-sign stands in relation to social conventions.

    Dennett rejects the original/derived distinction, of course, but I’m suspicious of his motivations here. Dennett is a verificationist, and I think it leads him astray here. He rejects the distinction because he sees no way of verifying it within a third-person, objective perspective on reality.

    I’m all in favor of verification as a criterion of epistemic significance for empirical knowledge, but I think that when it comes to understanding intentionality, we’re not operating at that level. The concept of intentionality is not ‘at home’ in objective, empirical knowledge, but rather in our linguistically-mediated, culturally informed self-understanding. It’s not a scientific concept, but a phenomenological/hermeneutic concept. (Or, if you prefer, it is a “manifest image” concept.)

    I do think the prospects are quite good for “naturalizing intentionality” — that’s my entire research project! — but doing so means (in my view) minimizing the conceptual distance between the manifest image and the scientific image, both of which are real.

    In the long run, “naturalizing intentionality” would mean using the relevant natural sciences (esp. cognitive science, neuroscience, comparative developmental psychology, and paleoanthropology) to understand how culture emerged within a specific trajectory of primate evolution.

    BruceS: I’ll be interested to see how the Bayesian stuff plays out in that. The operation of the neurons that Bayes and PP are modelling is syntactic in your sense, I believe. But there is still the need to add the norms to that causal process somehow.

    Yes, neurocomputation is syntactical in keiths’ sense (and in mine). But we’re not going to find norms in the head.

    I don’t recall KN using Bayes in his book; I understood him as relying on the phenomenological ideas of M-P and others to justify original intentionality for whole organisms. He has mentioned elsewhere he is rethinking ideas in his book.

    There’s no neuroscience in the book, Bayesian or otherwise. The book is an explication of the manifest image concept of intentionality, and I urge there a distinction between discursive and somatic intentionality in order to solve long-standing problems in the neopragmatist tradition of explicating the manifest image. It’s only been in my more recent work (none of which is published, or anywhere near ready for publication) that I’ve been trying to take neuroscience more seriously.

  33. KN:

    One of the big issues I don’t have a firm view on yet is whether animals have any intentionality at all — and if so, which ones.

    Wait — you think it’s possible that chimps don’t think about other chimps, or dolphins about other dolphins??

  34. keiths:

    When KN invokes “norm-governed social practices” and “embodied coping” in his description, it sounds more like derived intentionality than original intentionality to me.

    Bruce:

    I’ll be interested to see how the Bayesian stuff plays out in that. The operation of the neurons that Bayes and PP are modelling is syntactic in your sense, I believe. But there is still the need to add the norms to that causal process somehow.

    I still don’t understand why he invokes social norms in the first place.

    1. Thoughts can be non-linguistic, in which case linguistic norms are irrelevant. If I picture a ’74 aluminum-block Chevy Vega with oil leaking out of the pan — a common occurrence for aluminum-block Vegas — then my thought is clearly about the car. It’s intentional, but it doesn’t depend on social norms.

    2. The social aspect of linguistic norms doesn’t seem to be essential to their capacity to serve as a basis for intentionality. If I invent a personal language (for journalling, say) and become proficient enough with it that I start thinking in it, my thoughts remain intentional despite the fact that my linguistic norms are shared by no one else.

    3. He writes:

    Yes, neurocomputation is syntactical in keiths’ sense (and in mine). But we’re not going to find norms in the head.

    Supposing that’s true, I still don’t see how it gives semantics a causal role. If physics is syntactic, then it doesn’t matter whether norms are inside or outside the head. They’re physically instantiated, so any causal influence they exert is fundamentally syntactic, not semantic.

  35. keiths:

    I still don’t understand why he invokes social norms in the first place.

    1. Thoughts can be non-linguistic, in which case linguistic norms are irrelevant. If I picture a ’74 aluminum-block Chevy Vega with oil leaking out of the pan —

    Let me start with “norms” and ignore the “social”.

    Suppose you see that car and form that mental representation.

    But although the content of the representation is as you say (a 74 Vega), the causal target is actually a 75 Vega. So you have misrepresented.

    The norm is needed to make that distinction between representation and misrepresentation. For without it, one can ask: why isn’t the content of the representation correctly targeting the disjunction 74 OR 75 Vega?

    That’s the philosophers’ disjunction problem. A naturalistic solution must avoid regress to some other norms (eg simply saying one works better won’t do, as “better” involves norms). As I understand him, Dennett thinks the solution is to use the subpersonal to bring the design stance and then fitness into play, relying on the detailed approach of Millikan (eg the consumer/producer/representation separation) to explain the norms via past evolutionary selection.

    Supposing that’s true, I still don’t see how it gives semantics a causal role. If physics is syntactic, then it doesn’t matter whether norms are inside or outside the head. They’re physically instantiated, so any causal influence they exert is fundamentally syntactic, not semantic.

    The proximate causes are in the head but the distal causes are evolutionary and historical. You cannot explain the norm solely by the proximate causes.

    But I agree it is all causes in the end for a naturalistic explanation. You just need to flesh out the different sorts of causes in play and their interaction.

    I think current neuroscience ignores the norms issue and simply assumes the causal model which generates representations, without worrying about how to give a non-circular definition of misrepresentation.

    BTW, as I read the Dennett essay on Brandom, Dennett sees incompleteness in Brandom’s solution to defining norms without regress. (Brandom’s approach involves social feedback to shape behavior via conditioning.)

  36. Kantian Naturalist: Yes. And the stress here is on “good enough” actions that satisfy the organism’s goals.

    I understand this to mean that “good enough” involves norms which would need grounding, perhaps naturalistic, although I understand your interpretation of that term might be different than (say) Dennett’s (!).

    Possibly. One of the big issues I don’t have a firm view on yet is whether animals have any intentionality at all — and if so, which ones. What are the material conditions of actualization for intentionality? I’m inclined to think that most (if not all) mammals and birds are capable of intentional action and object-directed thoughts with a sparse conceptual structure. But I simply don’t know what I want to say about lizards or goldfish.

    I take from this that you currently believe some animals have non-linguistic intentionality.

    But I don’t think that the Cartesian picture of mind could possibly be right, so the question is whether there is a non-Cartesian account of original intentionality. I think there is: original intentionality is the intentionality of animal coping and of human discourse. It counts as original simply by virtue of not being derived: it doesn’t stand in relation to something else in the same way that a stop-sign stands in relation to social conventions.

    We can reject the Cartesian view as the wrong way to start explication but still allow for a scientific approach which starts from the whole brain/body/world system and uses system analysis to try to develop models of subcomponents and their interactions. That is how I understand the PP approach.

    As an aside, while tracing some references, I came across this provocatively titled paper by Grush, who helped develop the control theory version of neural modelling, which I believe is formally equivalent to some Bayesian models. The paper: In Defense of Some Cartesian Assumptions Concerning the Brain and its Operations (pdf). I have not looked at it yet.

    I’m all in favor of verification as a criterion of epistemic significance for empirical knowledge, but I think that when it comes to understanding intentionality, we’re not operating at that level. The concept of intentionality is not ‘at home’ in objective, empirical knowledge, but rather in our linguistically-mediated, culturally informed self-understanding. It’s not a scientific concept, but a phenomenological/hermeneutic concept. (Or, if you prefer, it is a “manifest image” concept.)

    I do think the prospects are quite good for “naturalizing intentionality” — that’s my entire research project! — but doing so means (in my view) minimizing the conceptual distance between the manifest image and the scientific image, both of which are real.

    As I’ve mentioned before, I see your challenge here as dealing with the pervasiveness of representation in cognitive science. If you are going to involve the PP approach as per your above list describing your order of explanation, then it seems you are stuck with representation, and hence intentionality, as part of the scientific image.

    Clark cites Sprevak (2013): Fictionalism about Neural Representations, which you can find here. It provides a clear summary of the issues of dealing with representation and its usage in the cognitive sciences. The paper examines whether a fictionalist approach might be applied to the representations in cognitive science. He describes issues with that approach, but at one point says that perhaps a fictionalist approach at the subpersonal level, combined with an approach on which intentionality is real at the personal level, might overcome his concerns. This sounds like your project, roughly speaking, so perhaps you might be interested in some papers it cites on the issue (page 15), although I don’t think they take a phenomenological approach.

    In the long run, “naturalizing intentionality” would mean using the relevant natural sciences (esp. cognitive science, neuroscience, comparative developmental psychology, and paleoanthropology) to understand how culture emerged within a specific trajectory of primate evolution.

    That might work for intentionality for us linguistic, social primates, but what about other animals?

  37. Bruce,

    Let me start with “norms” and ignore the “social”.

    Suppose you see that car and form that mental representation.

    But although the content of the representation is as you say (a 74 Vega), the causal target is actually a 75 Vega. So you have misrepresented.

    I’m picturing the car, not seeing it, and my thought is a mental image of the car, not an identification of the car as a ’74 Vega. Besides, I can picture a car that has never been built, but that doesn’t rob my thought of intentionality. Thoughts can be about nonexistent objects such as unicorns or titanium-block Vegas.

    The norm is needed to make that distinction between representation and misrepresentation.

    That question doesn’t arise until after we have ascribed intentionality to the thought. KN’s task is to show that norms are essential to the ascription of intentionality, and I don’t see why they should be.

    keiths:

    Supposing that’s true, I still don’t see how it gives semantics a causal role. If physics is syntactic, then it doesn’t matter whether norms are inside or outside the head. They’re physically instantiated, so any causal influence they exert is fundamentally syntactic, not semantic.

    Bruce:

    The proximate causes are in the head but the distal causes are evolutionary and historical. You cannot explain the norm solely by the proximate causes.

    That’s irrelevant, because the causes are physical, whether proximal or distal, and physics is syntactic, not semantic.

  38. Keith: KN will speak for himself; I was just responding to the part of your previous post directed at me asking about social norms. I admit I only responded to the “norms” part of the phrase.

    I believe that philosophers think that any ascription of intentionality must include its semantic nature; in particular, the ability to be about something which is not the target of the representation. So it seems to me that merely ascribing intentionality requires norms of some sort.

    The other point I was trying to describe was that a syntactic, causal explanation could not be limited to causes in the brain.

    I have no dog in the race on the question of which of the following two is the better description: as-if semantic engine or real and emergent semantic engine.

    I agree, by the way, that the contents of a mental representation can be empty, ie that thoughts can represent non-existent entities. Another puzzle of intentionality. I guess you could combine them, as in the thought “Pegasus is a winged zebra”.

  39. I am neither a philosopher, nor a layman qualified to talk about these big ideas, but of course I have opinions.

    I do not think of brains as syntactic. Brains do stuff, but there is nothing in a brain that is about stuff. There is nothing in a brain that could be successfully downloaded or uploaded except by the usual means of speaking, hearing, seeing, etc.

    There is thinking and remembering, but no thoughts or memories. The configuration of neurons is not entirely unlike the configuration of genomes. There is no way to map the configuration to behavior, except at the coarsest and grainiest level.

  40. petrushka,

    I do not think of brains as syntactic. Brains do stuff, but there is nothing in a brain that is about stuff.

    That sounds like an argument against the brain as a semantic engine, not a syntactic one.

    There is nothing in a brain that could be successfully downloaded or uploaded except by the usual means of speaking, hearing, seeing, etc.

    Not even in principle? Why?

    There is thinking and remembering, but no thoughts or memories.

    ‘Thought’ is just a label for a bout of thinking. Likewise for memory.

    The question is whether there is anything fundamentally semantic about thinking and remembering. I say no, because physics itself is insensitive to semantics, as far as we can tell.

    The configuration of neurons is not entirely unlike the configuration of genomes. There is no way to map the configuration to behavior, except at the coarsest and grainiest level.

    That’s not an in-principle limitation.

  41. Bruce:

    KN will speak for himself;

    Of course.

    I was just responding to the part of your previous post directed at me asking about social norms. I admit I only responded to the “norms” part of the phrase.

    Which makes sense, since the “social” part isn’t necessary for intentionality. For example, one of my pet names for my cat is “Scroopy”. It has no inherent meaning, and no one but me knew until now that it refers to my cat. It’s a purely private norm, yet “Scroopy” still refers to my cat. “Scroopy” has intentionality.

    I believe that philosophers think that any ascription of intentionality must include its semantic nature; in particular, the ability to be about something which is not the target of the representation. So it seems to me that merely ascribing intentionality requires norms of some sort.

    If I envision a boulder tumbling down a hillside, my thought has intentionality, but what norm(s) does it depend on?

  42. petrushka:

    Brains do stuff, but there is nothing in a brain that is about stuff.

    Two words that might make you reconsider: neural CODE.

    I am neither a philosopher, nor a layman qualified to talk about these big ideas, but of course I have opinions.

    Perhaps the models cognitive scientists use should influence an informed opinion? Models using representations to explain brain processes are common among cognitive scientists. Does that make the representations in the brain real? Are quarks real because successful physics needs them? How about species in biology? Or supply and demand in economics?

    Or are such abstractions convenient fictions used to make models explainable and concise? That question was the theme of the paper I linked earlier regarding fictionalism and mental representations. You can have an informed opinion either way.

    There is no way to map the configuration to behavior, except at the coarsest and grainiest level.

    I agree, since behavior emerges from the interaction of brains, bodies, and world, and could not in general be predicted by isolating just one. And there is also the matter of the past influence of these factors on the brain and body.

    But if one could control for these (e.g. simple, short-term predictions under given body and world conditions, with the prediction model first trained on each person), then perhaps we could predict behavior solely from brain patterns. In fact, you can argue we already have (ignore the arguments there related to free will; I am just focusing on the nature of the reported experiments).

    Sure, such predictions are coarse and grainy, and also only probabilistic, but that just shows we cannot do it well yet. The fact that we can do it at all means we could do it better in the future. (A toy sketch of what such a per-person decoder might look like is below.)
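    To make that concrete, here is a minimal sketch of the kind of per-person decoder I have in mind. Everything in it is a hypothetical placeholder: the “brain features” are synthetic numbers standing in for real recordings, and the point is only the shape of the method (train on one person’s trials, then predict their held-out choices, only probabilistically).

```python
# A minimal sketch, not a real experiment: train a per-subject decoder that
# predicts a binary choice from brain-activity features recorded on each
# trial. All data here are synthetic placeholders for real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset for ONE subject: 200 trials x 50 features
# (e.g., voxel activations or electrode band powers).
X = rng.normal(size=(200, 50))
hidden_weights = rng.normal(size=50)

# Behavior depends only probabilistically on the measured brain state,
# so the decoder can only ever be coarse and probabilistic.
p_choice = 1.0 / (1.0 + np.exp(-0.5 * X @ hidden_weights))
y = rng.random(200) < p_choice  # observed left/right choices

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out prediction accuracy: {decoder.score(X_test, y_test):.2f}")
```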

  43. keiths:

    Which makes sense, since the “social” part isn’t necessary for intentionality. For example, one of my pet names for my cat is “Scroopy”. It has no inherent meaning, and no one but me knew until now that it refers to my cat. It’s a purely private norm, yet “Scroopy” still refers to my cat. “Scroopy” has intentionality.

    But could you create your private names without
    (1) being a member of a species which has evolved with skills for social living which include communicating with others according to agreed norms, and
    (2) having learned a language by being a member of a linguistic community which trains its members in the norms of that language?

    Of course, that’s related to Wittgenstein’s private language argument (the PLA). I think philosophers are divided on these issues, but the majority would answer no, you could not invent private words without this background. IEP introduces some of the pros and cons. FWIW, based on his agreement with Brandom in the paper I linked, my guess is that Dennett would agree with the majority of philosophers.

    If I envision a boulder tumbling down a hillside, my thought has intentionality, but what norm(s) does it depend on?

    As I understand it, according to philosophers, the very concept of intentionality as it specifically applies to mental representation involves norms. So just by that definition, your thought does depend on norms. Now, your example would seem to also involve a fictional boulder, as opposed to seeing a real one, so that combines the two puzzles of non-existent contents and the ability to misrepresent.

    I think there is an analogy to rule obeying versus rule following.

    Planets obey the rules of physics when moving in their orbits. But people follow rules in using physics to calculate the orbits. Part of the difference between obeying and following a rule is that planets cannot make mistakes but people can.

    So if you want to have a causal (syntactic) explanation for rule following, you have to do more work than you have to do for providing a causal explanation for rule obeying. Namely, you have to explain how mistakes happen. Mistakes involve norms for correctness. (A toy contrast in code at the end of this comment makes the difference concrete.)

    Philosophers see intentionality the same way, as I understand it. For more details, see SEP and IEP
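    Since code is itself a rule-governed medium, here is a toy contrast of my own (nothing in it comes from the SEP or IEP entries): the simulated planet just does whatever the dynamics dictate, so nothing it does counts as a mistake, while the calculator function applies a rule that can be misapplied, and the norm (the correct formula) is what lets us call the misapplication an error.

```python
# A toy contrast, purely illustrative: rule OBEYING vs rule FOLLOWING.
import math

def planet_step(x, v, dt=0.001):
    # Rule-obeying: the state just evolves; whatever this dynamics
    # produces IS what the planet does. No outcome counts as a mistake.
    a = -x  # toy linear restoring force standing in for gravity
    return x + v * dt, v + a * dt

def compute_period(radius, mu=1.0, sloppy=False):
    # Rule-following: applying Kepler's third law, T = 2*pi*sqrt(r^3/mu).
    # A rule-follower can misapply the rule...
    exponent = 2 if sloppy else 3  # ...e.g., by botching the exponent.
    return 2 * math.pi * math.sqrt(radius ** exponent / mu)

# The planet just moves:
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = planet_step(x, v)
print("planet state after 1000 steps:", round(x, 4), round(v, 4))

# The norm is what makes the second answer a MISTAKE rather than
# merely a different event:
print("correct period:", round(compute_period(4.0), 2))
print("mistaken period:", round(compute_period(4.0, sloppy=True), 2))
```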

  44. Bunch of thoughtful posts here recently, Bruce. I think you’re ready for your oral defense!

  45. walto:
    Bunch of thoughtful posts here recently, Bruce. I think you’re ready for your oral defense!

    A friend of mine who is a CIO decided he wanted a PhD because it would help him get jobs on boards of directors once he retired. He is an excellent manager and people person, but I never would have classed him as an academic. He has a BA only, I believe.

    But, being such a good manager, he had made a lot of friends among his employees; two were now professors at a local university (not U of T though) and were willing to sponsor him in a PhD in a combined business/IT stream. They managed to convince the powers-that-be to let him in in that new stream.

    But what really impresses me was not his ability to get into the PhD program.

    It is his willingness to commit to working according to the schedule and standards imposed by others. Being able to avoid that is what I love about retirement. So no PhD programs for me. Though I would love to sit in on some classes, that is not an available option, AFAIK, and I can see why universities don’t want freeloaders in their paid-tuition streams.

  46. BruceS: A friend of mine who is a CIO decided he wanted a PhD because it would help him get jobs on boards of directors once he retired. He is an excellent manager and people person, but I never would have classed him as an academic. He has a BA only, I believe.

    But, being such a good manager, he had made a lot of friends among his employees; two were now professors at a local university (not U of T though) and were willing to sponsor him in a PhD in a combined business/IT stream. They managed to convince the powers-that-be to let him in in that new stream.

    But what really impresses me was not his ability to get into the PhD program.

    It is his willingness to commit to working according to the schedule and standards imposed by others. Being able to avoid that is what I love about retirement. So no PhD programs for me. Though I would love to sit in on some classes, that is not an available option, AFAIK, and I can see why universities don’t want freeloaders in their paid-tuition streams.

    Cool.

    I’ll tell you something else you could consider doing. For my birthday, earlier this week, in celebration/recognition of a “late-life crisis,” I got a tattoo(!) The artist told me I was the third college professor he’d inked in a week.

  47. I’ve been absent from this conversation as the week’s teaching and writing wore away at my time, but I see that BruceS has already said most of the things I would have said, anyway.

    BruceS, what was that paper on mental representations and fictionalism? I missed that. Sounds really interesting.

    One question I’m still troubled by is whether computational neuroscience rests on a misguided metaphor. Churchland, Clark, and a few others seem happy to say that what brains do is compute. But one might worry that brains aren’t computers any more than tornadoes are. Just because we can build a simulation of both doesn’t mean that either is a computer. (A toy illustration follows.)
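    To put the worry concretely (a toy example of my own; the Lorenz system is a standard reduced model of atmospheric convection, standing in here for the tornado): the program below simulates a chaotic flow, and the computer running it is certainly computing. The open question is why that should make the atmosphere itself a computer.

```python
# A toy illustration: simulating a tornado-like chaotic flow.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

state = np.array([1.0, 1.0, 1.0])
for _ in range(5000):
    state = lorenz_step(state)

# The simulation computes; it does not follow that the weather does.
print("simulated flow state:", np.round(state, 3))
```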

    I agree with BruceS that there’s a deep link between intentionality, normativity, and being mistaken. In the case of discursive intentionality, it’s more or less clear how the normative pragmatics cash out: other people keep track of our commitments and entitlements (Brandom calls this “deontic scorekeeping”). On this account, you can’t have thoughts that you can keep to yourself unless you also have the capacity to engage in intentional discourse with others. Getting into the space of reasons in the first place requires socialization.

    The harder question is whether there’s something analogous in animal intentionality. Beisecker has an intriguing article on this, “The Importance of Being Erroneous: Prospects for Animal Intentionality” that I’ll read this weekend.

    I want to pursue the idea that discursive intentionality arises phylogenetically from how the animal intentionality of intelligent hominoids is transformed by the imperative to cooperate. We know that chimps can think and infer, and we also know that chimps can make inferences about the inferences of other chimps. But they do so in order to compete more effectively against each other.

    We do that, too, in games of strategy or in politics. But we also infer together in order to infer better. We bootstrap our inferential abilities by pooling our cognitive resources to solve common problems, whether coordinating a hunt, designing and running an experiment, proving a theorem, or figuring out what to cook for dinner.

    There’s been some pretty good research indicating that the cognitive psychological basis of cooperation is imitation. Even very young human children are much better at imitation than chimp infants at the same age, when the chimp infants have similar or better cognitive abilities in other domains (e.g. spatial awareness).

    I would surmise that there’s some neurophysiological difference between human brains and chimp brains that correlates with our ability to imitate and cooperate. But I have no idea how to go about testing for it. No doubt there’s already a large body of research on this question.

  48. petrushka:
    There is no way to map the configuration to behavior, except at the coarsest and grainiest level.

    Here is a rant on overhyped neuroscience by a neuroscientist that I suspect you will enjoy. It even mentions “cargo cult science”; where have I read that phrase recently?

    Interesting comments too.

  49. BruceS: Two words that might make you reconsider: neural CODE.

    Not even close to making me reconsider.

    We have no fucking clue how to simulate a brain.

    We can’t simulate the brain of C. elegans, a very well studied roundworm (the first animal to have its genome sequenced) in which every animal has exactly the same 302-neuron brain (out of 959 total cells). We know the wiring diagram, and we have tons of data on how the animal behaves, including how it behaves if you kill this neuron or that neuron. Pretty much whatever data you want, we can generate it. And yet we don’t know how this brain works. Simply put, data does not equal understanding. You might see a talk in which someone argues for some theory of a subnetwork of 6 or 8 neurons in this animal. Our state of understanding is that bad.

    Guest post: Dirty Rant About The Human Brain Project
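    To make the “data does not equal understanding” point concrete: even with the complete wiring diagram in hand, a simulation still needs quantities the connectome does not give you. In the sketch below (mine, not the rant’s), the connectivity matrix, weights, and time constants are all invented placeholders, which is exactly the problem.

```python
# A toy stand-in for the problem: a 302-unit leaky-integrator network.
# The connectome tells you WHO connects to whom; it does not tell you the
# synaptic weights, signs, or time constants used below, so everything
# here is an invented placeholder rather than the worm.
import numpy as np

rng = np.random.default_rng(42)
N = 302                                  # C. elegans has 302 neurons
W = rng.normal(scale=0.1, size=(N, N))   # unknown in reality: weights/signs
tau, dt = 10.0, 1.0                      # unknown in reality: dynamics

v = rng.normal(scale=0.01, size=N)       # arbitrary initial activity
for _ in range(1000):
    v += (dt / tau) * (-v + np.tanh(W @ v))

# The code runs regardless; nothing guarantees its behavior resembles
# the animal's. That gap is the difference between data and understanding.
print("final activity of first 5 units:", np.round(v[:5], 3))
```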
