The Semantic Apocalypse

The other night I came across a fascinating set of lectures about “the semantic apocalypse” — the thought being that the more we come to know about how the brain really works, the more it will seem as though meaning and intent are a sort of illusion — something that the brain generates in order to organize information, and in no way corresponding to what’s really going on. Since the brain is adapted to modeling what is going on in the external environment (including the social environment), it doesn’t need to be good at modeling itself. So the categories we use to describe “mental phenomena”, such as “intentionality”, are just cognitive shortcuts we rely on to compensate for the brain’s lack of transparency to itself.

I found all three lectures quite fascinating. I should warn you that the second lecture leans heavily on the work of Ray Brassier and Quentin Meillassoux, so it may seem somewhat off-putting at first. I’ve only discovered their work recently myself, but I shall endeavor to respond as best I can to any questions that arise.

38 thoughts on “The Semantic Apocalypse”

  1. How do we distinguish how the brain really works, without meaning and intent? And if meaning and intent are merely illusions, what can we really know about how the brain really works? And how would we discover it?

    This is why science needs philosophy. Philosophy can tell us that we’re sawing off the limb of rationality that our supposedly scientific conclusions are based upon and reveal the absurdity of using logic and reasoning, and meaning and intent, to deny logic and rationality, or meaning and intent.

    Or to claim they are merely an illusion.

    All of science is an illusion. Science proves it!

  2. Ali McMillan:

    While I agree with Nick that the scenario as framed in the paper is rather depressing, I simply don’t believe the conclusions we derive from neuroscience must be of this type.

    That seems about right to me.

    While there’s a lot of good stuff coming out of neuroscience, I think we should be skeptical of the interpretive conclusions such as those being discussed.

    The neuroscientists are attempting to reverse-engineer the brain. I’m inclined to think that cannot be done.

    Suppose some Martian scientists (we’ll pretend that they exist) came across a computer left on Mars by one of the NASA probes. So they tried dissecting it and analyzing it, much the way that the neuroscientists work with the brain. Could they work out what the computer is doing? I don’t think they could.

    If these Martian scientists happened to also have one of our computer science textbooks, describing the logical structure of a computer, then reverse-engineering might be possible. But, without that input, they would not know where to start.

    Neuroscience is informed by philosophy of mind, particularly by materialist versions of philosophy of mind. That they are finding evidence to support materialist conclusions might merely be a result of their using materialist ideas in their reverse engineering.

  3. “Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?”
    Dumbledore

  4. Neil Rickert: Neuroscience is informed by philosophy of mind, particularly by materialist versions of philosophy of mind. That they are finding evidence to support materialist conclusions might merely be a result of their using materialist ideas in their reverse engineering.

    That’s a really interesting idea, Neil! Would you mind fleshing out a bit further what you have in mind by that?

  5. Kantian Naturalist,

    Let me start by expanding on my comparison with a computer.

If you purchased a PC around 1985 (when they were simpler than now), it would have come with 640K of memory. That memory would have consisted of 27 chips: 18 with a capacity of 256K bits and 9 with a capacity of 128K bits. A particular byte of memory would have one bit in each of 9 separate chips (including the parity bit, which was common in those days). So if the Martian scientist were exploring memory, he would likely conclude that each byte was distributed over 9 different chips. That assumes that he was looking for bits at all (which is unlikely).
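
    To make the bit-slicing concrete, here is a minimal sketch of my own (not Neil’s; even parity is chosen for simplicity, and the chip details are idealized) of how one byte ends up spread across nine one-bit-wide chips:

```python
# Toy model: nine memory "chips", each one bit wide. Byte bit i lives on
# chip i; chip 8 holds an even-parity bit over the eight data bits.
def store_byte(byte, chips):
    """Write one byte across nine chips, appending at the next address."""
    bits = [(byte >> i) & 1 for i in range(8)]
    parity = sum(bits) % 2              # even parity over the 8 data bits
    for i, bit in enumerate(bits + [parity]):
        chips[i].append(bit)

def read_byte(chips, addr):
    """Reassemble the byte at addr, checking parity on the way."""
    bits = [chips[i][addr] for i in range(8)]
    assert sum(bits) % 2 == chips[8][addr], "parity error"
    return sum(bit << i for i, bit in enumerate(bits))

chips = [[] for _ in range(9)]          # nine one-bit-wide chips
store_byte(0xA5, chips)
assert read_byte(chips, 0) == 0xA5
# No single chip contains 0xA5 -- each holds only one bit of it.
```

    A dissecting Martian would find no single component that “contains” the byte; the byte exists only in the wiring convention that reads one bit from each chip.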

    If the Martian scientist was looking for correlations between internal signals and the incoming data, he would probably find the correlations in the signal bus, but not in the memory chips. The stored signal level in memory chips is very small, so hard to detect.

    That’s why I think neuroscience needs a lot of external guidance, before it can hope to uncover anything.

At present it gets that external guidance from epistemology and philosophy of mind. I am deeply skeptical of both. Epistemology and PofM had their origins at a time when dualistic thinking was common. The explanations provided by E and PofM are implicitly tied to dualism. If you then go materialist, and toss away the dualistic assumptions, is it any wonder that you finish up with something that seems nihilistic?

Around 20 years ago I made a fresh start. So I have my own view of knowledge and cognition, which seems very compatible with the evidence but does not have the same bleak consequences. Unfortunately, it is unpublishable and almost impossible to explain, because the commitment to traditional views of E and PofM runs very deep in society (including in academia).

  6. I thought the second reply from Ali McMillan did a good job of responding to the concerns and I’d be interested in what you thought of it. In particular, when you say:

    the thought being that, as we come to know more about how the brain really works, the more it will seem as though meaning and intent are a sort of illusion — something that the brain generates in order to organize information — and in no way corresponding to what’s really going on.

I am not clear why you infer that meaning and intent are an “illusion” just because we understand how brain processes produce them. Further, how could they not correspond to what is going on? I take the original author’s point that what we experience is a model of the real world, but it seems it would have to be generally reliable or we would be extinct. Of course, I also recognize that the model is not completely reliable and that scientific processes are needed to improve its reliability.

    The author makes a similar point to yours, I believe, when he says:

    even though our brain generates behavioural outputs bottom up, we perform actions for this or that–under the guise of bottomlessness

as part of the argument saying neuroscience discounts intentionality (in the sense of aboutness). But the statement is not an accurate description of neuroscience. Many of the brain’s processes combine top-down with bottom-up processing. For example, visual image processing simultaneously involves analysing the pixels from our retinas with input from higher-order mental representations: Is this a face? Is it someone I know? Do I have an emotional attachment to them?

To me, this top-down processing is one aspect of intentionality. I don’t see why it would have to be conscious processing to be considered so.
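
    One way to picture the interplay (a toy sketch of my own, not anything from the thread) is Bayesian: the same bottom-up pixel evidence yields very different percepts depending on the top-down expectation in play.

```python
# Toy top-down/bottom-up combination via Bayes' rule: the prior is the
# top-down expectation ("am I likely to see someone I know here?"), the
# likelihoods are the bottom-up fit of the pixel evidence to each case.
def posterior(prior_known, likelihood_if_known, likelihood_if_unknown):
    """P(known face | evidence) for a two-hypothesis Bayes update."""
    p_evidence = (prior_known * likelihood_if_known
                  + (1 - prior_known) * likelihood_if_unknown)
    return prior_known * likelihood_if_known / p_evidence

# Identical, ambiguous evidence (0.6 vs 0.4), different expectations:
at_home = posterior(0.9, 0.6, 0.4)       # expecting family
in_a_crowd = posterior(0.01, 0.6, 0.4)   # expecting strangers
assert at_home > 0.9 and in_a_crowd < 0.02
```

    The numbers are arbitrary; the point is only that the top-down prior, not just the retinal input, fixes what we end up “seeing”.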

    So, like the second responder, I don’t accept the pessimistic conclusions from the “Blind Brain Hypothesis”.

    I’ll leave the “Bottleneck” Hypothesis for a possible followup.

(By the way, it seems to me that the author is using the two meanings of “intentionality” haphazardly, e.g. in this quote:

    Historically, science tends to replace intentional explanations of natural phenomena with functional explanations.

Is this about purpose or aboutness? And is it true in either case? Science has banished teleological explanations for inanimate matter, but purpose seems to remain part of behavioral explanations of living things. As for aboutness, cognitive science and computer science continue to study how concepts are formed from real-world information.)

Neil Rickert: At present it gets that external guidance from epistemology and philosophy of mind. I am deeply skeptical of both. Epistemology and PofM had their origins at a time when dualistic thinking was common. The explanations provided by E and PofM are implicitly tied to dualism. If you then go materialist, and toss away the dualistic assumptions, is it any wonder that you finish up with something that seems nihilistic?

I agree with the main thought here — “If you then go materialist, and toss away the dualistic assumptions, is it any wonder that you finish up with something that seems nihilistic?” — it coheres perfectly with a point I tried making at Uncommon Descent many times, which is that materialism entails nihilism only if one begins with implicitly dualistic (and/or theistic) assumptions.

    I’m less enthusiastic about the idea that epistemology and philosophy of mind are themselves still fundamentally shaped by dualism, but maybe you’re right. It certainly is true that epistemology and philosophy of mind are often framed as trying to get away from, or get around, or respond to, Cartesianism. (This is because Kant’s model of the mind is often tarred with the idealist metaphysics he ties it to, even though they are eminently separable, and because the pragmatist conception of mindedness, meaning, and knowledge is still not taken as seriously as it deserves to be.)

    A less bleak interpretation of neuroscience might be that we just don’t know what intentionality really is, so we don’t know how far we’ll have to go in revising our pre-theoretic (and implicitly dualistic) intuitions about those concepts.

  8. You say the following about the first reply

    I should warn you that the second lecture leans heavily on the work of Ray Brassier and Quentin Meillassoux, so it may seem somewhat off-putting at first. I’ve only discovered their work recently myself, but I shall endeavor to respond to the best of my abilities to any questions that arise.

I understand how he formulated the issue from a Kantian perspective, but his attempt to resolve the problem using these two philosophers was completely opaque to me. So my first question: is there any way to explain it to someone who has essentially no knowledge of that school of philosophy? In particular, it seems to involve a unique terminology and set of concepts.

    I also found the last section jarring in the discussion of neuroscience and philosophy of mind:

    And in our present world, this cultural sphere is dominated by the logic of capital. In order to resist this logic, what Malabou says we must distinguish between is flexibility and plasticity.

    The segue to a criticism of capitalism was what surprised me. Is it common in this type of philosophy? If so, why?

BruceS: I am not clear why you infer that meaning and intent are an “illusion” just because we understand how brain processes produce them. Further, how could they not correspond to what is going on? I take the original author’s point that what we experience is a model of the real world, but it seems it would have to be generally reliable or we would be extinct. Of course, I also recognize that the model is not completely reliable and that scientific processes are needed to improve its reliability.

    Right, but Bakker’s speculation (I would not even call it a hypothesis) is that, while the brain is generally reliable at mapping the features of its environment, it is not good at mapping the features of itself, in part because there’s no adaptive pressure for the latter. So we think of ourselves as intentional agents, not because we really are, but because we receive so little data about our inner states from which to extrapolate.

    BruceS: Is this about purpose or aboutness? And is it true in either case? Science has banished teleological explanations for inanimate matter, but purpose seems to remain part of behavioral explanations of living things. As for aboutness, cognitive science and computer science continue to study how concepts are formed from real world information.

    Bakker might be conflating teleology and intentionality; I don’t know.

To repeat some criticisms my friends made on Facebook about Bakker’s speculation: we would have to have the metaphysics of causation figured out in order to be able to rule out “emergence”. For it could be that purposiveness is an emergent phenomenon distinctive of living things, and that intentionality is an emergent phenomenon distinctive of proto-sapient and sapient creatures.

(As I’ve indicated several times elsewhere, that is indeed my view. I’m not endorsing the semantic apocalypse — I think it is deeply wrong — but I find the idea fascinating and worth thinking about. Much as with Plantinga’s EAAN, I enjoy taking the time to think seriously about positions I find completely muddle-headed.)

  10. Neil Rickert:
    That’s why I think neuroscience needs a lot of external guidance, before it can hope to uncover anything.

    At present it gets that external guidance from epistemology and philosophy of mind

I understand that functionalism and the computational theory of mind are two core paradigms of neuroscience. I would say that the original source of these is Turing’s work, not philosophy (although philosophers have since built on much of his thinking).

BruceS: I understand how he formulated the issue from a Kantian perspective, but his attempt to resolve the problem using these two philosophers was completely opaque to me. So my first question: is there any way to explain it to someone who has essentially no knowledge of that school of philosophy? In particular, it seems to involve a unique terminology and set of concepts.

    Yes, Srnicek is one of these so-called “speculative realists” — a relatively new school of philosophy. There are some on-line resources that introduce the basic ideas and key figures.

    Roughly speaking, the idea is to ground metaphysical realism through a criticism of Kantian assumptions. It gets really tricky because the speculative realists are coming out of so-called “Continental philosophy” (the German and French stuff), which is typically hostile to realism and especially to naturalism. So the speculative realists are responding to anti-realism coming out of Husserl, Heidegger, Derrida, and so on. It’s not a terrain especially welcoming to people who aren’t formally trained in it, I’m afraid.

    BruceS: The segue to a criticism of capitalism was what surprised me. Is it common in this type of philosophy? If so, why?

    In so-called “Continental philosophy” — as distinct from the dominant approach in professional philosophy in the English-speaking world, the so-called “analytic philosophy” — the distinctions between philosophy and politics aren’t as strict, and Marx is taken far more seriously than in the U.S. The line of thought here is that epistemology and metaphysics are themselves cultural practices, and they don’t take place in a vacuum — they are embedded in the societies in which they take place, and so political economy (and its critique, Marxist or non-Marxist) must be taken just as seriously as science and art.

  12. Kantian Naturalist: A less bleak interpretation of neuroscience might be that we just don’t know what intentionality really is, so we don’t know how far we’ll have to go in revising our pre-theoretic (and implicitly dualistic) intuitions about those concepts.

    As best I can tell, John Searle is expecting science to eventually give an account of intentionality. As best I can tell, you expect neuroscience to give an account of intentionality.

I almost never hear physicists saying that intentionality is a problem. I do sometimes hear biologists saying that, but the problem they see is that some of the literature is expressed with too much use of intentional language.

    I don’t see that intentionality is a scientist’s problem. I doubt that neuroscience will ever find intentionality modules in the brain. I see intentionality as a philosopher’s problem, to be solved within philosophy. A fully scientific account of knowledge and of cognition won’t depend on intentionality, though it might be possible to see where the idea of intentionality fits into traditional philosophical accounts.

BruceS: I understand that functionalism and the computational theory of mind are two core paradigms of neuroscience. I would say that the original source of these is Turing’s work, not philosophy (although philosophers have since built on much of his thinking).

    I’d push that back to Hume, though perhaps it should go all the way back to Plato. Some of the terminology might come from Turing and probably other mathematical logicians.

When I read Locke’s empiricism, that looks about right to me, though there’s an awful lot of detail that would need to be filled in. At least, as I read him, Locke emphasized acquiring concepts. But when we get to Hume, it is all about acquiring beliefs, presumably with a fixed set of concepts. Perhaps Berkeley had already reduced it to a matter of acquiring beliefs.

    Science cannot be understood as acquiring beliefs. Science is much more about conceptual change. The role of beliefs is ancillary.

  14. It would be profoundly interesting if one could modify one’s brain functioning, temporarily, such that the “information horizon” to which Bakker refers is pushed back a significant distance and more of the causal precursors that underlie ordinary waking consciousness became available to conscious experience.

    It would be surprising were such an experience to have no bearing upon the questions Bakker raises, or upon the challenges raised by current neuroscience.

    (And, of course, one can have that experience, courtesy of the late Dr. Hofmann.)

  15. Neil Rickert,

    Yes, Searle expects that neuroscience will eventually tell us what intentionality is. I do not share that view. I do regard intentionality as a philosophical problem, with a philosophical solution. (That’s what my book is all about!)

There’s a very particular tightrope I want to walk across here, two extremes to be avoided. The first extreme is telling the scientists what to look for when they do science. The second extreme is not caring about what scientists are finding out. My goal is to clarify the nature of intentionality under the constraint that intentionality is the sort of thing that could be realized in a causal structure, but without telling the cognitive neuroscientists what causal structures they should be looking for. In short, the model of intentionality should be causally realizable, but I will leave it to the cognitive neuroscientists to tell me how exactly it is causally realized.

    Neil Rickert: When I read Locke’s empiricism, that looks about right to me, though there’s an awful lot of detail that would need to be filled in. At least, as I read him, Locke emphasized acquiring concepts. But when we get to Hume, it is all about acquiring beliefs, presumably with a fixed set of concepts. Perhaps Berkeley has already reduced it to a matter of acquiring beliefs.

    No, Hume has Locke’s basic view about how concepts are acquired — what Hume calls the “copying” of “ideas” from “impressions”.

Hume does worry more than Locke does about how exactly we choose beliefs, because as Hume sees it, the mind quasi-mechanically combines the ideas in all sorts of different ways, some of which are settled as “beliefs” and many of which are not. So the difference between the idea-combinations that get certified as beliefs/assertions/judgments (take your pick) and the ones that don’t get certified has to be explained somehow, and that’s a different process than concept-acquisition.

  16. Reciprocating Bill 2: (And, of course, one can have that experience, courtesy of the late Dr. Hofmann.)

    Right, but Bakker is raising the speculation of genetically engineered brains that have a lower ‘information horizon’ — the trip becomes the new normal for brains like that. (But wasn’t it Leary who said, “the secret isn’t getting high, it’s staying high”?)

  17. Discuss,

    Intentionality is an illusion. Intention is better categorized as opportunity. Organisms tumble into niches they can live in.

  18. Kantian Naturalist: Right, but Bakker is raising the speculation of genetically engineered brains that have a lower ‘information horizon’ — the trip becomes the new normal for brains like that. (But wasn’t it Leary who said, “the secret isn’t getting high, it’s staying high”?)

    That would be profoundly maladaptive in many ordinary circumstances. IMHO the information horizon does not simply reflect the inability of brain activity to represent its own causal precursors, but is, at least in part, an instance of active filtering, such that what is presented to consciousness are the relationships relevant to survival (rather than knowledge).

    For example, imagine having to execute a lengthy leap onto a small platform, such that missing the leap means serious injury or death. A huge amount of non-conscious brain activity underlies the ineffable estimations of distance and motor planning that precede such a leap. Hofmann’s discovery inarguably (well, to anyone who has had that experience) makes elements of that underlying activity more available to awareness. But pushing back the information horizon in that way is likely to be dangerously distracting. We are adapted to make such leaps without conscious awareness of the computations that lie behind that planning. Awareness of the ordinarily non-conscious neural activity that enables a leap is unlikely to be of any help, and indeed would be a distraction that increases the likelihood of a fatal fall.

    I would argue that the upshot of this is that the location of the horizon doesn’t simply reflect an inability of the human brain to perceive its own activity, but rather set-points that have been crafted by evolution such that only information required to complete the leap is presented to consciousness for executive attention.

  19. Reciprocating Bill 2: I would argue that the upshot of this is that the location of the horizon doesn’t simply reflect an inability of the human brain to perceive its own activity, but rather set-points that have been crafted by evolution such that only information required to complete the leap is presented to consciousness for executive attention.

    That’s a decent argument for why “the blind brain” (as Bakker calls it) was adaptive, but it doesn’t touch on the question of whether genetic, technological, or pharmacological manipulation that made our brains less “blind” to their own workings would be maladaptive — after all, it all depends on how much we also use technology to alter the environment. What would have been maladaptive under one set of constraints needn’t be maladaptive once the constraints are altered.

  20. William J. Murray: Postmodernism = The Semantic Apocalypse

It’s easy enough to see why one might think so, but in fact the difference is quite interesting. For suppose we take the most extreme versions of “postmodernism” — say, the least cautious and least qualified statements by Nietzsche, Derrida, Foucault or Lyotard — put all those statements in a blender, and we get a postmodern smoothie that says something like, “there is no truth, just interpretation” or “it’s all just a play of signs that don’t represent anything outside of themselves”, etc. And the arguments for these conclusions, when there are arguments, are basically linguistic — they turn on the impossibility of a term ever really satisfying the condition of referring to or representing anything. So there’s a semantic nihilism, but it’s based on a combination of semantic, logical, and phenomenological considerations — in other words, it’s an a priori argument.

    The semantic apocalypse is different, because it’s grounded in a posteriori, or empirical considerations. It turns on the idea that, whatever we think ‘meaning’ and ‘intention’ are, they don’t exist in rerum natura, in the order of things, because that’s just not how brains, qua natural objects, function.

    (Of course, the postmodernist would reject the a priori/a posteriori distinction — that’s part and parcel of her semantic nihilism — so in saying the postmodernist argument is itself an a priori argument I am being disingenuous.)

  21. Kantian Naturalist: after all, it all depends on how much we also use technology to alter the environment.

    I agree – which is why I included “under ordinary circumstances.”

  22. Bakker:

    Ostensibly, the narrative of Neuropath is structured around something called ‘The Argument,’ which is simply that humans are fundamentally biomechanical, such that intentionality can only be explained away.

    At this stage, I wonder what “biomechanical” is supposed to mean. It has always seemed to me that bio is not mechanical.

    The first is a straightforward pessimistic induction. Historically, science tends to replace intentional explanations of natural phenomena with functional explanations. Since humans are a natural phenomena we can presume, all things being equal, that science will continue in the same vein, that intentional phenomena are simply the last of the ancient delusions soon to be debunked.

I see this as a misunderstanding of science. Bakker seems to see science as being about explanation. But I see science as being about control and prediction. Science prefers mechanistic accounts, because they are what allow control and prediction. If a scientist cannot find a mechanism in what she is studying, then she is likely to move on to something where a mechanism can be found.

    The final secondary argument offered in the novel is based on something called the ‘Blind Brain Hypothesis.’

    Bakker’s view here seems to be that the brain is doing some really complex stuff, and it might be deceiving us.

    I take the opposite view. What the brain is doing is mostly very simple, and we don’t know all of the details because there isn’t much to know.

    I turn on a light switch, and the light goes on. I suppose we could put all kinds of monitoring equipment on the wire to measure the progress of the electric flow from the switch to the light. But most of us see what the switch and wire do as simple enough that we don’t need that monitoring.

    I’m not suggesting that the brain is just switching. But I do suggest that what it is doing is simple in principle, though complex in detail (because there are so many neurons doing it). In particular, I see the brain as low-tech, not as high-tech.

  23. Neil Rickert: At this stage, I wonder what “biomechanical” is supposed to mean. It has always seemed to me that bio is not mechanical.

    I would surmise that by “biomechanical,” he means the efficient causation obtaining between the molecules comprising living things. It is true that Bakker is implicitly assuming something that is prima facie plausible but by no means beyond questioning: that teleology (purposiveness, “final causes”) cannot be itself an emergent phenomenon from certain kinds of ‘mechanical’ (‘efficient causation’) relations.

    I see this as a misunderstanding of science. Bakker seems to see science as being about explanation. But I see science as being about control and prediction. Science prefers mechanistic accounts, because they are what allow control and prediction. If a scientist cannot find a mechanism in what she is studying, then she is likely to move onto to something where a mechanism can be found.

    Here I would disagree slightly — I don’t think that the debate between scientific realism and instrumentalism simply depends on understanding science correctly. But yes, Bakker’s argument assumes scientific realism (as does Paul Churchland’s argument, on which Bakker is drawing for his speculation).

    I’m not suggesting that the brain is just switching. But I do suggest that what it is doing is simple in principle, though complex in detail (because there are so many neurons doing it). In particular, I see the brain as low-tech, not as high-tech.

    Yes, that’s a very nice way of putting it: what each individual neuron is doing could be (comparatively) low-tech, but the sheer number of neurons, of different shapes and sizes, and the number of synapses, etc. — plus how various neurotransmitters modulate the signals — all of that makes the brain extremely difficult to model as a whole. But that still leaves open the possibility that “meaning” and “intention” and all the rest of our “mental vocabulary” are nothing more than a very poor model of the brain.

  24. Replacing the word “illusion” with “sensation” renders moot much of the philosophical conundrum. Humans have a sensation of stubbing their toe, a sensation of loss after a death of a loved one, a sensation of self, a sensation that their actions or relationships are meaningful. And people can share common experiences through language.

    Attributed to Lincoln, concerning the loss of an election to the opposing political party, “Somewhat like that boy in Kentucky, who stubbed his toe while running to see his sweetheart. The boy said he was too big to cry, and far too badly hurt to laugh.”

  25. Zachriel: Replacing the word “illusion” with “sensation” renders moot much of the philosophical conundrum. Humans have a sensation of stubbing their toe, a sensation of loss after a death of a loved one, a sensation of self, a sensation that their actions or relationships are meaningful. And people can share common experiences through language.

    Sure, we experience ourselves (and each other) in those terms. But, two problems:

    (1) from the fact that we experience ourselves in those terms, it doesn’t follow that those experiences reliably indicate what is really going on — certainly those feelings are no reliable guide to the causal mechanisms which generate those experiences in us;

    (2) from the fact that we experience ourselves in those terms, it doesn’t follow that we must experience ourselves in those terms — we experience ourselves in those terms because of the mostly-acquired, partly-innate conceptual framework that we use, and that conceptual framework can be modified. Even the innate parts of it could be modified through genetic engineering of a sufficiently advanced degree.

  26. Kantian Naturalist: It is true that Bakker is implicitly assuming something that is prima facie plausible but by no means beyond questioning: that teleology (purposiveness, “final causes”) cannot be itself an emergent phenomenon from certain kinds of ‘mechanical’ (‘efficient causation’) relations.

    Scientists and engineers build heat-seeking missiles. That seems pretty teleological to me. As Petrushka keeps reminding us, teleology has to do with the use of feedback.

    A great tennis player doesn’t carry out a sequence of pre-planned mechanical actions to serve that ace. Rather, as he makes his stroke, his proprioceptive system is continually monitoring his action, and he is adjusting it to keep his aim on target.

If I am typing a password into the computer, what I am typing does not show on the screen. Yet I know when I have hit a wrong key, and am usually able to correct it. This self-monitoring and continuous correction (i.e. feedback) is what constitutes teleology.
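
    The feedback idea can be sketched in a few lines (a toy proportional controller of my own devising, not anything from the thread): the system repeatedly measures its error and corrects toward the target, rather than replaying a pre-planned sequence of actions.

```python
# Toy feedback loop: like the tennis player's proprioceptive adjustment,
# each step measures the current error and corrects toward the target.
def track(target, position=0.0, gain=0.5, steps=20):
    """Proportional control: repeatedly close a fraction of the error."""
    history = []
    for _ in range(steps):
        error = target - position    # monitor: how far off are we?
        position += gain * error    # correct: adjust toward the target
        history.append(position)
    return history

trace = track(10.0)
assert abs(trace[-1] - 10.0) < 0.01  # the loop homes in on the target
```

    Nothing in the loop “pre-plans” the trajectory; aim emerges from the monitor-and-correct cycle, which is the sense in which feedback does the teleological work.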

    Here I would disagree slightly — I don’t think that the debate between scientific realism and instrumentalism simply depends on understanding science correctly.

    That’s a different issue. I probably wasn’t clear enough.

    My point was that science is not comprehensive. It picks and chooses what it studies, and it chooses that for which there can be useful mechanistic accounts. I’m disagreeing with Bakker’s use of “a straightforward pessimistic induction.”

Kantian Naturalist: Right, but Bakker’s speculation (I would not even call it a hypothesis) is that, while the brain is generally reliable at mapping the features of its environment, it is not good at mapping the features of itself, in part because there’s no adaptive pressure for the latter. So we think of ourselves as intentional agents, not because we really are, but because we receive so little data about our inner states from which to extrapolate.

    That argument sounds close to McGinn’s Mysterianism.

    Just because the brain did not evolve to do something specific, or just because we don’t have introspective access to something, does not mean we cannot understand it.

    Obvious examples would be QM and cosmology or indeed almost all of modern science.

    The only way Bakker’s argument makes sense to me is to take “understanding” of the mind/brain to require introspection, but then the argument is just a tautology.

    Philosophers once thought that the mind was entirely conscious, and that it was possible to understand it solely by the use of introspection. I can understand how such philosophers would find neuroscience and cognitive science threatening. But are there any of these philosophers left?

    In one of her books (I think Touching a Nerve, but I don’t have it here to check), Pat Churchland tells of a philosopher who, at a conference she was attending, said something to the effect that (my paraphrase): “The brain, the brain, I’m tired of hearing about the brain. I want to talk about the mind.” So I guess the answer to my question is yes.

  28. Kantian Naturalist: It is true that Bakker is implicitly assuming something that is prima facie plausible but by no means beyond questioning: that teleology (purposiveness, “final causes”) cannot be itself an emergent phenomenon from certain kinds of ‘mechanical’ (‘efficient causation’) relations.

    Bakker’s concern that science eliminates the concepts of function and purpose may apply to physics but not to biology. Function is scientifically acceptable once we understand how it arises from natural selection. Purpose remains a part of behavioral explanation once we understand how mechanisms to produce it can likewise arise from natural selection, and such use of purpose would include the intentionality implied in forming mental goals (admittedly the required evolutionary explanation is not complete yet — maybe your book will help spur it along!).

    Just because we might have a theoretical reductive explanation does not mean higher level concepts become “illusions”.

    Pat Churchland believes that too, despite the caricature of eliminative materialism that Bakker seems to accept. (For example, listen to the last few minutes of the MP3 here:
    Churchland on Eliminative Materialism)

  29. Kantian Naturalist:
    goal is to clarify the nature of intentionality under the constraint that intentionality is the sort of thing that could be realized in a causal structure, but without telling the cognitive neuroscientists what causal structures they should be looking for. In short, the model of intentionality should be causally realizable, but I will leave it to cognitive neuroscientists to tell me how exactly it is causally realized.

    Who would be tasked with determining whether, and if so how, the proposed causal mechanisms could be realized in the physical brain as neuroscience shows it to be, and with then devising the experiments to test that proposed explanation? From your quote, it seems there would be a gap between the philosophical and the scientific. IMHO, philosophy of science should include bridging that gap (although I am not sure whether you would class your book as philosophy of science).

    Yes, that’s a very nice way of putting it: what each individual neuron is doing could be (comparatively) low-tech, but the sheer number of neurons, of different shapes and sizes, and the number of synapses, etc. — plus how various neurotransmitters modulate the signals — all of that makes the brain extremely difficult to model as a whole. But that still leaves open the possibility that “meaning” and “intention” and all the rest of our “mental vocabulary” are nothing more than a very poor model of the brain.

    That seems to imply that a successful model of how the brain implements the mind would require a neuron-level explanation. I don’t think that is how science works.

    As an analogy, we don’t model weather by modelling every individual molecule in the atmosphere. Similarly, intentionality could be explained by a higher-level system explanation of information processes being executed by neurons, and that model could be complex and high-tech. Tononi’s Integrated Information Theory of Consciousness would be an example of what I have in mind, although it is not specifically about intentionality. My point is that it is “high tech”, e.g. mathematically complex and not expressed in terms of individual neurons.

  30. Neil Rickert: I’d push that back to Hume, though perhaps it should go all the way back to Plato. Some of the terminology might come from Turing and probably other mathematical logicians.

    I’d reverse the emphasis to say that, once Turing created the concepts of Turing machines and the Turing test, which in turn led to CTM and functionalism, we can see that Hume and Plato can be interpreted as providing some vague philosophical antecedents. But it took the clarity and specificity of Turing’s approach to make it possible for these ideas to become scientific paradigms.

  31. Kantian Naturalist: Sure, we experience ourselves (and each other) in those terms. But, two problems:

    (1) from the fact that we experience ourselves in those terms, it doesn’t follow that those experiences reliably indicate what is really going on — certainly those feelings are no reliable guide to the causal mechanisms which generate those experiences in us;

    No sensation is equivalent to “what is really going on”. At best, experience in the human mind is a phantom of the actual event. Much of human experience is actually detached in many ways from “what is really going on”. Imagination, daydreams, longing, all of these are sensations, but not illusions as the word is usually construed. The word “illusion” is usually reserved for those rather rare sensations which someone confuses with “what is really going on”.

    Kantian Naturalist: (2) from the fact that we experience ourselves in those terms, it doesn’t follow that we must experience ourselves in those terms — we experience ourselves in those terms because of the mostly-acquired, partly-innate conceptual framework that we use, and that conceptual framework can be modified. Even the innate parts of it could be modified through genetic engineering of a sufficiently advanced degree.

    Sure. For instance, the sense of self has cultural components. That doesn’t make it an illusion. It’s still a sensation, albeit modified by experience. An experienced hunter senses the wilderness in a different fashion than a novice. Should that cause philosophers philosophical angst?

  32. I would have thought a good way of attacking the problem of consciousness is via observation, investigation and experiment. Modelling should be (I suspect already is) a powerful tool.

    Also we have the opportunity to look at simpler systems, and we have a whole range of organisms with a range of cognitive ability. We don’t have to, nor should we expect to, crack the nut of human cognition in one fell swoop. For one thing, individual humans may lack the cognitive ability to understand the workings of the human brain. I see no reason to assume that the human brain, and how it gives rise to the consciousness we observe, can be understood in every detail by those same human brains.

    Take a very simple example of a sensory/response system: the “run and tumble” strategy of E. coli bacteria. I don’t think there could be a simpler chemotaxis system, unless someone else can visualise one, but even this simple system is not yet fully understood.

    If we are unable to understand how this simple network of proteins functions as an integrated system, then what hope have we of understanding the complex pathways in eukaryotic cells?

    From here
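    The run-and-tumble strategy itself is easy to caricature in code. Below is a deliberately toy one-dimensional sketch (the function names, step size, and tumble rule are my own simplifications, not the actual protein network): the simulated cell keeps no map of the gradient at all; it only compares the current attractant reading with the previous one, running while things improve and tumbling to a random heading when they don't.

```python
import random

def run_and_tumble(concentration, start=0.0, steps=200, seed=1):
    """Toy 1-D chemotaxis: run while things improve, tumble when they don't.

    The 'cell' senses only two numbers: the attractant level now
    and the attractant level one step ago.
    """
    rng = random.Random(seed)
    pos = start
    direction = rng.choice([-1.0, 1.0])
    last = concentration(pos)
    for _ in range(steps):
        pos += direction * 0.1             # run one step in the current heading
        now = concentration(pos)
        if now < last:                     # things got worse: tumble to a random heading
            direction = rng.choice([-1.0, 1.0])
        last = now
    return pos
```

    Fed a concentration peak such as lambda x: -abs(x - 5.0), the walker drifts toward the peak and then jitters around it, purely from local comparisons; the point of the analogy is that even this crude rule looks goal-directed from the outside.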

  33. If we are unable to understand how this simple network of proteins functions as an integrated system, then what hope have we of understanding the complex pathways in eukaryotic cells?

    I’d like to see Design advocates demonstrate that biological design is even possible.

    I’m pretty much convinced that even if we produce strong artificial intelligence (consciousness) we will not understand it.

  34. If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight design modifications, ID theory would absolutely break down.

  35. Mung:
    If it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight design modifications, ID theory would absolutely break down.

    And the LORD God formed man of the dust of the ground, and breathed into his nostrils the breath of life; and man became a living soul.

  36. Mung: If it could be demonstrated

    What I like about that is the different levels of “demonstrated” that underlie all these discussions. You know what I mean Mung….

Leave a Reply