Obscurantism

The subject of obscure writing came up on another thread, and with Steven Pinker’s new book on writing coming out next week, now is a good time for a thread on the topic.

Obscure writing has its place (Finnegans Wake and The Sound and the Fury, for example), but it is usually annoying and often completely unnecessary. Here’s a funny clip in which John Searle laments the prevalence of obscurantism among continental philosophers:

John Searle – Foucault and Bourdieu on continental obscurantism

When is obscure prose appropriate or useful? When is it annoying or harmful? Who are the worst offenders? Feel free to share examples of annoyingly obscure prose.

408 thoughts on “Obscurantism”

  1. KN,

    Dennett is strongly sympathetic to functionalism; as he sees it, the brain is basically a computer. And if one endorses that view, then one has to face the syntax-to-semantics problem (“content”) and the question of where awareness comes from (“consciousness”). And that’s just what Dennett has addressed in Content and Consciousness, The Intentional Stance, Consciousness Explained, and since.

    But, enactively construed, the brain is not a computer; it is a biological organ, and in fact it is a constituent of a second-order autopoietic system (the organism) that is composed of first-order autopoietic systems (cells). Certain aspects of its functioning can be modeled by certain kinds of computers, but that doesn’t mean that neuron firing rates are just syntactical, any more than it means that convection cells are just syntactical because we can build a computer model of a storm.

    I don’t think that Dennett argues that these things are syntactical because they can be modeled by computers. I think he would say that they are syntactical for the same reason I do: because they operate without regard to meaning.

    A neuron doesn’t care what its inputs and outputs mean. It just operates, and its operation is determined by the laws of physics. Meaning doesn’t enter into it.

    A neuron may be embedded within a network in which its inputs and outputs have meaning, but the neuron doesn’t care about that. It just keeps operating.
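
    To make that concrete, here is a minimal sketch of “it just operates”: a textbook leaky integrate-and-fire neuron with made-up parameters (a toy illustration, not anyone’s detailed theory). The update rule is pure arithmetic on input currents; nothing in it refers to what the inputs mean.

        import numpy as np

        # Leaky integrate-and-fire neuron (illustrative parameters only):
        # the membrane voltage decays toward rest and integrates input
        # current; crossing threshold emits a spike and resets.
        def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                         v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
            v = v_rest
            spike_times = []
            for step, i_in in enumerate(input_current):
                v += ((v_rest - v) + r_m * i_in) / tau * dt
                if v >= v_thresh:              # fire and reset
                    spike_times.append(step * dt)
                    v = v_reset
            return spike_times

        # The same arithmetic runs whether the input encodes a face,
        # a word, or pure noise.
        rng = np.random.default_rng(0)
        current = rng.uniform(0, 4e-9, size=1000)  # 1 s of noisy input
        print(simulate_lif(current))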

  2. Meaning would emerge from the interactions within the system. The laws of physics and chemistry do not know anything about meaning, but systems capable of general learning do.

  3. keiths:
    walto,

    It seems to me that meaning is an evolutionary prerequisite for thinking, rather than vice-versa. Meaningless thoughts confer no selective advantage.

    I think I’d say something like “Neither can happen without the other”–at least if we eliminate the kind of “meaning” tree rings might be said to have in the absence of any thinking things around to “take” that “meaning.”

    I don’t understand either meaningless thoughts or stuff said to mean something in spite of meaning it to no one.

  4. Neil Rickert: But machine learning doesn’t work very well.

    Tell that to Google and to all those Jeopardy champs that lost to Watson.

    “Very well” needs to be defined.

  5. keiths:
    KN,

    I don’t think that Dennett argues that these things are syntactical because they can be modeled by computers. I think he would say that they are syntactical for the same reason I do: because they operate without regard to meaning.

    A neuron doesn’t care what its inputs and outputs mean. It just operates, and its operation is determined by the laws of physics. Meaning doesn’t enter into it.

    A neuron may be embedded within a network in which its inputs and outputs have meaning, but the neuron doesn’t care about that. It just keeps operating.

    I think Dennett now says that one should take the intentional stance towards neurons, because neurons want to join coalitions with other neurons to “push” their point of view (ie their source of activation) towards “brain fame” = consciousness.

    So neurons have a limited form of intentionality, at least in the sense of his intentional stance. It is not until the lower levels of biochemical interaction that meaning, even in this limited sense, stops being helpful for explanation.

    Neurons evolved from single cells that had developed the ability to survive in the world, which I think is needed for the kinds of intentionality that are necessary (but not sufficient) for consciousness.
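
    For what it’s worth, here is a toy caricature of that “coalitions competing for brain fame” picture (entirely made-up dynamics, not Dennett’s or any neuroscientist’s actual model): a few activation patterns mutually inhibit one another until one wins and is “globally broadcast”.

        import numpy as np

        # Toy winner-take-all competition among "coalitions" of activity:
        # each coalition grows with its own activation and is suppressed
        # in proportion to the total activity of its rivals.
        def compete(activations, growth=0.1, inhibition=0.03, steps=60):
            a = np.asarray(activations, dtype=float)
            for _ in range(steps):
                rivals = a.sum() - a
                a = np.maximum(0.0, a + growth * a - inhibition * rivals)
            return a

        start = [0.9, 1.0, 0.8]  # three coalitions, nearly tied
        final = compete(start)
        print(final, "-> coalition", int(final.argmax()), "wins 'brain fame'")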

  6. Kantian Naturalist: So treating minds as computers made talk of “inner representations” scientifically respectable.

    I’ll be interested to see how far you take the enactivist alternative.

    I think the most successful scientific models of the brain will treat it as embodied in an agent that must survive in the world and is in constant contact with the world, especially during development (I’m hedging to allow for people who get “locked-in” syndrome as adults).

    But does that mean perception and mental events are in all of brain, body, and world? That seems to be the view of some enactivists.

    ETA: I think the best way to determine the answer is by seeing how successful science can be in developing models of an embodied brain which are only “loosely” causally coupled to the environment. That is, how successful will scientific models that draw the line around the brain/body be?

  7. BruceS: Tell that to Google and to all those Jeopardy champs that lost to Watson.

    As far as I know, there are humans involved in both cases.

    I seem to recall an interview with Watson programmers, where they were quite open about the behind-the-scenes human learning that fed into program changes.

    It is my understanding that Google makes use of human key clicks in its algorithm. Also, I understand that there are humans working behind the scenes doing some tweaking.

    “Very well” needs to be defined.

    Comparable to learning by a human child.

  8. walto:

    I don’t understand either meaningless thoughts or stuff said to mean something in spite of meaning it to no one.

    I’m assuming you both agree that there can be both conscious and unconscious thoughts or at least conscious and unconscious access to mental representations.

    So what extra does it take to make a mental representation conscious?

    Block claims neuroscience currently takes only three options seriously:
    – Higher-Order Thought: you have a thought about a thought (I think this one is almost dead)

    – Global workspace — conscious thoughts have won a brain-wide competition among competing neural activation networks and have hence been made globally available through working memory (Dennett fits here)

    – Biological: Block’s favorite: consciousness is a biological property, but we currently lack the scientific discriminators to pick it out (or something like that, I don’t really follow him)

    Anything else out there that you are aware of?

  9. Neil Rickert: As far as I know, there are humans involved in both cases.

    [… “Very well” means …]

    Comparable to learning by a human child.

    Well, last I heard, there were humans involved in the creation and raising of children too!

    But seriously, you are right, we are nowhere near that yet.

    Part of the reason, I think, is that any successful learning system that matches human children must have both the head start provided by evolution (and reflected in the genes) and the ability to interact with the world as an autonomous agent during development.

    Human programming of a limited sort can be acceptable as a replacement for the genetic head start.

    Interaction with the world would seem to require a sensorimotor system embedded in a robot.

    But there are projects which use interaction with the internet as a partial replacement for that robot, like NELL at Carnegie Mellon.

    Simply perceiving the environment is not enough; agents have to be able to change their relationship to the environment and perceive the impact of those changes. So maybe these systems also need the ability to post things to the internet and learn from the results?

    In fact, maybe some are posting here right now?

  10. BruceS: I’m assuming you both agree that there can be both conscious and unconscious thoughts or at least conscious and unconscious access to mental representations.

    I’m not sure if I am one of the “both”. I am very skeptical of “unconscious thoughts”. Learning can go on while we are not conscious, but I’m doubtful that thoughts can.

    Block claims neuroscience currently only takes three options seriously:

    So much the worse for neuroscience.

    – Biological: Block’s favorite: consciousness is a biological property, but we currently lack the scientific discriminators to pick it out (or something like that, I don’t really follow him)

    I see biology as a mere implementation detail, though I grant that homeostasis is important. Consciousness is far more a philosopher’s problem than it is a scientist’s problem.

  11. BruceS: I’m assuming you both agree that there can be both conscious and unconscious thoughts or at least conscious and unconscious access to mental representations.

    Who is “you both”? If you’re asking me, I think there are unconscious thoughts.

    So what extra does it take to make a mental representation conscious?

    I don’t know.

  12. Erik: Rather, it’s about structure and meaning – consistently.

    Kantian Naturalist: I find it hard to share that view if Russell and Whitehead are right in saying that all of mathematics can be formally derived from set theory!

    (sorry for jumping in late) Whether or not all of mathematics can be derived from set theory is beside the point, I think. Set theory can be used (in more than one way!) to axiomatize the real numbers, to provide a simulacrum of them. The real numbers as such, axiomatized in whatever way, still constitute a distinctive structure with their distinctive properties.

  13. Neil Rickert: Consciousness is far more a philosopher’s problem than it is a scientist’s problem.

    I imagine that scientists who are working on consciousness might disagree! Especially those with mortgages.

    Although what you say was true of science in the ’80s.

    Consciousness is a feature of the world and so is a scientific problem.

    But there are still conceptual issues for philosophers to weigh in on, and possibly help with, in setting scientific research programs and in interpreting results across sciences.

  14. It’s not a scientific problem until there’s a route to solution. Current research into consciousness is butterfly collecting.

    If we understood how brains produce consciousness, we would be able to mimic it in AI hardware.

  15. BruceS: Part of the reason, I think, is that any successful learning system that matches human children must have both the head start provided by evolution (and reflected in the genes) and the ability to interact with the world as an autonomous agent during development.

    As I see it, the whole machine learning strategy is wrong.

    Learning is seen as finding patterns in the data. But that is not what a child faces. Rather, the problem for a child is that there is no data. There are only natural signals, which will be pretty much noise. Learning requires inventing ways of getting useful data. That’s why I am repeatedly talking about categorization. And if a child invents ways of getting useful data, then he is inventing ways of getting meaningful data. So he is constructing his own original intentionality.
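
    To be concrete about the framing I’m rejecting, here is a bare-bones k-means clusterer (a toy implementation of the standard textbook algorithm). It “finds patterns in the data”, but notice everything it gets for free: clean numeric data points, a fixed feature representation, and even the number of categories k. A child gets none of that.

        import numpy as np

        # Bare-bones k-means: alternate between assigning points to the
        # nearest center and moving centers to the mean of their points.
        def kmeans(points, k, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                dists = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
                labels = dists.argmin(axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centers[j] = points[labels == j].mean(axis=0)
            return labels, centers

        # Two pre-packaged blobs of clean 2-D vectors: no noise problem,
        # no need to invent a way of getting usable data in the first place.
        rng = np.random.default_rng(1)
        data = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
                          rng.normal([5, 5], 0.5, (50, 2))])
        labels, centers = kmeans(data, k=2)
        print(centers.round(2))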

  16. [Bruce asks: What extra is needed to make mental content conscious?]
    Walt: I don’t know.

    Humble admission of ignorance is a virtue.

    But not much fun at parties or internet forums.

  17. BruceS: I imagine that scientists who are working on consciousness might disagree!

    Of course it depends on what you mean by “consciousness”, and there’s no agreement on that. It cannot be a scientist’s problem until we have clear, agreed definitions. And we don’t have those.

    It is a philosopher’s problem, because consciousness is largely about subjective experience. If science is to “solve” it, scientists will have to prove that it is really objective. And if they do that, they will have proved that consciousness does not exist.

  18. Neil Rickert:

    Learning is seen as finding patterns in the data. But that is not what a child faces. Rather, the problem for a child is that there is no data. There are only natural signals, which will be pretty much noise. Learning requires inventing ways of getting useful data. That’s why I am repeatedly talking about categorization. And if a child invents ways of getting useful data, then he is inventing ways of getting meaningful data. So he is constructing his own original intentionality.

    I think one of our main differences is that I see the brain machinery, innate concepts, and the process of development in a society as doing the following for all normal children: providing a starting point for finding data, acting as filters so that sensory perception does not start as white noise, and constraining the “shape” of the categories children form.

    Another difference: I have too much respect for neuroscientists to use their profession as the object of the phrase “so much the worse for”, at least without further elaboration.

    But you’ve been at this posting business much longer than I, and may have gotten tired of repeating yourself.

  19. Neil Rickert: Of course it depends on what you mean by “consciousness”, and there’s no agreement on that. It cannot be a scientist’s problem until we have clear, agreed definitions. And we don’t have those.

    It is a philosopher’s problem, because consciousness is largely about subjective experience. If science is to “solve” it, scientists will have to prove that it is really objective. And if they do that, they will have proved that consciousness does not exist.

    I think we can have multiple competing scientific research programs starting with initial operational definitions of consciousness; the winning research program, or some hybrid of the programs, will determine the explanation of consciousness, and the definition will be implicit in that explanation.

    So I think we can do science while simultaneously working on the philosophy, as long as the philosophers stay familiar with the ongoing science.

  20. Neil,

    And if a child invents ways of getting useful data, then he is inventing ways of getting meaningful data. So he is constructing his own original intentionality.

    Straight out of the womb, infants preferentially pay attention to faces over non-faces, moving objects over stationary ones, novelty over stasis.

    They don’t independently “invent” these classification schemes immediately after birth. They inherit them.

  21. keiths: They don’t independently “invent” these classification schemes immediately after birth. They inherit them.

    Brain learning is a continuation of evolution in a different substrate.

    Instincts are just coarse-grained learnings. Details to be filled in.

    I’ve seen this in bantam chickens learning to raise their young. The hens are born with many of the required movements hard-wired, but they are uncoordinated, and their first efforts are always fatal to the chicks. Something selects and refines their efforts.

  22. keiths:

    It seems to me that meaning is an evolutionary prerequisite for thinking, rather than vice-versa. Meaningless thoughts confer no selective advantage.

    walto:

    I think I’d say something like “Neither can happen without the other”–at least if we eliminate the kind of “meaning” tree rings might be said to have in the absence of any thinking things around to “take” that “meaning.”

    Well, if you define meaning in a way that depends on “thinking things”, then of course it can’t have preceded them. But I think that’s a mistake, because the capacity for thought had to evolve, and it seems implausible that it could have evolved from anything but the kind of precognitive meaning that you are excluding.

  23. petrushka,

    My point is that Neil’s portrayal of children is untenable. He sees them as blank slates, faced with a barrage of meaningless sensory noise, who must “invent” classification schemes in order to make sense of the noise and invest it with meaning.

    He is overlooking the fact that we come with inbuilt, innate capacities for classification and categorization.

  24. keiths: My point is that Neil’s portrayal of children is untenable. He sees them as blank slates, faced with a barrage of meaningless sensory noise, who must “invent” classification schemes in order to make sense of the noise and invest it with meaning.

    I would say children have some coarse-grained learning hard-wired by evolution. One of these hard-wired things is the equipment necessary to learn. Learning is inventing and testing.

  25. I don’t understand what precognitive meaning is or how taking tree rings or outboard engine ropes as meaningful in the absence of a meaning taker helps explain the evolution of thinking.

    Are you saying that there have to be things around that are capable of being understood in order for understanding ever to take place? I think I could agree with that. If there were no law-like behaviors and everything in the world were completely random, maybe it would have been impossible for cogitation ever to have evolved.

    If you don’t mean that, I’m not following you.

  26. Bruce,

    I think Dennett now says that one should take the intentional stance towards neurons, because neurons want to join coalitions with other neurons to “push” their point of view (ie their source of activation) towards “brain fame” = consciousness.

    If so, I think he’s stretching the intentional stance to the breaking point. But even if you accept its application to neurons, the point is that their operation is ultimately syntactic, not semantic. Atoms certainly don’t care about meaning. They just follow the laws of physics. Put enough of them together in the right ways and you have a neuron. Put enough of those together in the right ways and you have a brain. It’s still a collection of atoms, all of which are blindly following the laws of physics without regard to meaning.

  27. BruceS: I think one of our main differences is that I see the brain machinery, innate concepts, and the process of development in a society as doing the following for all normal children: providing a starting point for finding data, acting as filters so that sensory perception does not start as white noise, and constraining the “shape” of the categories children form.

    If you assume innate concepts, then you are only pushing the learning problem back to evolution. You are not solving it.

    I’m doubtful that there can be much in the way of innate concepts, because I don’t see how DNA can encode concepts. The information carrying capacity of the DNA is too small.

    I expect that the DNA codes for producing the various kinds of neurons, and for putting them in approximately the right place. But connecting them all up is likely going to depend on adaptive behavior (including adaptive neural growth). So perhaps the initial state provides something like a crude way of categorizing, but it would need to be greatly refined in the presence of real-world interactions.
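
    A back-of-envelope calculation makes the capacity point vivid (round figures from the literature, orders of magnitude only): the genome holds a few gigabits, while an explicit wiring diagram for the brain would need on the order of petabits, so the connections cannot all be spelled out innately.

        import math

        GENOME_BP = 3.2e9    # base pairs in the human genome
        BITS_PER_BP = 2      # four bases = 2 bits per base pair
        NEURONS = 8.6e10     # neurons in an adult human brain
        SYNAPSES = 1.0e14    # synaptic connections (low-end estimate)

        genome_bits = GENOME_BP * BITS_PER_BP
        bits_per_connection = math.log2(NEURONS)  # ~37 bits to name one target
        wiring_bits = SYNAPSES * bits_per_connection

        print(f"genome capacity:         {genome_bits:.1e} bits")
        print(f"explicit wiring diagram: {wiring_bits:.1e} bits")
        print(f"shortfall factor:        ~{wiring_bits / genome_bits:.0e}")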

  28. keiths: Put enough of those together in the right ways and you have a brain. It’s still a collection of atoms, all of which are blindly following the laws of physics without regard to meaning.

    I think you are wrong there. Meaning emerges.

    Sculptures are not just blocks of stone with everything that isn’t David chipped away. Systems are not just masses of atoms.

    Saying they just follow the laws of physics is not useful. It’s a non-useful form of reductionism.

  29. keiths: My point is that Neil’s portrayal of children is untenable. He sees them as blank slates, faced with a barrage of meaningless sensory noise, who must “invent” classification schemes in order to make sense of the noise and invest it with meaning.

    You are making stuff up. I see a child as having a lot of innate ability, but very little that has to do with reality. The child can use that ability to come up with ways of conceptualizing the world.

    He is overlooking the fact that we come with inbuilt, innate capacities for classification and categorization.

    Innate abilities for categorization — yes. Innate categories — no.

  30. Neil,

    You are making stuff up. I see a child as having a lot of innate ability, but very little that has to do with reality. The child can use that ability to come up with ways of conceptualizing the world…

    Innate abilities for categorization — yes. Innate categories — no.

    The distinctions I already mentioned — faces vs non-faces, moving objects vs stationary ones, novelty vs same-old same-old — are innate categories.

    Some other innate categories: Red vs blue, light vs dark, loud vs quiet, sweet vs bitter.

  31. Neil Rickert: If you assume innate concepts, then you are only pushing the learning problem back to evolution. You are not solving it.

    Yes, I agree that evolution had to solve the problems you note. But people get a head start because of that. I’m only talking about human learning based on that starting point.

    I’m doubtful that there can be much in the way of innate concepts,

    I don’t understand why you don’t consider the face preference and recognition abilities of days-old infants to be an example of an innate concept.

  32. keiths:

    If so, I think he’s stretching the intentional stance to the breaking point.

    Keith:
    It has to do with his personal-subpersonal distinction, where he sees a hierarchy of successively less capable “robots” down to the atomic level; see p. 88 of Intuition Pumps for more if you are interested.

    It does seem that the intentional stance stuff and the personal-subpersonal stuff are of interest only to philosophers. I think psychologists use different concepts.

  33. keiths:

    …the point is that their [neurons’] operation is ultimately syntactic, not semantic. Atoms certainly don’t care about meaning. They just follow the laws of physics. Put enough of them together in the right ways and you have a neuron. Put enough of those together in the right ways and you have a brain. It’s still a collection of atoms, all of which are blindly following the laws of physics without regard to meaning.

    petrushka:

    I think you are wrong there.

    Which sentence(s) do you disagree with, and why?

    petrushka:

    Meaning emerges.

    Sure, but that doesn’t mean that meaning “reaches down” and modifies the behavior of the underlying atoms.

    Sculptures are not just blocks of stone with everything that isn’t David chipped away.

    No, because there are many different ways to sculpt David. The sculptor must choose one, and the choice may change as he proceeds to chip away stone.

    In the end, however, a sculpture is a collection of atoms, the sight of which can have interesting emotional and aesthetic effects on other collections of atoms known as ‘human brains’.

    Systems are not just masses of atoms.

    Right. They’re masses of particular kinds of atoms in particular arrangements.

    Saying they just follow the laws of physics is not useful. It’s a non-useful form of reductionism.

    I think it’s extremely useful. It reminds us that there is no such thing as downward causation; it rules out substance dualism; and it explains why we needn’t invoke an élan vital to explain life. It solves many scientific and philosophical problems.

  34. BruceS: I don’t understand why you don’t consider the face preference and recognition abilities of days-old infants to be an example of an innate concept.

    Why would you consider that a concept?

    It is probably an indicator that there are a lot of neurons in about the right place for face recognition. But there is probably still a lot of adaptive tuning needed at that age.

  35. Neil Rickert: Why would you consider that a concept?

    It is probably an indicator that there are a lot of neurons in about the right place for face recognition. But there is probably still a lot of adaptive tuning needed at that age.

    Why wouldn’t I consider face recognition the use of a concept or category? I agree it involves a bunch of neurons doing the right thing.

    Interestingly enough, with regard to tuning, what actually happens is that we lose abilities to sort faces through experience. In particular, six-month-old babies are also good at recognizing different monkey faces, but we lose that ability through experience (unless special care is taken to preserve it).

    But I never said categories don’t change with experience, only that we start with some innate categories.

  36. petrushka:
    Current research into consciousness is butterfly collecting.

    If we understood how brains produce consciousness, we would be able to mimic it in AI hardware.

    If we understood it, we would not have to do the science.

    Most of the consciousness scientists I have read are more aligned to the stamp-collecting view of consciousness. Butterflies are too flighty.

  37. walto,

    I don’t understand what precognitive meaning is…

    By ‘precognitive meaning’ I mean meaning in the absence of thought, e.g. when a bacterium interprets a chemical gradient as meaning that a food source can be found by swimming in a certain direction (aka ‘chemotaxis’).

    …or how taking tree rings or outboard engine ropes as meaningful in the absence of a [thinking] meaning taker helps explain the evolution of thinking.

    It’s implausible that evolution went from ‘no thinking, no meaning’ to ‘thinking and meaning’ in a single leap. I think that precognitive meaning came first, and that it evolved into cognition as organisms began responding to meaningful stimuli in ways that were more complex and less stereotyped.
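
    A toy run-and-tumble simulation shows how little machinery precognitive meaning requires (invented numbers, schematic only; real bacteria modulate a tumble probability rather than tumbling deterministically). The rule: if the attractant concentration rose since the last step, keep swimming; otherwise pick a random new heading. Nothing in the loop represents “food”, yet the walker reliably climbs the gradient.

        import numpy as np

        rng = np.random.default_rng(42)

        def concentration(pos):
            # Attractant peaks at the origin and falls off with distance.
            return np.exp(-np.linalg.norm(pos) / 10.0)

        pos = np.array([20.0, -15.0])  # start far from the source
        heading = rng.uniform(0, 2 * np.pi)
        last_c = concentration(pos)

        for _ in range(2000):
            pos += 0.1 * np.array([np.cos(heading), np.sin(heading)])
            c = concentration(pos)
            if c <= last_c:            # things got worse: tumble
                heading = rng.uniform(0, 2 * np.pi)
            last_c = c

        print("final distance from source:", round(float(np.linalg.norm(pos)), 2))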

  38. Bruce:

    I think Dennett now says that one should take the intentional stance towards neurons, because neurons want to join coalitions with other neurons to “push” their point of view (ie their source of activation) towards “brain fame” = consciousness.

    keiths:

    If so, I think he’s stretching the intentional stance to the breaking point.

    Bruce:

    It has to do with his personal-subpersonal distinction, where he sees a hierarchy of successively less capable “robots” down to the atomic level…

    I’m comfortable with that idea, and I’m even comfortable saying that neurons ‘decide’ to fire under certain conditions, but to say that neurons

    …want to join coalitions with other neurons to “push” their point of view (ie their source of activation) towards “brain fame” = consciousness.

    …is going way too far. A neuron knows nothing of coalitions or consciousness or “brain fame”.

  39. BruceS: If we understood it, we would not have to do the science.

    That’s not true. Understanding occurs in layers and degrees.

    We understand chemistry, but we cannot yet do everything that is possible in chemistry. We have nothing in cognitive science equivalent to the periodic table. We do not even have the door open.

  40. keiths: By ‘precognitive meaning’ I mean meaning in the absence of thought, e.g. when a bacterium interprets a chemical gradient as meaning that a food source can be found by swimming in a certain direction (aka ‘chemotaxis’).

    It makes sense to me that our ability to understand our world evolved from bacteria’s ability to take this or that from their environment–to “interpret” as you say above. But how is that like tree rings or outboard motors?

  41. petrushka: That’s not true. Understanding occurs in layers and degrees.

    Yes, that seems right.

    Joseph Priestley understood combustion. Lavoisier understood it better and differently. And today’s chemists understand it better still.

  42. walto,

    It makes sense to me that our ability to understand our world evolved from bacteria’s ability to take this or that from their environment–to “interpret” as you say above. But how is that like tree rings or outboard motors?

    It gets back to my definition of ‘meaning’:

    I think that meaning is present whenever one thing stands for another, and this expansive definition encompasses phenomena as dissimilar as novels and tree rings. I don’t object when someone says something like “the sun is high in the sky; that means it’s late morning or early afternoon”, and I wouldn’t feel the slightest compulsion to put scare quotes around the word “means” in that sentence. Whether in brains, novels, thermostats, tree rings, or outboard motors, something is standing for something else, and that counts as meaning in my book.

    The cranking of the motor stands for the fact that someone or something is pulling the rope. The tree rings stand for the fact that the tree has been growing for a certain number of years, at a different rate each year. The chemical gradient stands for the fact that a food source is nearby in a certain direction.

    Evolution has taken advantage of these “stands for” relations — these meanings — first in a precognitive way and then in a fully cognitive way.

  43. So, suppose we want to respond to Searle’s Chinese Room argument, according to which one can never get to semantics from syntax, because, on his view, there’s what amounts to an impregnable barrier.

    He likely wouldn’t object to these, call them, “proto-meanings” that bacteria use and that can be replicated by machines. There’s no question that THOSE are entirely syntax. There’s also no question that our own full-blooded meanings evolved from something like that utilized by the bacteria. Hence…..

    Interesting. Is that the strategy?

  44. keiths:

    …is going way too far. A neuron knows nothing of coalitions or consciousness or “brain fame”.

    Keith:
    Yet that is one way of describing a common working theory that neuroscientists use to help explain consciousness, or at least the way awareness and attention work together. Granted, it is just a way of thinking about it, not the detailed model that the scientists test.

    That model is in the details of what a coalition means operationally for science: how to recognize such a thing (on fMRI, for example), what type of inter-neuron and possibly intra-neuron state changes occur, how long they last, etc.

    An analogous, higher-level problem arises when we say people know things and brains compute things. The challenge is to see how to explain one in terms of the other. ETA: Or even if it makes sense to try to do so. But we’ve been through a lot of that already in the reducibility thread.

  45. walto:

    There’s also no question that our own full-blooded meanings evolved from something like that utilized by the bacteria. Hence…..

    Interesting. Is that the strategy?

    A variation of it is the position of Millikan, Dretske, Papineau, and others, as I understand it. Dennett is a supporter of Millikan, so I think he goes along with it.

    Teleosemantics — roughly, meaning has arisen naturally from a representation mechanism evolving within an agent to serve some other consumer within the agent, and being selected by evolution because it increases the fitness of the agent. I think Keith’s point is that such a thing would not be possible if there were no natural way to form somewhat reliable causal links between content and its representation in some vehicle in the agent.

    The Swamp Man stuff that I bring up as part of a joke from time to time is an allusion to a common counterargument to this position. The teleosemantics position is that history matters: things have meaning because of the evolutionary history of the mechanism. But what if a molecular duplicate of you or me were created by lightning striking a swamp? Such a being would not have our evolutionary history. Yet our intuition is that “meaning” would still apply in the same way, since it is a molecular duplicate of a person.

    One’s response to this says something about how seriously one takes intuition and thought experiments.

  46. petrushka: That’s not true. Understanding occurs in layers and degrees.

    We understand chemistry, but we cannot yet do everything that is possible in chemistry. We have nothing in cognitive science equivalent to the periodic table. We do not even have the door open.

    Sure I agree we have a long way to go.

    But are you saying we should not continue to slowly open that door through science?

    Student to Niels Bohr: how will we ever understand these crazy spectral line patterns?

    Niels Bohr: Beats the hell out of me. Better call in the philosophers.

    Somehow, I don’t think it went that way…

  47. BruceS: A variation of it is the position of Millikan, Dretske, Papineau, and others, as I understand it.

    What is the (late, great) Dretske cite on this issue? Thanks.

  48. Thanks. I have those two Dretske books, though I haven’t read them. I do know the “Experience as Representation” paper very well (and like it a lot)–I even considered including it in my Hall book–but it’s not really relevant to this particular issue, I don’t think.

  49. BruceS: Sure I agree we have a long way to go.
    But are you saying we should not continue to slowly open that door through science?

    I will try to put my thought into perspective.

    Rockets were invented many centuries ago. I forget how many, but let’s say a thousand or more years ago.

    Since Newton, we have understood action/reaction. We have had the ability to calculate what it would take to escape earth’s gravity. When you understand the fundamentals you can do research and development. The science and the technology progress together.

    I would argue that we have nothing equivalent to Newton’s laws with regard to how brains work. Perhaps we have gunpowder rockets in the form of computers, but we do not know how to make a self-booting general learning device.

    I have compared the problem to that of OOL. We know a lot of chemistry, but we can’t make life from first principles. Nor can we make AI.
