677 thoughts on “Consciousness Cannot Have Evolved”

  1. keiths: One condition of the car doesn’t mean one condition of the universe

    No sorry, you are the one being imprecise.

    One condition means that ANYTHING that decides the outcome must be the same. So whether we are talking about computers (or your brain, the way you have defined it), OF COURSE the circumstances of the environment within which it produces the outcome are part of that: one condition = one outcome. How could anyone think otherwise? It’s nonsensical. It seems you haven’t thought about this.

    If you ask a computer which forecasts the weather whether it will rain today, how could you think you can separate the computer from the factors which cause it to produce the outcome? THE FACTORS DECIDE THE OUTCOME. If you ask it on a day when it will probably rain, you are going to get a different answer than if you ask it on a day when it probably won’t rain. Same computer!
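
    In code terms, the point is just this (a minimal Python sketch; the forecast rule and the numbers are made up purely for illustration):

        # A deterministic "forecaster": the rule and thresholds are invented for illustration.
        def will_rain(humidity, pressure):
            # The same inputs plus the same internal rule always give the same answer.
            return humidity > 0.8 and pressure < 1005

        print(will_rain(0.9, 1000))  # True  -- a "rainy" day
        print(will_rain(0.3, 1020))  # False -- a "dry" day
        print(will_rain(0.9, 1000))  # True  -- identical conditions, identical outcome

    Same function, different inputs, different answer; identical inputs, only one possible answer.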

    Now you are trying to say, well, forget about the factors, just think of the condition of the computer. Well, the condition of the computer is affected by the factors, for Pete’s sake. Just the same as YOU are affected by the factors of the environment. Did you think somehow someone was suggesting they are separate?

    So, if you NOW say that with the outside factors being the same, and the internal factors of the computer being the same, only ONE outcome is possible under your paradigm, then great, we agree. Then YOU cannot make a decision. Whatever the combination of outside factors and internal factors, only ONE outcome is possible for you.

    I just don’t agree with your paradigm.

  2. keiths,

    The only mystery now would be what the heck you mean by a decision. If the internal factors and external factors combined can only produce ONE outcome and not TWO, then what does a decision mean? “Decision” can now only mean the result, not a “choice”. There is no choice if only one result is possible.

    THIS is the logical problem you haven’t overcome, even though you keep claiming you have.

  3. phoodoo,

    Now you are trying to say, well, forget about the factors, just think of the condition of the computer.

    I’m not trying to say that. I’m pointing out that your suboptimal writing left your intended meaning of “one condition” unspecified.

    So, if you NOW say that with the outside factors being the same, and the internal factors of the computer being the same, only ONE outcome is possible under your paradigm, then great, we agree.

    One outcome is possible, but a choice is still made. We’ve been over and over this.

    Second, it doesn’t depend on physicalism as you seem to think. Do you understand why?

  4. keiths: One outcome is possible, but a choice is still made.

    An outcome was produced, but not a choice. You are being imprecise. A computer doesn’t make a choice, it produces an outcome. That outcome is ENTIRELY based on the combination of external input, and internal configuration.

    There are no other factors. There is no choice being made.

    This is turning into another case of you claiming you have resolved an issue with your argument when you have not.

  5. phoodoo:

    This is turning into another case of you claiming you have resolved an issue with your argument when you have not.

    Does this ring a bell?

    phoodoo:

    You are making up concepts that don’t fit the English language. You have obliterated what the word choosing means.

    keiths:

    Here’s what the dictionary says:
    choose
    /CHo͞oz/
    verb
    1 pick out or select (someone or something) as being the best or most appropriate of two or more alternatives.
    “he chose a seat facing the door”
    2 decide on a course of action, typically after rejecting alternatives.
    “he chose to go”

    Those definitions work just fine when the outcome of the choice is predetermined.

    Think about it, phoodoo.

  6. keiths: 1 pick out or select

    Think about it, what does pick out or select mean?

    You have used the term pre-determined, where the “pre” is unnecessary. The outcome is determined. It is not selected; it is forced by the conditions.

    It’s not selected any more than a ball on a roulette wheel selects where it lands. It just looks more complicated when you can’t see all the factors.

  7. phoodoo,

    It’s not selected

    Sure it is. Before the car’s decision is made, there are two or more alternatives for it to consider. It considers each of them in turn in order to determine which one is the best. It then picks that one.

    Fits the definition perfectly.
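
    For concreteness, here is a minimal Python sketch of what “considers each alternative and picks the best one” amounts to (the alternatives and the scoring rule are invented for the example):

        # Deterministic "choosing": examine each alternative, score it, keep the best.
        def pick_best(alternatives, score):
            best, best_score = None, float("-inf")
            for option in alternatives:      # consider each alternative in turn
                s = score(option)            # evaluate how good it is
                if s > best_score:           # keep it if it beats the current best
                    best, best_score = option, s
            return best

        routes = [{"name": "turn left", "delay": 4}, {"name": "go straight", "delay": 2}]
        print(pick_best(routes, lambda r: -r["delay"])["name"])  # "go straight"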

  8. keiths:
    phoodoo,

    Sure it is. Before the car’s decision is made, there are two or more alternatives for it to consider. It considers each of them in turn in order to determine which one is the best. It then picks that one.

    Fits the definition perfectly.

    Car? Does a roulette ball make a decision? It chooses to land on either red or black?

    Nonsense.

  9. phoodoo,

    The roulette ball doesn’t examine the alternatives and select the best one. It doesn’t fit the definition. The car does fit the definition.

  10. keiths,

    You keep talking about a car, what car??

    You have now added another criterion to your definition: examining. A computer doesn’t examine. It may seem like it is examining, because, you know, sometimes lights flash and stuff, but that is not examining. It doesn’t have senses to examine with; it doesn’t look, feel, and touch. I can understand why you would like to incorporate the term “examining” into the discussion, because you are aware of the differences between a computer and a mind. But you are destroying your own argument by doing so.

    Do you think a calculator examines? When you press 1+1 on a calculator, does it examine that, then choose the best response?

    Sorry, your argument is not solid at all, Keiths.

    I am not a big fan of the style where you just keep repeating “choose the best one” and then claim you are saying something valid. It’s not a selection if one condition only equals one outcome. There is no choice involved. A calculator doesn’t choose the answer to 1+1. Two is the only option.

  11. phoodoo:
    I don’t get this part. Neither Searle’s argument, nor the students have come close to arguing ANYTHING about consciousness

    From the SEP article I linked upthread:
    [start of quote]
    Searle 2010 describes the conclusion in terms of consciousness and intentionality:

    I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality

    […]
    Searle’s shift from machine understanding to consciousness and intentionality is not directly supported by the original 1980 argument. However the re-description of the conclusion indicates the close connection between understanding and consciousness in Searle’s later accounts of meaning and intentionality.
    [end of quote]

    The experiment is supposed to show that one cannot get semantics from syntax alone; that is what “intentionality” refers to. I think consciousness is related to understanding — Searle says the man following the instructions does not understand Chinese. I take understanding, for Searle, to require the possibility of consciousness of meaning, eg in the sense that we can consciously think about and respond to the question “what do you mean by that?”

  12. phoodoo: Well, I am still just not sure that there is “rules following” going on, as much as there is data matching.

    I realized later I should have been clearer about my understanding of “rule-following”. Informally, it means carrying out a set of precise instructions literally and by rote. More formally, it means what is described as an effective procedure in this SEP article on the Church-Turing thesis (and so rule-following is also a key part of what Turing machines do):
    https://plato.stanford.edu/entries/church-turing/

    So when you say “data matching”, to me that implies rule following — namely the rules used to match data. Physical computers running software involve multiple layers of rule following; that’s the point of my hardware reference upthread.

    Another point: you talk about programming in commands as what human programmers do. I’m not sure what you mean by that, but for me the programming languages used by human programmers must all involve three concepts: sequence, selection, iteration. If programming commands just involve sequence for you, then that is not enough to capture what programming languages involve.
    https://irisiri.weebly.com/sequence-selection-and-iteration.html
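
    As a rough illustration, here are a few lines of Python showing all three constructs at once (the function and the data are invented for the example):

        def total_of_evens(numbers):
            total = 0              # sequence: statements executed one after another
            for n in numbers:      # iteration: repeat the body for each item
                if n % 2 == 0:     # selection: branch on a condition
                    total += n
            return total

        print(total_of_evens([1, 2, 3, 4]))  # 6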

    I’ll leave how selection relates to human choosing for you and Keith.

  13. BruceS,

    My more pragmatic conclusion would simply be that, with enough data, one could present the illusion of understanding.

    I think the same thing is happening with Keiths. He thinks if a self-driving car has enough data, it somehow understands that data, and thus can make a decision based on that data. When in reality it is just a bunch of small calculators, wired together and coming up with complex results, which, without a detailed look into the calculations, give the illusion of choice.

  14. BruceS,

    Yes, I don’t disagree with that. I think each of those concepts is still essentially a form of if-then.

    I suppose matching is another part of the equation when you get to database referencing, as some programs have to do. If you ask a computer a question about Selina Gomez, it has to find within its database the things that have been tagged with that reference, and figure out what best matches based on the order of the question, etc. I don’t know which of those three concepts you would call the matching process.
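
    Something like the following minimal sketch, I suppose, where the “database” entries and the scoring rule are entirely made up for illustration:

        # Toy tag matching: score each entry by how many of its tags appear in the
        # question, then return the best-scoring entry. Pure rule following.
        database = [
            {"tags": {"gomez", "singer", "music"}, "fact": "Selina Gomez is a singer."},
            {"tags": {"gomez", "actress", "film"}, "fact": "Selina Gomez acts in films."},
            {"tags": {"weather", "rain", "forecast"}, "fact": "Rain is forecast today."},
        ]

        def best_match(question):
            words = set(question.lower().replace("?", "").split())
            return max(database, key=lambda entry: len(entry["tags"] & words))

        print(best_match("Does Gomez make music as a singer?")["fact"])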

  15. phoodoo: My more pragmatic conclusion would simply be that, with enough data, one could present the illusion of understanding

    Well, that’s what the Chinese Room claims to demonstrate with the data being the rule book the man follows.

    Given that argument, for a scientific explanation of human understanding, you have to accept one of the replies or create one of your own and then incorporate that into an approach to cognitive science. The Computational Theory of Mind article I linked covers many such attempts.

    Sorry, I am not going to get into the choice topic, aside from the comment that you seem to be making a claim similar in spirit at least to the Chinese Room argument.

    Are you in some time zone where this is a normal time of day? My excuse is insomnia (I am in Toronto).

  16. BruceS: Are you in some time zone where this is a normal time of day? My excuse is insomnia (I am in Toronto).

    Yes.

    All I am saying about the Chinese Room argument is that I don’t find it useful to explain anything, other than that we can be fooled. People can also do puppet shows and make it seem they are really talking. It’s not very explanatory in regards to life. It doesn’t help us to understand when we are NOT being fooled, so there seems little point.

  17. phoodoo: All I am saying about the Chinese Room argument is that I don’t find it useful to explain anything,

    You are right that it does not explain anything.

    But I take the point of the argument to be that there is something that needs to be explained, namely the difference between human understanding and computer syntactic rule following.

    Searle says the explanation lies in human biology.

    CTM says that, in essence, there is no difference, although the details matter. Precisely which details is unresolved and is the subject of ongoing research and philosophical/scientific controversy.

  18. Simple engineering question:

    Is it possible (for humans) to make something whose behavior is too complex to predict?

  19. BruceS,
    Yup. Those who claim there has to be some essential difference between a brain (or an individual with a brain) analysing sensory information and acting on it, and a sufficiently complex computer analysing sensory information and acting on it (as a sufficiently well-designed driverless car control system might), need to explain why computation depends on the medium.

  20. Kantian Naturalist:
    CharlieM,

    Dennett is to be read as exploring the consequences of a hypothesis. The proof of the pudding is in the eating of it: what problems does it avoid and what puzzles does it solve? You are constantly trying to go back to some ultimate first principle. That’s not how Dennett does philosophy — and I think he’s right to avoid that whole briar patch of epistemology. (But for a work of philosophy that develops the epistemology that’s compatible with Dennett’s work, try Groundless Belief by Williams.)

    To “avoid that whole briar patch of epistemology” is reminiscent of the drunk man looking for his keys under the light. We should not be avoiding an undertaking purely on the grounds that it is going to be difficult to get through.

    When you talk about “the given” as used by Sellars and I talk about “the given” as used by Steiner, we are talking about two different things.

    Correct me if I’m wrong but my understanding is that Sellars’ “given” is something that is known in a fundamental way without any effort on the part of the knower. Steiner’s “given” is the opposite of this. It is everything and anything that enters my sphere of apprehension prior to my activity in trying to understand it.

    I think that Sellars and Steiner would have agreed that what is given through our senses contains no information that we would be able to gain knowledge of without activity on our part. Do you agree?

    So Steiner begins by trying to understand cognition itself without making any prior assumptions. He states that, “when the better-known systems of epistemology are more closely examined it becomes apparent that a whole series of presuppositions are made at the beginning, which cast doubt on the rest of the argument” and in Truth and Knowledge, Introduction to The Philosophy of Freedom he says:

    The object of the following discussion is to analyze the act of cognition and reduce it to its fundamental elements, in order to enable us to formulate the problem of knowledge correctly and to indicate a way to its solution. The discussion shows, through critical analysis, that no theory of knowledge based on Kant’s line of thought can lead to a solution of the problems involved. However, it must be acknowledged that Volkelt’s work, with its thorough examination of the concept of “experience” provided a foundation without which my attempt to define precisely the concept of the “given” would have been very much more difficult. It is hoped in this essay to lay a foundation for overcoming the subjectivism inherent in all theories of knowledge based on Kant’s philosophy. Indeed, I believe I have achieved this by showing that the subjective form in which the picture of the world presents itself to us in the act of cognition — prior to any scientific explanation of it — is merely a necessary transitional stage which is overcome in the very process of knowledge. In fact the experience which positivism and neo-Kantianism advance as the one and only certainty is just the most subjective one of all. By showing this, the foundation is also laid for objective idealism, which is a necessary consequence of a properly understood theory of knowledge. This objective idealism differs from Hegel’s metaphysical, absolute idealism, in that it seeks the reason for the division of reality into given existence and concept in the cognizing subject itself; and holds that this division is resolved, not in an objective world-dialectic but in the subjective process of cognition. I have already advanced this viewpoint in An Outline of a Theory of Knowledge, 1885, but my method of inquiry was a different one, nor did I analyze the basic elements in the act of cognition as will be done here.

    Steiner argues that Kant’s question, “How are synthetical judgments a priori possible?” is not free of presuppositions and so sets us in the wrong direction right from the start.

    Kantian Naturalist:
    What emerges quite nicely in Williams and Dennett is a consistently anti-foundationalist, holistic epistemology: what we aim for is not a bedrock of unquestionable first principles but inferential consistency across multiple lines of evidence and inquiry.

    As Charles Peirce put it, “reasoning should not form a chain which is no stronger than its weakest link, but a cable whose fibers may be ever so slender, provided they are sufficiently numerous and intimately connected” (in “Some Consequences of Four Incapacities”).

    The consistency and strength of the cable doesn’t matter if the load it is supporting is of no use to anyone and is just so much excess baggage.

  21. newton:

    CharlieM: Newton believed that colours are somehow “hidden” in white light. In what way do you think this has been applied to technology?

    Fiber optic transmission of data.

    The first thing I would note here is that fibre optic transmission uses radiation outside of the visible spectrum.

    What does the fact that we perceive a range of colours have to do with fibre optic transmission? What is being transmitted, colours or light energy? Both have to do with light, but they are not the same thing.

  22. Alan Fox: need to explain why computation depends on the medium.

    What they say is that the exact functional (ie input/output) behaviour implemented by the medium depends on the details of the medium, eg to properly capture brains, one needs to capture the functions made possible by the biochemistry and architecture of neurons, systems of interconnected neurons, hormones, and other brain components.

    That is called neurofunctionalism.

  23. BruceS: What they say is that the exact functional (ie input/output) behaviour implemented by the medium depends on the details of the medium, eg to properly capture brains, one needs to capture the functions made possible by the biochemistry and architecture of neurons, systems of interconnected neurons, hormones, and other brain components.

    That is called neurofunctionalism.

    OK. So is this supposed to be binary? Could not an accurate enough model emulate the behaviour of neurons, the architecture, the connections, the changing of connections (dendrite growth and atrophy), and the effect of hormones, to produce an “artificial” brain? Of course it cannot be more complex than a human brain!

  24. Alan Fox: Could not an accurate enough model emulate the behaviour of neurons

    Yes, that is what neurofunctionalism means. In particular, functionalism means that there is nothing to being X beyond the functional behavior of X.

    If it walks like a duck and quacks like a duck then it is a duck. Or as Dennett puts it:
    [start of quote]
    “Functionalism is the idea that handsome is as handsome does, that matter only matters because of what matter can do. Functionalism in this broadest sense is so ubiquitous in science that it is tantamount to a reigning presumption of all of science.”
    [end of quote]

    The neuro bit emphasizes that it is the functions of brain processes and structures that are of concern.

    And to get back to the OP, Goff and Kastrup disagree that duplicating brain functions is enough. They argue that functional behavior, and so science, cannot capture the qualitative nature of subjective experience.

    But they don’t agree on the best alternative — Goff says bottom-up panpsychism, but Kastrup rejects this (and top down cosmopsychism) in favor of idealism.

  25. Corneel:

    CharlieM: I have left it and I have built my world picture from there. There are many here including yourself who have been arguing against some of the conclusions I have drawn from this starting position.

    That’s not very reassuring. Your conclusions never logically follow from any “sure starting point”, and are invariably fanciful fabrications (sorry). Looks to me like you use your insistence on epistemological bedrock mainly to dismiss alternative “reductionist” explanations.

    I’d need you to be more specific to respond to this. If you are just making an observation I’ll leave it at that.

    CharlieM: there is this from Physics Today:

    “Exploratory Experimentation: Goethe, Land, and Color Theory”

    I agree that exploratory and descriptive experiments have a place in research, but fail to see why you claim that as a success of “not prematurely assuming a separation between subject and object”. You will need to unpack this a little for me.

    That is not what I am claiming. I gave that as an example of observation without prejudgement.

    And how did you envisage this to be implemented for gaining an understanding of consciousness?

    By observing consciousness in ourselves and in the world around us without jumping to conclusions about cause and effect, I’m sure we can come to some agreements about its attributes.

    CharlieM: Newton believed that colours are somehow “hidden” in white light. In what way do you think this has been applied to technology?

    Newton* mentioned one application already. The example I was thinking of was spectrophotometers, which can measure light absorbance of a sample. Light diffraction is used to produce a monochromatic beam of light. There are many more applications.

    ETA *The other one, who comments here at TSZ. LOL!

    As Goethe said, colours are the product of the interaction of light and darkness. Spectrophotometers work by manipulating attenuated light. The results of spectrophotometry are not dependent on light containing colours. The colours are products of the activity; they are not “in the light”. Light is invisible.

  26. Quote from Chalmers:
    ‘When I was in graduate school, I recall hearing “One starts as a materialist, then one becomes a dualist, then a panpsychist, and one ends up as an idealist”. I don’t know where this comes from, but I think the idea was something like this. First, one is impressed by the successes of science, endorsing materialism about everything and so about the mind. Second, one is moved by problem of consciousness to see a gap between physics and consciousness, thereby endorsing dualism, where both matter and consciousness are fundamental. Third, one is moved by the inscrutability of matter to realize that science reveals at most the structure of matter and not its underlying nature, and to speculate that this nature may involve consciousness, thereby endorsing panpsychism. Fourth, one comes to think that there is little reason to believe in anything beyond consciousness and that the physical world is wholly constituted by consciousness, thereby endorsing idealism.’

    https://philpapers.org/archive/CHAIAT-11.pdf

  27. BruceS: Goff and Kastrup disagree that duplicating brain functions is enough. They argue that functional behavior, and so science, cannot capture the qualitative nature of subjective experience.

    The thinking-about-thinking trap! We can’t understand ourselves just by thinking about it. There is mileage in a bottom-up approach – better models of simpler systems – but I guess economics will govern where research money is spent.

  28. BruceS: Fourth, one comes to think that there is little reason to believe in anything beyond consciousness and that the physical world is wholly constituted by consciousness, thereby endorsing idealism.

    If it works for him, best of luck! 🙂

  29. Alan Fox: The thinking-about-thinking trap! We can’t understand ourselves just by thinking about it. There is mileage in a bottom-up approach – better models of simpler systems – but I guess economics will govern where research money is spent.

    I am incompetent to discuss the philosophical issues here, but I wonder why no one is discussing things from an evolutionary standpoint. Instead of asking how a human consciousness could come into existence, start with (say) a worm, a bilaterally symmetric tubelike thing with muscles, a mouth at the front, and some nerve cells to sense the environment. Are there ways that natural selection could improve the connections between nerves and muscles? Could improve the firing patterns of the nerves to make the worm orient better toward food or away from predators? Could make ganglia bigger and have their nerve connections do a better job of turning the nerve signals into effective behavior? Are there ways that such a nervous system could have states that reflect recent perceptions in addition to immediate ones?

    And does this get us closer to a beast that can process inputs algorithmically? Or have I misunderstood the issues, and failed to understand that the “consciousness” of monkeys, or mice, is irrelevant, that we must be discussing human consciousness and only that? (Or that all this was covered somewhere upthread in the 500 comments not all of which I read?)

    (And to anyone who points out that I have started by assuming that evolution happened and that natural selection is the reason we have such effective adaptations, of course I did — this is not addressed to people who want to disbelieve those assumptions).

  30. Joe Felsenstein: I am incompetent to discuss the philosophical issues here, but I wonder why no one is discussing things from an evolutionary standpoint.

    I have to agree.

    “Consistent with this hypothesis, Gordon Gallup found that chimps and orangutans, but not little monkeys or terrestrial gorillas, demonstrated self-awareness in mirror tests. The concept of consciousness can refer to voluntary action, awareness, or wakefulness.”

    https://en.m.wikipedia.org/wiki/Animal_consciousness

    Proof for the evolution of consciousness…

  31. phoodoo: An outcome was produced, but not a choice. You are being imprecise. A computer doesn’t make a choice, it produces an outcome. That outcome is ENTIRELY based on the combination of external input, and internal configuration.

    It also depends on other factors, such as whether a computer chip fails in the middle of the computation.

    There is no choice being made.

    I agree with phoodoo on this point. We talk of computers making decisions, but that talk involves using metaphors.

  32. keiths: Here’s what the dictionary says:
    choose
    /CHo͞oz/
    verb
    1 pick out or select (someone or something) as being the best or most appropriate of two or more alternatives.
    “he chose a seat facing the door”
    2 decide on a course of action, typically after rejecting alternatives.
    “he chose to go”

    That definition fits pragmatic decision making far better than it fits the conclusions of logic.

    Pragmatism is in; logic is out.

    Logic is still available as a pragmatically chosen tool. But at its core, choosing is pragmatic rather than logical.

  33. phoodoo: All I am saying about the Chinese Room argument is that I don’t find it useful to explain anything, other than that we can be fooled.

    I’d say that it is useful. But it does not demonstrate what Searle claims to show.

    It is useful for making clear the distinction between semantic decisions and syntactic decisions. The argument makes the case that the computer uses purely syntactic operations, and does not touch the semantics (in the normal sense of semantics rather than in the sense of “formal semantics”).

    This stark distinction is well known to mathematicians and computer programmers. But it is not nearly as familiar to people whose background is in the humanities.
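
    A toy sketch of purely syntactic rule following (the rule-book entries are invented for illustration): the program maps input strings to output strings by lookup, with no grasp of what either string means.

        # Purely syntactic processing: symbols in, symbols out, by table lookup.
        rule_book = {
            "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
            "现在几点？": "对不起，我不知道。",    # "What time is it?" -> "Sorry, I don't know."
        }

        def room(message):
            # Return a fixed symbol string when no rule matches.
            return rule_book.get(message, "请再说一遍。")  # "Please say that again."

        print(room("你好吗？"))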

  34. Joe Felsenstein,

    I think that an evolutionary perspective is extremely helpful, but it doesn’t resolve the philosophical issue. (From which one might conclude, so much the worse for philosophy.)

    The evolution of complexity in neurocomputational processes, beginning with an ancestral bilateral worm, still leaves open the philosophical question as to why that process is accompanied by increasing degrees of awareness.

  35. BruceS: Searle says the explanation lies in human biology.

    That’s Searle’s failure right there.

    He is dealing with a philosophical question, not a biological question. It is a mistake to pass the buck to biology.

  36. Alan Fox: Those who claim there has to be some essential difference between a brain (or an individual with a brain) analysing sensory information and acting on it, and a sufficiently complex computer analysing sensory information and acting on it (as a sufficiently well-designed driverless car control system might), need to explain why computation depends on the medium.

    There is a difference between a brain (or, really, a person with a brain) acting in the world, and a computer acting in the world.

    And I suppose that’s a trivial point, because a computer doesn’t actually have a world. Computation is abstract.

  37. Joe Felsenstein: I am incompetent to discuss the philosophical issues here…

    Wouldn’t worry about that, Joe. Plenty of others are pitching in without the least competence; including me.

    …but I wonder why no one is discussing things from an evolutionary standpoint. Instead of asking how a human consciousness could come into existence, start with (say) a worm, a bilaterally symmetric tubelike thing with muscles, a mouth at the front, and some nerve cells to sense the environment. Are there ways that natural selection could improve the connections between nerves and muscles? Could improve the firing patterns of the nerves to make the worm orient better toward food or away from predators? Could make ganglia bigger and have their nerve connections do a better job of turning the nerve signals into effective behavior? Are there ways that such a nervous system could have states that reflect recent perceptions in addition to immediate ones?

    And does this get us closer to a beast that can process inputs algorithmically? Or have I misunderstood the issues, and failed to understand that the “consciousness” of monkeys, or mice, is irrelevant, that we must be discussing human consciousness and only that? (Or that all this was covered somewhere upthread in the 500 comments not all of which I read?)

    I’m totally in agreement with you that evolution is the only plausible route to sentient organisms such as humans. Regarding consciousness, it’s apparently trolling to ask what people mean when they use the word, and what it is about “consciousness” that precludes “it” emerging along an evolutionary, incremental pathway, as have all other aspects of sentient awareness, self-awareness and the ability to think.

    (And to anyone who points out that I have started by assuming that evolution happened and that natural selection is the reason we have such effective adaptations, of course I did — this is not addressed to people who want to disbelieve those assumptions).

    🙂

  38. Kantian Naturalist: The evolution of complexity in neurocomputational processes, beginning with an ancestral bilateral worm, still leaves open the philosophical question as to why that process is accompanied by increasing degrees of awareness.

    But there are, in my view, differing levels of awareness and self-awareness across the animal kingdom. Those branch tips can be linked to a convincing extent by bringing in fossils and phylogenetics.

  39. Neil Rickert: And I suppose that’s a trivial point, because a computer doesn’t actually have a world. Computation is abstract.

    But we can feed sensory information into the model, surely?

  40. Joe Felsenstein: I am incompetent to discuss the philosophical issues here, but I wonder why no one is discussing things from an evolutionary standpoint.

    Pretty much all of my thinking about human cognition is from an evolutionary standpoint. That’s probably why just about everyone thinks I am obviously wrong.

    Instead of asking how a human consciousness could come into existence, start with (say) a worm, a bilaterally symmetric tubelike thing with muscles, a mouth at the front, and some nerve cells to sense the environment. Are there ways that natural selection could improve the connections between nerves and muscles?

    Well, no, that’s not how I have been thinking of it. That way of thinking is probably a pathway into the trap of computationalism.

  41. Neil Rickert: That way of thinking is probably a pathway into the trap of computationalism.

    No, it’s the trap of thinking we can think our way through to solving how we think. We are misled by the first-person experience of thinking into thinking that we thus know how we think. At least I think so! 😉

  42. Alan Fox: No, it’s the trap of thinking we can think our way through to solving how we think.

    I was not attempting to solve the problem of how we think. Instead, I was looking at the question of how we learn. And that question seemed to be the key to everything.
