Bad Dogs and Defective Triangles

Is a dog with three legs a bad dog? Is a triangle with two sides still a triangle or is it a defective triangle? Perhaps if we just expand the definition of triangle a bit we can have square triangles.

There is a point of view that holds that to define something we must say something definitive about it and that to say that we are expanding or changing a definition makes no sense if we don’t know what it is that is being changed.

It is of the essence or nature of a Euclidean triangle to be a closed plane figure with three straight sides, and anything with this essence must have a number of properties, such as having angles that add up to 180 degrees. These are objective facts that we discover rather than invent; certainly it is notoriously difficult to make the opposite opinion at all plausible. Nevertheless, there are obviously triangles that fail to live up to this definition. A triangle drawn hastily on the cracked plastic sheet of a moving bus might fail to be completely closed or to have perfectly straight sides, and thus its angles will add up to something other than 180 degrees. Even a triangle drawn slowly and carefully on paper with an art pen and a ruler will have subtle flaws. Still, the latter will far more closely approximate the essence of triangularity than the former will. It will accordingly be a better triangle than the former. Indeed, we would naturally describe the latter as a good triangle and the former as a bad one. This judgment would be completely objective; it would be silly to suggest that we were merely expressing a personal preference for straightness or for angles that add up to 180 degrees. The judgment simply follows from the objective facts about the nature of triangles. This example illustrates how an entity can count as an instance of a certain type of thing even if it fails perfectly to instantiate the essence of that type of thing; a badly drawn triangle is not a non-triangle, but rather a defective triangle. And it illustrates at the same time how there can be a completely objective, factual standard of goodness and badness, better and worse. To be sure, the standard in question in this example is not a moral standard. But from the A-T point of view, it illustrates a general notion of goodness of which moral goodness is a special case.
And while it might be suggested that even this general standard of goodness will lack a foundation if one denies, as nominalists and other anti-realists do, the objectivity of geometry and mathematics in general, it is (as I have said) notoriously very difficult to defend such a denial.

– Edward Feser. Being, the Good, and the Guise of the Good

This raises a number of interesting questions, by no means limited to the following:

What the fact/value distinction is.

Whether values can be objective.

The relationship between objective goodness and moral goodness.

And of course, whether a three-legged dog is still a dog.

Meanwhile:

One Leg Too Few

469 thoughts on “Bad Dogs and Defective Triangles”

  1. The “project” meme is also very common with indirect realists, phenomenalists and sense-data theorists. FWIW, I don’t think Harman would be happy with being given “transparent” (as he uses it in that paragraph) in return for also taking on “seeing the representation as if” and “projecting” the properties the external object seems to have.

    Note that those aren’t criticisms of your position, just a clarification of what I take Harman’s to be.

  2. walto,

    …representational theories of consciousness often (maybe even generally) do not require that the representations be perceived: in fact, they usually oppose that view.

    At the very least, you’d have to clarify that the word “perceive” has different meanings in the two contexts. To perceive an object is obviously not the same as perceiving the representation of that object. If it were, then you’d have a homuncular regress.

  3. keiths: To perceive an object is obviously not the same as perceiving a representation of that object. If it were, then you’d have a homuncular regress.

    Yes!

    But then, what IS our relation to these representations, exactly? I know what “perceive” means with respect to tables and chairs, but what is this other meaning?

  4. There’s really no distinct English word for it, which perhaps is why we fall back on “perceive”.

    As an engineer, I just think of it as the second half of the perceptual process. The first half involves creating, maintaining and constraining the representation on the basis of sensory input plus top-down information. The second half involves forging the conscious experience from the representation, however that is accomplished.

    In the case of dreams and hallucinations, representations are still being constructed and maintained, but they are no longer being correctly constrained by sensory input.

    I find it useful sometimes to flip this on its head by thinking of everyday waking consciousness as reality-constrained dreaming.

  5. I think I’ve posted this bit of Hall before:

    we must not infer from the ‘undeniable commonsensible fact that we perceive tables and chairs out in the room, not in our heads’ that there is some sort of ‘law of projection’ at work, somehow launching our interior ideas or images out into the world. When looking at a sheet of paper in front of him, the naïve realist will simply make ‘the bold assumption that the only thing possessing the congeries of properties [we perceive] is the sheet of paper [rather than] look into our brains for them, or invent some unobservable mental events that display them’ (Hall, 1959: 81).

    –Gospel-thumper Horn

  6. keiths:

    But “transparent” is an unfortunate word choice in that case, because we don’t “see through” the representation to the object being represented. We see the representation as if it were the object out there in the world. It’s not really transparent in the standard sense of the word.

    ETA: A better metaphor might be that we “project” the representation out onto the world.

    When you say “we see the representation” it sounds to me like the Cartesian theater Dennett has issues with. I know you respect Dennett, so I assume you agree with this position against the Cartesian theater. So I guess you don’t mean the phrase to imply a Cartesian theater. But what else could it mean?

    Maybe you wanted to avoid that possibility with the ETA about projecting. But, still, by having us involved in projecting something, it seems to me you still need some Cartesian theater for the representation to be brought together and overlaid on something.

    I think it is better to avoid phrases of the form “we x the representation” in defining seeing. (ETA: Agent-level) seeing just is representing, plus some other stuff to make the representation conscious. At least according to representationalism.

    To Dennett, as I understand him, it is something closer to a series of microjudgings that we tell ourselves a story about by taking the intentional stance wrt ourselves, a story which results in an agent level description of subjective experience. But since it is the microjudgings that are real, the subjective experience is an illusion. At least, that is how I understand him as of midday today EST.

    (And there is also the sense data approach, which Walt describes, but which has a bad rep since the existence of sense data seems to imply dualism of some sort.)

    ETA: some of my points are already addressed in posts while I was editing. So you can stop reading my post now.

  7. I think that at work in the homunculus fallacy and the closely related metonymic fallacy is a difficulty in respecting the difference between agential descriptions and subagential explanations. Here’s how Michael Wheeler puts it nicely:

    Cognitive-scientific explanation is a species of empirical explanation in which the ultimate goal is to map out the subagential elements (e.g. the neural states and mechanisms, or the functionally identified psychological subsystems) whose organization, operation, and interaction make it intelligible to us how it is that unmysterious causal processes (such as those realized in brains) can give rise to the psychological phenomena that are genuinely constitutive of agency and cognition.

    Wheeler goes on to say that we need a mutually constraining relationship between phenomenology and cognitive science.

    As a matter of phenomenology — agential description — direct realism is correct. When I am describing what I perceive, I perceive objects; I do not perceive “sense-data” and then infer that there are objects. What happens at the subagential level that causally implements my perception of objects is a different question, but it is nevertheless quite likely (I think) that a Gibson-style explanation is closer to the truth than a Marr-style explanation.

  8. keiths:
    There’s really no distinct English word for it, which perhaps is why we fall back on “perceive”.

    As an engineer, I just think of it as the second half of the perceptual process. The first half involves creating, maintaining and constraining the representation on the basis of sensory input plus top-down information. The second half involves forging the conscious experience from the representation, however that is accomplished.

    In the case of dreams and hallucinations, representations are still being constructed and maintained, but they are no longer being correctly constrained by sensory input.

    I find it useful sometimes to flip this on its head by thinking of everyday waking consciousness as reality-constrained dreaming.

    Well, I’d make all that an iterative two-way feedback process, not two discrete “halves”. We model-move-getnewdata-adjustmodel-move-getnewdata-adjustmodel-move [loop until whenever].

    In my view.
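Elizabeth’s loop lends itself to a short sketch. This is a toy illustration only: the “world”, the single-number “model”, and the gain are invented for the example, not any real model of perception.

```python
# Toy sketch of the model-move-getnewdata-adjustmodel loop described above.
# Everything here is an invented placeholder, not a real model of perception:
# the "world" just returns a fixed reading, and "adjusting the model" means
# nudging a single estimate toward the newest observation.

def perception_action_loop(world, estimate=0.0, gain=0.5, steps=5):
    trace = []
    for _ in range(steps):
        action = estimate            # move, guided by the current model
        observation = world(action)  # get new data back from the world
        estimate += gain * (observation - estimate)  # adjust the model
        trace.append(estimate)
    return trace

# The "world" here ignores the action and steadily reports 10.0.
trace = perception_action_loop(world=lambda action: 10.0)
# The estimate climbs toward 10.0: 5.0, 7.5, 8.75, 9.375, 9.6875
```

The point of the sketch is just that modelling and moving are interleaved in one loop, rather than being two discrete halves.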

  9. Bruce,

    When you say “we see the representation” it sounds to me like the Cartesian theater Dennett has issues with. I know you respect Dennett, so I assume you agree with this position against the Cartesian theater. So I guess you don’t mean the phrase to imply a Cartesian theater. But what else could it mean?

    The use of the word “see” is just a convenience (see my discussion of “perceive” with walto above). I am really just referring to the downstream processing of the representation that leads to the conscious experience; thus there is no homuncular regress.

    I think it is better to avoid phrases of the form “we x the representation” in defining seeing. (ETA: Agent-level) seeing just is representing, plus some other stuff to make the representation conscious.

    That’s almost exactly how I described it to walto:

    As an engineer, I just think of it [“perceiving” the representation] as the second half of the perceptual process. The first half involves creating, maintaining and constraining the representation on the basis of sensory input plus top-down information. The second half involves forging the conscious experience from the representation, however that is accomplished.

  10. KN,

    What happens at the subagential level that causally implements my perception of objects is a different question, but it is nevertheless quite likely (I think) that a Gibson-style explanation is closer to the truth than a Marr-style explanation.

    Could you explain why you think so?

  11. Elizabeth: I find it useful sometimes to flip this on its head by thinking of everyday waking consciousness as reality-constrained dreaming.

    I recall piloting a 747 through the streets of downtown Cincinnati. (I don’t know why I dream of Cincinnati. I have driven through it a number of times.)

    My co-pilot asked what happens when we come to an intersection. I said, don’t worry; we have the right of way.

    I woke up laughing hysterically. It seemed funny at the time.

  12. Elizabeth: Well, I’d make all that an iterative two-way feedback process, not two discrete “halves”. We model-move-getnewdata-adjustmodel-move-getnewdata-adjustmodel-move [loop until whenever].

    In my view.

    Which is roughly what the Bayesian, Predictive Coding view is trying to formalize, as I understand it. It has error between sensory input and prediction propagating up and predictions from representations (of pdfs) propagating down.
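That up-and-down traffic can be put in a drastically simplified sketch, with point estimates standing in for the probability distributions (pdfs) a real predictive-coding model would propagate; the numbers and update rule are invented for illustration.

```python
# Minimal predictive-coding sketch: the higher level sends a prediction
# down; the mismatch with sensory input (the prediction error) is passed
# back up and used to revise the higher level's estimate. Real predictive
# coding propagates distributions (pdfs); this uses bare point estimates.

def predictive_coding_step(estimate, sensory_input, learning_rate=0.1):
    prediction = estimate                    # top-down prediction
    error = sensory_input - prediction       # bottom-up prediction error
    return estimate + learning_rate * error  # revise to shrink future error

estimate = 0.0
for sensory_input in [4.0, 4.0, 4.0, 4.0]:  # a steady stimulus
    estimate = predictive_coding_step(estimate, sensory_input)
# After four steps the estimate has moved from 0.0 toward 4.0.
```

Each pass shrinks the prediction error, which is the same shape as the loop Elizabeth describes: prediction down, error up, model adjusted.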

  13. Bruce & keiths, it may very well be that what you two are saying is consistent with representationism as well as representationalism! I’m really not sure.

  14. keiths: As an engineer, I just think of it [“perceiving” the representation] as the second half of the perceptual process. The first half involves creating, maintaining and constraining the representation on the basis of sensory input plus top-down information. The second half involves forging the conscious experience from the representation, however that is accomplished.

    That comes across to me as a sub-agential description, since I read it as being about the causal processes among the components of the visual system.

    I believe that most of the philosophy of perception is to be taken at the whole agent level, which I understand as any of the following:
    1. the first person, subjective perspective

    2. the perspective from which one undertakes transcendental analysis — what must the nature of perception be in order for it to be as we experience it

    3. the perspective targeted by the intentional stance or folk psychology, that is the one we are taking for another person when we talk about that person’s beliefs, desires, reasons, and acts (including ourselves)

    4. the reporter of raw data in Dennett’s Heterophenomenology method for studying subjective experience scientifically

    5. the top level in a system-subsystem decomposition analysis of perception, what we called the context-level diagram back when I did IT rather than managing it (which pre-dated OO Analysis and Design, but not keypunches – shout out to Yourdon). Actually, this one may be wrong because the analysis uses flows of data or control, and that is more associated with science than philosophy. Maybe it is better thought of as one place where science and philosophy have to come together.

  15. Bruce,

    That comes across to me as a sub-agential description, since I read it as being about the causal processes among the components of the visual system.

    Yes, absolutely. The direct-vs-indirect debate in perceptual psychology is all about what goes on “under the hood”, preconsciously.

    I don’t think anyone denies that we experience ourselves as perceiving the world directly. What would be the evolutionary advantage of perceiving our representations as representations, rather than simply as the objects they represent?

  16. walto:
    Bruce & keiths, it may very well be that what you two are saying is consistent with representationism as well as representationalism! I’m really not sure.

    I hope it is consistent with both, at least for the sake of the editors of Encyclopedia Britannica

  17. petrushka,

    I’m glad you wake up laughing. I have a similar dream, but it’s always stressful because I worry about the wings getting sheared off by light poles and buildings. Somehow they never do.

    I also have this recurrent dream about taking off from the local airport in a light plane, and only then realizing that I’m in big trouble because I forgot to radio the control tower.

    I suspect that this might be the pilot equivalent of the common dream where you realize, just before finals, that you forgot to attend class all semester. I’ve been meaning to ask my pilot friends if they have the same dream.

  18. I’m not a pilot, although I had six hours of instruction many years ago. Perhaps it still haunts me.

  19. keiths: KN,

    What happens at the subagential level that causally implements my perception of objects is a different question, but it is nevertheless quite likely (I think) that a Gibson-style explanation is closer to the truth than a Marr-style explanation.

    Could you explain why you think so?

    Based on what little I’ve read about this so far, a Marr-style explanation of visual processing involves sequentially building up, within the cognitive system, a model of the external visual stimulus, whereas a Gibson-style explanation involves taking the sensory stimulus as a source of information for the cognitive system to take advantage of. It’s a difference between accurately representing the environment and continuously engaging with the environment to maintain coordinated patterns of behavior.

    While classical AI architectures look to be good for accurate representations of their environments, neural networks look to be better for modeling a sensorimotor-environment coordinated system. And neural networks are (with various caveats) more biologically realistic than classical architectures. So it’s more likely that our brains do what neural networks do than what a classical AI architecture does.

  20. walto: Wow, what a mess they made there.

    OK, I’ll bite.

    I did not read the EB article but only confirmed it said the words were synonyms. So I assumed you were making a joke which I did not quite get but figured I’d continue it. (I admit that the “for the sake of” was not one of my funniest attempts at irony. Or any good at all, really.)

    A little more searching shows Tye calls it representationism and Lycan in SEP calls it representationalism. Those guys ought to know.

    So is it a joke or do you have a real difference in mind?

  21. keiths:
    Bruce,

    Yes, absolutely. The direct-vs-indirect debate in perceptual psychology is all about what goes on “under the hood”, preconsciously.

    I don’t think anyone denies that we experience ourselves as perceiving the world directly. What would be the evolutionary advantage of perceiving our representations as representations, rather than simply as the objects they represent?

    I cannot speak to the psychological argument, but if your second paragraph is asking whether philosophers disagree about transparency and about whether we have introspective access to (aspects of) our representations, then the answer is (drum roll) …. “yes”. Shocking, isn’t it? I mean, philosophers? disagreeing?

    The SEP article I linked has details, eg blurry vision, pain under anesthesia.

    Not to mention the issues with perception and the role therein of mental representations themselves: they don’t exist (the radical enactivists, maybe Dennett, adverbialists), they exist but represent action possibilities, not objects in the world (Clark), they exist but differ between hallucinations and vericidal perception (disjunctivists), they exist but don’t exhaust perception, namely they don’t account for qualia (Block). And that is just (some of) the physicalists.

  22. Kantian Naturalist: Based on what little I’ve read about this so far, a Marr-style explanation of visual processing involves sequentially building up, within the cognitive system, a model of the external visual stimulus, whereas a Gibson-style explanation involves taking the sensory stimulus as a source of information for the cognitive system to take advantage of. It’s a difference between accurately representing the environment and continuously engaging with the environment to maintain coordinated patterns of behavior.

    Yes, this is how I see it.

    Or, to put it in different terms, Marr’s theory seems to strongly suggest intelligent design, and seems to require something like a theist’s notion of truth (i.e. truth comes from some external source such as God). Gibson’s theory is what you get from thinking along evolutionary lines. Success is based on pragmatic considerations, and does not have any prerequisites for a notion of truth.

    It’s no surprise that AI folk prefer a Marr approach. AI proponents are intelligent design theorists, in that they see themselves as intelligent designers.

    My own preference for Gibson’s approach comes because I was trying to understand what kind of perceptual system could evolve. I only learned of Gibson’s work later.

  23. Neil Rickert,

    The embodied-embedded cognition people point to Marr’s style of explanation as an example of “Cartesian cognitive science”. Michael Wheeler and Mark Rowlands develop this claim in a lot of detail (Interestingly, however, Rowlands argues that Gibson-style explanations are compatible with giving representations a key role, whereas Tony Chemero disagrees. Citations for all on request.)

    Previously, BruceS and I agreed that enactivism goes too far in exaggerating what can be done with that research program. I now have a slightly better argument. (I ran it past two philosophers of cognitive science today and they seemed inclined to agree.)

    Two dynamical (non-linear) systems are coupled just in case the parameters of one system are variables in the other system. (Hence the two can be considered sub-systems in a single larger dynamical system.) In Andy Clark’s terms, such systems have “continuous reciprocal causation”: the two systems are causally affecting each other.
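That definition can be made concrete with a toy pair of systems in which each one’s decay parameter just is the other’s current state. The dynamics are invented purely for illustration, not taken from any cognitive-science model.

```python
# Toy illustration of the coupling just defined: the parameter of each
# system is a variable of the other. Here each quantity decays at a rate
# set by the other's current value -- dx/dt = -y*x and dy/dt = -x*y --
# integrated with a crude Euler step. Invented dynamics, for illustration.

def step_coupled(x, y, dt=0.01):
    dx = -y * x  # y serves as x's decay parameter
    dy = -x * y  # x serves as y's decay parameter
    return x + dt * dx, y + dt * dy

x, y = 2.0, 1.0
for _ in range(100):
    x, y = step_coupled(x, y)
# Neither trajectory can be computed in isolation: that is "continuous
# reciprocal causation" in miniature.
```

The two update rules cannot be run separately, which is the sense in which the pair forms a single larger dynamical system.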

    But if the systems display CRC, we have the following problem. Intelligent behavior, whatever else it may be, involves behavioral fluidity and flexibility whereby the same kind of stimulus may be yoked to different motor outputs and different sensory stimuli may be yoked to the same motor output — all depending on context, long-term goals, immediate needs, etc.

    In order for there to be many-to-one and one-to-many mappings of sensory stimuli and motor responses, there needs to be some functional modularity in the cognitive system — otherwise everything is too closely yoked together and the system can’t respond with intelligence and adaptivity. (Arguably such a system could not even learn.)

    So, if enactivism is right, then continuous reciprocal causation would predominate. And if there is CRC, then there’s no modularity. But modularity is required for creativity, problem-solving, intelligence, and perhaps even learning at all. So enactivism cannot explain intelligence, and in fact is incompatible with it.

    On this basis I conclude that while embodied-and-embedded approaches to cognition are a viable research program for non-Cartesian cognitive science, enactivism per se is not.

  24. Neil,

    Or, to put it in different terms, Marr’s theory seems to strongly suggest intelligent design, and seems to require something like a theist’s notion of truth (i.e. truth comes from some external source such as God).

    It’s been a long time since I read Marr’s book, but I don’t remember anything in it that suggested a need for “something like a theist’s notion of truth”. (If I had run across something like that, it would have gotten my attention!)

    Why do you think this?

  25. KN,

    Based on what little I’ve read about this so far, a Marr-style explanation of visual processing involves sequentially building up, within the cognitive system, a model of the external visual stimulus, whereas a Gibson-style explanation involves taking the sensory stimulus as a source of information for the cognitive system to take advantage of.

    I would say that both approaches “take the sensory stimulus as a source of information for the cognitive system to take advantage of”, no? That’s practically the definition of perception. From Wikipedia:

    Perception (from the Latin perceptio, percipio) is the organization, identification, and interpretation of sensory information in order to represent and understand the environment.

    KN:

    While classical AI architectures look to be good for accurate representations of their environments, neural networks look to be better for modeling a sensorimotor-environment coordinated system. And neural networks are (with various caveats) more biologically realistic than classical architectures. So it’s more likely that our brains do what neural networks do than what a classical AI architecture does.

    But we already know that humans are good at representing their environments cognitively, and they do it using brains based on neurons. Given that, why do you think that neural networks are problematic as a substrate for perceptual representations?

  26. Kantian Naturalist,

    Interesting. The argument seems reasonable, but I would have to think more about it. I’ve avoided a dynamic systems approach because dynamic systems seem too general to answer the kinds of questions that concern us.

    I actually started by trying to understand human learning, and how that could evolve. I took learning to mean something like improving one’s abilities. I considered natural learning (trees learning where to put their leaves and branches to get the most light would be one example), and it seemed clear how those would evolve. Then, to get closer to human knowledge, I looked at science as a learning system. And I quickly realized that I needed something better than induction to account for science.

    The most basic learning by humans would require input from the environment, so I looked at how perception might evolve.

    It did not look to me as if just starting with received stimuli could work. That looked as if it could give a whole lot of meaningless input. Some way was needed to be able to connect the input to what it was about. So I guess I was really studying (thinking about) the problem of intentionality in perception from very early.

    The basic principles emerged from my thinking. But when I tried to see how I could program that into an AI system, I ran into a problem. It was basically the problem of where the motivation or directionality comes from. I think it was an online discussion with Chris Malcolm (if I remember his name) that gave me an important hint. That’s when I realized that you could get enough direction (which, I suppose, is teleology) from the kind of homeostasis that we see in biological systems.

    In any case, the upshot of all of this is that I was looking at information from early on (from when I started to think about perception). So I needed some sort of representations, at least the low level representations required for information. And, it seemed to me, that Rodney Brooks would also need that in his own research, which he touted as not using representations.

  27. Bruce,

    …they exist but differ between hallucinations and vericidal perception

    Truth-killing perception? 🙂

  28. Bruce, check the ‘it’ that Lycan calls ‘representationalism’ and the ‘it’ that Tye calls ‘representationism’ and see if they’re the same. Then, in either case, see if both or either match what the Britannica author was talking about. I guarantee you’ll get at least two different views, probably three.

    The Britannica article looked like it was giving a definition of a sense-data theory, only it calls sense-data ‘representations’. It’s certainly not a view with which either Lycan or Tye sympathizes.

    As I said earlier, the label situation is in total disarray. I can tell you with assurance that that Britannica thing has almost nothing in common with what I (following Block) call ‘representationism.’

  29. keiths:

    I don’t think anyone denies that we experience ourselves as perceiving the world directly. What would be the evolutionary advantage of perceiving our representations as representations, rather than simply as the objects they represent?

    Bruce:

    I cannot speak to the psychological argument, but if your second paragraph is asking whether philosophers disagree about transparency and about whether we have introspective access to (aspects of) our representations…

    No, I’m just saying that I can’t see any reason for evolution to favor the development of such introspective access. Cognition is metabolically expensive, so it makes sense that our brains don’t have access to what our livers are doing. For the same reason, it makes sense that we can’t “see into” the preconscious activities of our perceptual systems.

  30. Neil,

    I’m interested in hearing why you think this:

    Or, to put it in different terms, Marr’s theory seems to strongly suggest intelligent design, and seems to require something like a theist’s notion of truth (i.e. truth comes from some external source such as God).

    I’ve never heard anyone else make that sort of statement about Marr’s theory, so I’m interested in hearing your reasons for doing so.

  31. BruceS: Which is roughly what the Bayesian, Predictive Coding view is trying to formalize, as I understand it. It has error between sensory input and prediction propagating up and predictions from representations (of pdfs) propagating down.

    Yes, that’s why I mentioned my article about that on page 1 🙂

  32. keiths: No, I’m just saying that I can’t see any reason for evolution to favor the development of such introspective access. Cognition is metabolically expensive, so it makes sense that our brains don’t have access to what our livers are doing. For the same reason, it makes sense that we can’t “see into” the preconscious activities of our perceptual systems.

    Well, we do have interoception and it’s quite important. Not only that, but we have the capacity to distinguish between internally and externally generated stimuli. When that goes wrong, we end up with hallucinations, and delusions of control. It’s also why when we make a saccade the world doesn’t appear to move, but if we manually move our eyeball, it does.

    I take your general point though. Meta-cognition is probably a late-evolving capacity 🙂

  33. keiths: No, I’m just saying that I can’t see any reason for evolution to favor the development of such introspective access.

    Some more relevant thoughts then:

    1. We don’t know how (not why) anything becomes conscious, so the question of why the endpoint of perception becomes conscious while middle points in the process generally don’t is a secondary one, which we probably could not answer fully without an answer to the overall question.

    2. In fact, some turn the question around and say that conscious access is a meta-representation, so by definition consciousness means we do have meta-access to our representations. Mostly a philosopher’s position, not widely held by scientists, as far as I can tell.

    3. Perhaps access to representation depends on the implementation. In the case of pain, the idea is that there are two types of representation: the representation of injury and the representation of the emotional impact (e.g. urgency or depression) associated with pain. With anesthesia, the theory is that these come apart: we have the pain but not the emotion about it. So in that sense we have access to the fact that there is a representation, or two of them to be precise.

    4. Possibly the issue is a conceptual confusion: that is, distinguishing between seeing x blurrily (as when I put on my glasses and the blurriness goes away) versus seeing blurry x (as when I put them on and I realize the picture I was looking at is still blurry — the blurriness is in the picture, not “me”). If we can distinguish the former from the latter, does that mean seeing x blurrily is giving us access to the representation itself? This seems to be more about how to understand the difference between the two cases (from the whole agent’s viewpoint, of course, for the philosophical arguments.)

  34. walto:
    Bruce, check the ‘it’ that Lycan calls ‘representationalism’ and the ‘it’ that Tye calls ‘representationism’ and see if they’re the same.

    As far as I can tell, their positions are the same — strong representationalists — modulo the small differences found between any two philosophers even from the same basic position. (I prefer the “al”, possibly because I am Canadian and so in less of a hurry than Americans).

    Do you think the words have different meaning?

    And a meta-question: are you using this exchange to practice your Socrates imitations (as in his dialogue bit)?

    PS: I agree the EB article is junk and more useful for assessing the impact of Wiki than for its actual content.

  35. Kantian Naturalist:

    An interesting approach to justifying representation, KN. Some comments:

    1. I’m used to approaches that base the advantage of representation on control theory. If perception is first about movement, then fluid and accurate movement requires feedback: “Am I getting to where I wanted to go?” And feedback requires a representation for comparison of what I am doing versus what I want to do (using “I” and “want” very loosely, of course).

    So I’d interpret your decoupling idea as providing for the mechanism for that assessment to take place and the movement to be adjusted. Is that the sort of thing you had in mind?
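    A toy sketch of that control-theory picture (entirely my own illustration, not anything from KN’s post): the stored goal plays the role of the representation, and the feedback loop compares it against what the movement is actually achieving.

    ```python
    def move_to(goal, position=0.0, gain=0.5, steps=25):
        """Minimal feedback loop: compare current position against a
        stored goal (the 'representation') and correct a fraction of
        the remaining gap on each cycle."""
        for _ in range(steps):
            error = goal - position      # "Am I getting to where I wanted to go?"
            position += gain * error     # adjust the movement accordingly
        return position

    print(move_to(10.0))   # converges on the goal
    ```

    The point of the sketch is only that the comparison step is impossible without something stored to compare against; that stored something is the representation, however minimal.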

    2. If control theory and representation interest you, Chris Eliasmith, whose background includes engineering of control systems, has an interesting paper on how he incorporates it in his models and how that corrects deficiencies in each of the GOFAI, vanilla connectionism, and DST approaches: Dynamics, control, and cognition (pdf)

    3. How do you get the accuracy conditions associated with representation from your ideas? Control theory would do it through malfunctioning: it is a mis-representation if it is not performing its role in feedback and control. Basically Millikan’s approach.

    4. Technical question. In describing “coupled” systems, you say the parameters of one are coupled to the variables of the other. I’m more used to coupling between variables.

    For example, for two springs, variable coupling would mean they are linked so that the length of one affects the length of the other. Parameter coupling would mean the length of one affects the other’s elasticity (its spring constant; e.g., by one spring changing a heating element being applied to the other).

    Coupled neural models, as I understand them, are more about the output variable of one changing the input to another, not about it changing the existing constants associated with the electro-chemical properties of the coupled neuron.

    Any particular reason to say coupling involves parameters?
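    To make the two spring cases concrete, here is a toy simulation (made-up constants, purely my own illustration, not any model from Clark or Wheeler): in the “variable” case spring 1’s displacement enters spring 2’s equation of motion directly as a force term; in the “parameter” case it modulates spring 2’s stiffness instead.

    ```python
    def simulate(coupling, steps=2000, dt=0.01):
        """Two damped unit-mass springs; `coupling` selects how spring 1
        drives spring 2. Returns spring 2's final displacement."""
        x1, v1 = 1.0, 0.0    # spring 1 displacement and velocity
        x2, v2 = 0.5, 0.0    # spring 2 displacement and velocity
        k1, k2 = 4.0, 4.0    # spring constants (the parameters)
        damping = 0.5
        for _ in range(steps):
            f1 = -k1 * x1
            if coupling == "variable":
                # Variable coupling: x1 appears as a force on spring 2.
                f2 = -k2 * x2 + 0.8 * x1
            else:
                # Parameter coupling: x1 modulates spring 2's stiffness
                # (as if controlling a heating element), not its force.
                f2 = -(k2 + 0.8 * x1) * x2
            v1 += dt * (f1 - damping * v1)
            v2 += dt * (f2 - damping * v2)
            x1 += dt * v1
            x2 += dt * v2
        return x2

    print(simulate("variable"), simulate("parameter"))
    ```

    With damping, both versions settle toward rest, but along the way the “parameter” version has a time-varying natural frequency, which is the structural difference the two readings of “coupling” pick out.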

  36. BruceS,

    I agree that Lycan and Tye have similar views but use different labels for them. The problem is that one of those labels (the one with the AL) is just as commonly used for an almost diametrically opposed view. I had not seen ‘representationism’ used that latter way until I saw that Britannica article you linked. My heart sank.

    Someone could now coin ‘repretism’ for the Hall/Dretske/Lycan anti-sensum, anti-qualia position, but I’m afraid Britannica would simply add ‘repretism’ to its article and claim that all THREE words were synonyms for the view that we perceive representations of the world.

    I guess it’s just hopeless.

  37. walto:

    I guess it’s just hopeless.

    So I should not bring up indirect versus direct representationalism then?

  38. keiths:

    KN:

    But we already know that humans are good at representing their environments cognitively, and they do it using brains based on neurons. Given that, why do you think that neural networks are problematic as a substrate for perceptual representations?

    Here’s how I understand the point KN is making:

    In my limited understanding of Marr versus Gibson, I find it helpful to position their views on representation using the following dimensions:

    1. Detailed versus coarse: is it (the representation) very detailed or is it rough and limited

    2. Active versus Passive: is it updated often or is it built slowly with fewer updates

    3. Abstract versus modal: does it abstract away from the perception, or is it directly related to immediate use of the perception (e.g. in moving)

    4. Bottom-up only versus bottom-up plus top-down

    I am not saying these dimensions are orthogonal.

    So I’d see extreme Marrism as detailed, passive, abstract, bottom up only. Whereas extreme Gibsonianism would be coarse (possibly non-existent), active, modal, N/A BU vs BU/TD. These are meant as caricatures of the positions for comparison, of course.

    Now I understand KN’s position as this: the correct answer will be closer to Gibson’s end than Marr’s end for these dimensions. Something in between the two, but closer to Gibson. So it’s not that there are no representations, but that they differ as described by these dimensions.

  39. Elizabeth: Yes, That’s why I mentioned my article about that on page 1

    I missed that too.

    But now that I’ve looked at it, let me comment.

    It isn’t just time that is being recalibrated. Everything is being recalibrated.

    I buy a new pair of shoes. Walking is a tad awkward for a few days, until I adjust. And the adjustment to the new shoes is surely some form of recalibration.

    I buy a new automobile. I have to adjust to that, which presumably means recalibration of how I judge my position on the road.

    The distance between a child’s eyes lengthens as he grows up (at least for a few years). So distance judgment (3D stereo vision) has to be constantly recalibrated to adjust for that growth. And, as the child grows or as an adult puts on too much weight, the weight distribution changes. So there has to be a recalibration of the feedbacks that we use for walking and maintaining balance.

    So I’ve thought about what that recalibration would look like in neural terms. And it seems to me that it would look exactly like Hebbian learning. So it seems likely, at least to me, that Hebbian learning is calibration and recalibration. And, I’ll add that there has to be short term recalibration (when I switch from one pair of shoes to another, or when I switch from my car to my wife’s car), and long term recalibration (when I replace a pair of shoes or replace a car, or replace my glasses).

    When I deny that the brain is doing computation, I am not leaving the brain with nothing to do. Rather, I see the brain as a system of measuring instruments that are carefully calibrated and are constantly being recalibrated. Here, “measuring instrument” is going to be something like what Gibson called a “transducer”.

  40. BruceS: Possibly the issue is a conceptual confusion: that is, distinguishing between seeing x blurrily (as when I put on my glasses and the blurriness goes away) versus seeing blurry x (as when I put them on and I realize the picture I was looking at is blurry still — the blurriness is in the picture, not “me”).

    Does an “x” really look blurry without your glasses?

    It doesn’t for me, though I sometimes might describe it that way for ease of description.

    Without my glasses, the “x” might be a dark mark on the paper, and I cannot work out what it is. But it never looks blurry. It never looks the same as a blurry image of an “x”.

    If I look out the window at the tree in our back yard, then even without glasses, the tree never looks blurry. It just lacks detail. The edges of the leaves look sharp, but they do not look serrate (though they do look serrate with glasses). Without glasses, I cannot see rib lines on the leaves; the leaves just look smooth. There are no blurry rib lines. With glasses, the rib lines are quite clear.

    This is part of why I don’t think I am “looking at” the retinal image. For the retinal image would be blurry. And I’m quite capable of seeing blurriness in images.

    This is where I see Gibson as having gone wrong. He talks of “picking up” information from the optic array. I don’t think we are picking up information, and I don’t think there is any information to pick up. Rather, we are constructing information about the environment. We are not picking up what is already there. “Information” is not a natural kind. “Information” does not exist in the world except from how it is created by agents. Constructing information is a creative act by an agent.

    As for consciousness — my view is that this is simply our experience of the information that our perceptual systems are constructing.

  41. Funny that this issue about blurriness–and related things like the number of speckles on hens–has been written about for-freaking-ever. One of my thesis advisors, Rod “the God” Chisholm, wrote a paper on it in 1942:

    http://philpapers.org/rec/CHITPO-7

  42. keiths: No, I’m just saying that I can’t see any reason for evolution to favor the development of such introspective access. Cognition is metabolically expensive, so it makes sense that our brains don’t have access to what our livers are doing. For the same reason, it makes sense that we can’t “see into” the preconscious activities of our perceptual systems.

    Yes, I think that this is 100% right. It entails that any appeal to “intuition” or “introspection” will be reporting on the effects of the cognitive mechanisms, not on the cognitive mechanisms themselves.

    It’s like the blind-spot on the retina — you can’t see it because if you could, the retina wouldn’t work as a retina. Likewise, our brains are blind to their own operations — and they have to be blind to themselves in order to function as more or less reliable regulators of world-oriented and world-involving behavior.

    The implications for phenomenology are considerable. Drew Leder (in The Absent Body) argues that one of the deep allures of Cartesianism lies in the fact that, when we engage in phenomenological description of “abstract” thought (e.g. the phenomenology of mathematics), the lived body is not in view. (Contrast with the phenomenology of running!) And the brain is always absent from phenomenology, because the brain is what I am using when I am doing phenomenology!

    BruceS: An interesting approach to justifying representation, KN. Some comments:

    I’d interpret your decoupling idea as providing for the mechanism for that assessment to take place and the movement to be adjusted. Is that the sort of thing you had in mind?

    Yes, it is.

    2. If control theory and representation interest you, Chris Eliasmith, whose background includes engineering of control systems, has an interesting paper on how he incorporates it in his models and how that corrects deficiencies in each of the GOFAI, vanilla connectionism, and DST approaches. Dynamics, control, and cognition (pdf)

    Thank you! A colleague of mine mentioned Eliasmith to me just yesterday! (Fun fact I learned: before he got married, his name was either “Elias” or “Smith”, and his wife’s was the other. When they got married they combined their names, so “Eliasmith”.)

    3. How do you get the accuracy conditions associated with representation from your ideas? Control theory would do it through malfunctioning: it is a mis-representation if it is not performing its role in feedback and control. Basically Millikan’s approach.

    I think that Millikan is certainly on the right track here but I still haven’t worked through her account and contrasted it with similar proposals, so I can’t say for certain that I’m on Team Millikan.

    4. Technical question. In describing “coupled” systems, you say the parameters of one are coupled to the variables of the other. I’m more used to coupling between variables.

    Yes, I put it that way because that’s how Clark and Wheeler describe coupled systems. I can post some text if you want. It could be that there are other kinds of dynamical coupling besides the kind that they focus on. Please feel free to elaborate at greater length; I just might learn something!

  44. Neil Rickert: So I’ve thought about what that recalibration would look like in neural terms. And it seems to me that it would look exactly like Hebbian learning. So it seems likely, at least to me, that Hebbian learning is calibration and recalibration. And, I’ll add that there has to be short term recalibration (when I switch from one pair of shoes to another, or when I switch from my car to my wife’s car), and long term recalibration (when I replace a pair of shoes or replace a car, or replace my glasses).

    It probably happens faster than Hebbian learning (though I agree with the principle).

    Experiments with prism lenses are alarming to do. In part one you do a reaching task. Then you put on the lenses. Now when you reach for the target you miss. So you have to recalibrate – it only takes maybe half a minute. Then you can do it fine. The scary part is part three – you take the lenses off and do it again. Now you keep missing the target! But you aren’t wearing the lenses! The experimenters have damaged your brain!

    But you get it back again within a few seconds.
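    That adaptation-and-aftereffect pattern falls out of a very simple error-driven recalibration loop. Here is a toy sketch (purely my own, with made-up numbers: target at 0, a 10-unit prism shift, a learned aiming bias):

    ```python
    def reach_block(shift, bias, trials=20, lr=0.4):
        """One block of reaching trials. `shift` is the prism displacement
        of the perceived target (true target at 0); `bias` is the learned
        aiming correction. Returns the first miss and the updated bias."""
        first_miss = None
        for _ in range(trials):
            landing = shift - bias       # where the hand actually lands
            if first_miss is None:
                first_miss = landing
            bias += lr * landing         # recalibrate toward the observed error
        return first_miss, bias

    bias = 0.0
    miss_on, bias = reach_block(10.0, bias)   # prisms on: big initial miss
    miss_off, bias = reach_block(0.0, bias)   # prisms off: aftereffect, miss the other way
    ```

    The first block’s initial miss equals the prism shift; once adapted, removing the prisms produces a miss of roughly the same size in the opposite direction, which then washes out over the second block – the “part three” effect described above.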

  45. Kantian Naturalist: Yes, it is.

    I think that Millikan is certainly on the right track here but I still haven’t worked through her account and contrasted it with similar proposals, so I can’t say for certain that I’m on Team Millikan.

    If you are in the position of needing a primer on her, here are two introductory papers (both pdfs) that helped me. I first tried some of her papers, but they seemed to assume you’d read her original book, which was too big a time investment for me.

    Yes, I put it that way because that’s how Clark and Wheeler describe coupled systems. I can post some text if you want. It could be that there are other kinds of dynamical coupling besides the kind that they focus on. Please feel free to elaborate at greater length; I just might learn something!

    Both kinds work. I was just wondering about the specific reason you had for selecting one.

    I have only very rudimentary knowledge of DST, so read the following with that in mind, but I see the difference between the two possibilities this way:

    – variable to variable could move the target system from one attractor state to another

    – variable to parameter could create a whole new set of attractors in the target system.

    So if it is a neural population we are talking about, then
    – variable to variable might activate an existing concept, i.e., a neural network firing in some previously learned pattern.

    – variable to parameter would be the DST systems-level view of learning new concepts (new attractors), which would correspondingly be implemented in the network by (e.g.) Hebbian learning.

    As I say, be aware that’s off the top of my head based on a shaky knowledge of DST.
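    The distinction can be made concrete with a standard pitchfork normal form, dx/dt = a·x − x³ − c (my choice of toy system, not anything specific from the DST literature): a “variable” input c tilts the existing landscape, biasing which attractor the state falls into, while a “parameter” input a changes how many attractors there are.

    ```python
    import numpy as np

    def stable_points(a, c):
        """Stable fixed points of dx/dt = a*x - x**3 - c, found as
        downward zero-crossings of the flow on a fine grid."""
        x = np.linspace(-3.0, 3.0, 60001)
        f = a * x - x**3 - c
        pts = []
        for i in range(len(x) - 1):
            if f[i] > 0 and f[i + 1] <= 0:   # flow goes + to -: stable point
                pts.append(round(float(x[i]), 2))
        return pts

    print(stable_points(-1.0, 0.0))  # a < 0: one attractor
    print(stable_points(1.0, 0.0))   # a > 0: two attractors (landscape reshaped)
    print(stable_points(1.0, 0.2))   # small c: still two attractors, just tilted
    ```

    The contrast isn’t absolute: with a fixed, a large enough c eventually annihilates one attractor too, so the variable/parameter distinction is really about the typical small-signal regime.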

    BTW, I see from Leiter’s blog that Andy Clark got elected to the British Academy.
