Bad Dogs and Defective Triangles

Is a dog with three legs a bad dog? Is a triangle with two sides still a triangle or is it a defective triangle? Perhaps if we just expand the definition of triangle a bit we can have square triangles.

There is a point of view that holds that to define something we must say something definitive about it and that to say that we are expanding or changing a definition makes no sense if we don’t know what it is that is being changed.

It is of the essence or nature of a Euclidean triangle to be a closed plane figure with three straight sides, and anything with this essence must have a number of properties, such as having angles that add up to 180 degrees. These are objective facts that we discover rather than invent; certainly it is notoriously difficult to make the opposite opinion at all plausible. Nevertheless, there are obviously triangles that fail to live up to this definition. A triangle drawn hastily on the cracked plastic sheet of a moving bus might fail to be completely closed or to have perfectly straight sides, and thus its angles will add up to something other than 180 degrees. Even a triangle drawn slowly and carefully on paper with an art pen and a ruler will have subtle flaws. Still, the latter will far more closely approximate the essence of triangularity than the former will. It will accordingly be a better triangle than the former. Indeed, we would naturally describe the latter as a good triangle and the former as a bad one. This judgment would be completely objective; it would be silly to suggest that we were merely expressing a personal preference for straightness or for angles that add up to 180 degrees. The judgment simply follows from the objective facts about the nature of triangles. This example illustrates how an entity can count as an instance of a certain type of thing even if it fails perfectly to instantiate the essence of that type of thing; a badly drawn triangle is not a non-triangle, but rather a defective triangle. And it illustrates at the same time how there can be a completely objective, factual standard of goodness and badness, better and worse. To be sure, the standard in question in this example is not a moral standard. But from the A-T point of view, it illustrates a general notion of goodness of which moral goodness is a special case.
And while it might be suggested that even this general standard of goodness will lack a foundation if one denies, as nominalists and other anti-realists do, the objectivity of geometry and mathematics in general, it is (as I have said) notoriously very difficult to defend such a denial.

– Edward Feser, “Being, the Good, and the Guise of the Good”

This raises a number of interesting questions, by no means limited to the following:

What is the fact/value distinction?

Whether values can be objective.

The relationship between objective goodness and moral goodness.

And of course, whether a three-legged dog is still a dog.

Meanwhile:

One Leg Too Few

469 thoughts on “Bad Dogs and Defective Triangles”

  1. Elizabeth: I also insist that perception involves models.

    I’m being criticized for denying models. I don’t know that I ever have.

    Consider:
    (a) there are models;
    (b) there aren’t models;

    I’m not at all sure what is supposed to be the difference between those.

    The representationalists argue for models. But they also argue that perception is an entirely mechanistic computation. And they argue that there is no such thing as intentionality.

    The puzzle, for me, is what could it even mean to be using a model without intentionality?

    Oh, and a word of thanks for your distinctions between peripheral and foveal vision. I found that useful. I avoided commenting at the time, because I wanted to avoid getting into another long series of completely unproductive miscommunications with a certain commenter.

  2. keiths:
    Lizzie:

    Bruce:

    All of which is further bad news for Neil and his fellow direct perceptionists, since they insist that perception does not involve models.

    What were you linking to there?

  3. Neil Rickert: I’m being criticized for denying models. I don’t know that I ever have.

    Consider:
    (a) there are models;
    (b) there aren’t models;

    I’m not at all sure what is supposed to be the difference between those.

    The representationalists argue for models. But they also argue that perception is an entirely mechanistic computation. And they argue that there is no such thing as intentionality.

    The puzzle, for me, is what could it even mean to be using a model without intentionality?

    Oh, and a word of thanks for your distinctions between peripheral and foveal vision. I found that useful. I avoided commenting at the time, because I wanted to avoid getting into another long series of completely unproductive miscommunications with a certain commenter.

    It’s amazing to me how little I understand the expressions that not only you, but both your adversaries and your allies, have been using in this discussion, although I have studied at least some questions surrounding direct and indirect perception for most of my adult life. I take it that my readings in philosophy are mostly on a plane that is entirely irrelevant to these questions of (I guess?) cognitive psychology or psychology of perception or maybe physiology of perception.

    I really have very little idea of what any of you means by either “direct” or “indirect.” But I suppose that just indicates my own ignorance of these other fields.

  4. walto:

    I really have very little idea of what any of you means by either “direct” or “indirect.” But I suppose that just indicates my own ignorance of these other fields.

    Wow, that point about using language differently from KeithS was deja vu all over again for me! Can it be only about a year ago that we three were spatting about that point?

    Keith quoted the message from me and it looks like that broke the link in my post, which works when I click on it in the original post. Here it is again.

    Not that it proves much beyond I can Google (actually DuckDuckGo) “saccades predictive coding”. And also look for any excuse to make bad jokes about referendums in Scotland (but please, not Quebec, please not again…)

  5. keiths:

    it is obvious that our ability to evaluate what-if scenarios depends on computation

    It matters how you define computation. There is a received view in philosophy from Fodor that says “no computation without representation”. That does mix in representation, and by implication based on its author, representation using abstract symbols.

    Piccinini does a better job of defining physical computation, I think. He eliminates representation from it.

    A physical system is a computing system just in case it is a mechanism one of whose functions is to manipulate vehicles based solely on differences between different portions of the vehicles according to a rule defined over the vehicles.

    He has a book coming out in a month or so, and the preceding is from the blurb for it.

  6. BruceS: Wow, that point about using language differently from KeithS was deja vu all over again for me! Can it be only about a year ago that we three were spatting about that point?

    Keith quoted the message from me and it looks like that broke the link in my post, which works when I click on it in the original post. Here it is again.

    Not that it proves much beyond I can Google (actually DuckDuckGo) “saccades predictive coding”. And also look for any excuse to make bad jokes about referendums in Scotland (but please, not Quebec, please not again…)

    Thanks for that link, Bruce. And you’re right: we’ve been through all that directness biz before.

    The music goes round and round oooooh and it comes out here.

  7. walto: It’s amazing to me how little I understand the expressions that not only you, but both your adversaries and your allies, have been using in this discussion, although I have studied at least some questions surrounding direct and indirect perception for most of my adult life.

    I’m with you on this. But one of the things I like most about TSZ is that it gives philosophically-inclined scientists and scientifically-inclined philosophers a space for (mostly constructive) dialogue. Let’s see what we can do.

    In the days of Russell, Ayer, and the philosophy dynasty of Sellars & Son, there were some who thought that we directly perceive only “sense-data,” and that our awareness of physical objects is an inference to the best explanation (though an unconscious one) as to what is most likely causing our awareness of sense-data. (There is a great deal of British empiricism lurking in the wings here, especially Berkeley.) Only sense-data are given, it was thought, and without something being given, there can be no reliable foundation for anything else we know.

    This view can go by various names — I was taught it as “representationalism” or “representational realism” — but the key point to it is this: sensations function as epistemic intermediaries between minds and objects. Since we are talking here in an epistemological key, we are talking at the personal or agential level of description and explanation, which means that our method is that of reflection and analysis. And the claim is epistemological — it is about the warrant for our belief in physical objects.

    It is a subtle but nevertheless genuine contrast between that sort of view and the alternative, championed by Roy Wood Sellars, Wilfrid Sellars, and Donald Davidson, that sensations are merely causal intermediaries between minds and objects. Thus, sensings of black, warm, and sweet are not my evidence for believing that I am sipping a cup of coffee, but rather the sensings of black, warm, and sweet are how I am causally informed as to the coffee. Post-Dennett, it is now quite clear that I am causally informed about my environment through a great many subpersonal cognitive mechanisms.

    All this is to say that direct realism at the agential level is compatible with all sorts of cognitive models, maps, and even representations at the subagential level. (Henceforth I will use agential/subagential in lieu of Dennett’s personal/subpersonal in order to capture the thought that nonhuman animals are cognitive agents even though they are not persons. This allows us to capture Neil’s Gibsonian insistence that direct perception is true when our level of analysis is the whole animal in its environmental milieu.)

    Walto, is that consistent with your understanding of the terms “direct” and “indirect”? Are the rest of us inclined to use these terms differently?

  8. Kantian Naturalist,

    Yes, there’s a lot of talking past one another on this issue.

    But I think you can say the same about evolution, where mutations are said to be copying errors. But that seems to presuppose a purpose of copying exactly, else there could not be errors. Maybe it should be that there’s a purpose of creating the next generation, using the current genome only as a guide.

    We are stuck with the natural language that we have, and it is full of a vocabulary of intention which sometimes doesn’t fit the way that the words are used. Possibly some of the critics of evolution are offended by the idea that they are the result of an accumulation of errors.

  9. Kantian Naturalist:

    Walto, is that consistent with your understanding of the terms “direct” and “indirect”? Are the rest of us inclined to use these terms differently?

    Thanks, KN. I’m not sure I’d use precisely the same words in precisely the same way, but I at least recognize the ballpark.

    My main reluctance to endorse your whole post centers on your (Sellarsian) “sensings,” which I think you take to be among the causes of perceptual experiences. I’m slightly to the no-vote side of agnostic on those.

    But yes, direct realism is about whether there are epistemic intermediaries. As I think I’ve said here before, nobody but occasionalists and parallelists have thought to suggest there are no causal intermediaries between chairs and perceivings of chairs (or between mirror images of chairs and perceivings of mirror images of chairs). Whether any of these causal intermediaries may be “sensings” as you have used that term, I dunno.

  10. Granted, but I’d still like to know if you, KeithS and Elizabeth are talking about agential or subagential descriptions, and thus whether the mediation or immediacy is epistemic or causal. (It would be problematic if one were to maintain that perception is immediate only if there are no causal intermediaries between mind and object.)

  11. walto, to Neil:

    It’s amazing to me how little I understand the expressions that not only you, but both your adversaries and your allies, have been using in this discussion, although I have studied at least some questions surrounding direct and indirect perception for most of my adult life. I take it that my readings in philosophy are mostly on a plane that is entirely irrelevant to these questions of (I guess?) cognitive psychology or psychology of perception or maybe physiology of perception.

    I really have very little idea of what any of you means by either “direct” or “indirect.” But I suppose that just indicates my own ignorance of these other fields.

    I learned the terms “direct perception” and “indirect perception” from my readings in perceptual psychology, where the usage matches this description from Direct Perception by Michaels and Carello:

    James Gibson and those who follow his approach adopt an ecological stance: they believe that perceiving is a process in an animal-environment system, not in an animal. Proponents of the ecological view argue that perception is, quite simply, the detection of information. This approach is labeled direct because a perceiver is said to perceive its environment. Knowledge of the world is thought to be unaided by inference, memories, or representations. Conversely, a second family of theories conceives of perception as mediated — or, to contrast it with Gibson’s theory, indirect — and is so called because perception is thought to involve the intervention of memories and representations.

  12. Thanks, that’s helpful. As I’ve said to Neil, both here and elsewhere, I’ve never been able to understand Gibson very well. I always intend to devote more time to him, but….

    I think what I’m unclear (unclearest?) about in the Michaels and Carello quote regards what they mean by “involve the intervention.” Those could be causal, or even some kind of “but for” thing. As I understand the term, perception is indirect if and only if to perceive some external object X we have to PERCEIVE something else. That doesn’t seem to preclude “interventions” of memories or representations between me and X. It just can’t be necessary for me to perceive them.

    As I’ve said before, the philosophical use is, in a sense, entirely uninteresting and scientifically void. It’s basically about what perception means–and nothing else.

  13. I just want to add that I admire the philosophers like KN who want to get a good handle on the scientific side here. It’s important for decent philosophy not to say anything that contradicts current scientific findings, and, at least if one tries to say more substantive things than I do, that’s a danger if one doesn’t learn some of the science.

  14. keiths,

    That quote from Michaels and Carello seems to conflate the distinction between agential and subagential description, though. That there’s no epistemic intermediary at the agential level doesn’t entail that there’s no causal intermediary at the subagential level.

  15. KN,

    As walto noted above, no one except perhaps for parallelists and occasionalists denies the existence of causal intermediates in perception.

    The debate is over the nature of the causal intermediates. More on this in my upcoming reply to walto.

  16. keiths,

    And presumably whether it makes good explanatory sense to call any of those causal intermediaries “representations”?

  17. KN,

    And presumably whether it makes good explanatory sense to call any of those causal intermediaries “representations”?

    Yes. Or “inferences”, or “models”.

    I like to use the motion illusions as examples because they clearly show an inference being made. There is no motion in the stimulus, yet we perceive motion. Where does it come from if not from an inference? We certainly aren’t directly perceiving the motion, because there is no actual motion available to be perceived.

    I keep asking Neil to explain the motion illusions in terms of direct perception, but he never does.

  18. keiths: I keep asking Neil to explain the motion illusions in terms of direct perception, but he never does.

    I keep wondering what sort of misconception keiths has, that he supposes that’s a meaningful question.

  19. keiths,

    I agree that hallucinations are a problem for direct realism. It seems likely to me that hallucinations indicate that our cognitive models, which are usually quite good at orienting us towards the affordances in our environments, can nevertheless be tricked under highly specific conditions. That thought right there might be sufficient to put me in the critical realist camp when it comes to the metaphysics of perception. I’m still not entirely sure what the state of play is between direct realists and critical realists in 2015.

    As for how we describe the causal intermediaries . . .

    “Models” I’m perfectly happy with, though we’d want to be a bit careful in distinguishing between models as what scientists build to explain observations and models as what brains do to navigate environments. (There is clearly some interesting relationship here, though — I don’t see how we could generate testable explanations in scientific practice if brains weren’t already constructing maps or models of their environments.)

    I’m ok with “representations”, though there I’d want to insist on some subtle differences between the account of representations in “Cartesian cognitive science” (both cognitivist or Fodorian theories and connectionist or Churchlandian theories) and the account of representations in “non-Cartesian cognitive science” (embodied-embedded cognitive science, which may or may not also be enactivist). Wheeler, Rowlands, and Chemero are superb on what’s at stake here.

    I’m deeply unhappy with “inferences,” because I want to keep the notion of inference at the level of agential description and explanation — inferences are what animals (and humans) do, not what their (and our) brains do. At this point my anxieties about falling afoul of the metonymic fallacy kick in with full force. But I’m open to persuasion if it can be shown that I’m being unreasonably conservative in my choice of language here.

    I do think that animal and human inferences involve building “offline” multimodal and/or amodal models of both physical and intentional domains, and that those models are almost certainly exapted from the cognitive models that constrain sensory processing and coordinate sensory information with behaviors.

  20. Neil Rickert: I keep wondering what sort of misconception does keiths have, that he supposes that’s a meaningful question.

    You don’t think it’s possible to have a direct perception of an illusion? You probably don’t believe in unicorns either.

    😉

  21. Kantian Naturalist: I agree that hallucinations are a problem for direct realism.

    I’m not seeing that as a problem.

    It seems likely to me that hallucinations indicate that our cognitive models, which are usually quite good at orienting us towards the affordances in our environments, can nevertheless be tricked under highly specific conditions.

    Sure. But why is that any more surprising than that an undamped piano string can reverberate even when no piano key has been used?

    I don’t see how we could generate testable explanations in scientific practice if brains weren’t already constructing maps or models of their environments.

    I think of maps as very different from models.

    I agree with your view of representations, with the distinction between the Fodor idea of representation, and the much simpler idea of information being represented in physical signals. And I agree with you on inference.

  22. Neil,

    Indirect perceptionists can explain the motion illusions. How do you explain them?

  23. Mung: You don’t think it’s possible to have a direct perception of an illusion?

    That’s hard to say without getting into pointless arguments about semantics.

  24. KN,

    I’m deeply unhappy with “inferences,” because I want to keep the notion of inference at the level of agential description and explanation — inferences are what animals (and humans) do, not what their (and our) brains do. At this point my anxieties about falling afoul of the metonymic fallacy kick in with full force. But I’m open to persuasion if it can be shown that I’m being unreasonably conservative in my choice of language here.

    I don’t know if this will assuage your anxieties, but the idea of unconscious inferences in the visual system goes all the way back to Helmholtz in 1867.

  25. I think there’s a similar issue with “inference.” What has to be the case for somebody to be said to “make an inference”? E.g., do they have to believe some premises and then conclude something from them?

    IMO, those kinds of (boring) “semantic” questions will, in the end, determine whether it’s correct to say that perception is inferential–whether or not it’s direct in the sense in which I use “direct.”

  26. KN,

    It seems likely to me that hallucinations indicate that our cognitive models, which are usually quite good at orienting us towards the affordances in our environments, can nevertheless be tricked under highly specific conditions. That thought right there might be sufficient to put me in the critical realist camp when it comes to the metaphysics of perception.

    Be careful to distinguish cognitive models from perceptual models. For instance, sufferers of Charles Bonnet syndrome experience vivid hallucinations, but they know that they’re hallucinating. The problem is perceptual, not cognitive.

    It’s similar with optical illusions. We know they’re illusions, but we can’t will ourselves out of seeing them.

  27. I think the direct realist has to be comfortable saying that in the case of (total) hallucinations, nothing is perceived.

  28. Kantian Naturalist: How do you understand that difference?

    A model is a representation of some kind. A map is more of a guide. So a map tends to be iconic, to exaggerate the details that are most important for guidance, and to omit details that are not important.

  29. walto,

    I think what I’m unclear (unclearest?) about in the Michaels and Carello quote regards what they mean by “involve the intervention.” Those could be causal, or even some kind of “but for” thing.

    It’s probably best understood in terms of the distinction between “bottom-up” and “top-down” processing. Gibson saw perception as a bottom-up process in which the information flow was one-way, with sensory information entering at the “bottom” and perceptions popping out at the “top”.

    Indirect perceptionists hold that information also flows in the opposite direction. Example: once you’ve “seen” the dog in this classic image, you immediately “re-see” it when the image is presented to you again. (Even years later, as I can attest.)

    The memory clearly influences the perception, but this should not happen according to direct perceptionists.

  30. Reposting from the Bad Materialism thread:

    walto,

    I don’t remember the example you are discussing, but if you are talking about apparently perceiving motion in some object that isn’t actually moving (with respect to you), why call it perception rather than illusion? Direct perception is generally a theory of what happens in veridical perceptual experiences.

    I call it perception because the inference is usually veridical in “real life”. For example, suppose you are looking at a tall fence, behind which there is a sunlit background. The fence slats are closely spaced but not overlapping, so you can see a sliver of light between each pair.

    Now suppose someone is walking behind the fence and parallel to it. You’ll see the slivers darken and light up again in sequence, and your visual system will perceive motion. It’s a veridical perception, because a person really is moving on the other side of the fence.

    Why bring up the illusions? Well, a proponent of direct perception might try to argue that in the fence example, the visual system isn’t inferring motion from the temporal variation in brightness of the slivers, but rather by detecting motion within the slivers. By setting up a motion illusion in which artificial “slivers” on a screen darken and lighten in sequence, we can show that motion is inferred even when it is lacking in the stimulus.

    Ditto for the red ball illusions.

    In both cases, the visual system is “betting” on the persistence of objects. In the case of the fence, it’s far more likely that something is moving behind the fence than that objects are poofing into and out of existence in just the right sequential pattern. In the red ball case, it’s far more likely that the ball moved from A to B than that a ball poofed out of existence at point A while an identical ball poofed into existence at point B.

    The inference can be wrong in rare cases, but it’s usually right, so selection favors it.
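[Editor's note: the sliver inference described above can be caricatured in a few lines of code. This is a toy sketch of my own, not anyone's actual model of vision, and the frames and threshold are invented for illustration; it only shows how a consistent left-to-right sequence of darkenings can be read as motion even though no individual frame contains any motion.]

```python
# Toy "motion inference" from sequential brightness changes, in the spirit
# of the fence example: each frame gives the brightness of each sliver, and
# motion is inferred purely from which sliver darkens next.

def infer_motion(frames, threshold=0.5):
    """Report inferred motion direction from per-sliver brightness over time.

    For each frame-to-frame transition, find which sliver darkened. A
    consistent rightward (or leftward) progression of darkenings is read
    as motion, even though no single frame contains any.
    """
    darkened = []
    for prev, curr in zip(frames, frames[1:]):
        drops = [i for i, (p, c) in enumerate(zip(prev, curr)) if p - c > threshold]
        darkened.append(drops[0] if drops else None)
    steps = [b - a for a, b in zip(darkened, darkened[1:])
             if a is not None and b is not None]
    if steps and all(s == 1 for s in steps):
        return "rightward"
    if steps and all(s == -1 for s in steps):
        return "leftward"
    return "none"

# Slivers darken one after another, left to right (1 = lit, 0 = dark):
frames = [
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]
print(infer_motion(frames))  # → rightward
```

The point of the sketch is that the "motion" is entirely a product of the rule applied across frames, which is exactly why artificial slivers on a screen fool the same mechanism.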

  31. Yeah, ‘indirect’ is used differently: the meanings are probably related, but I better leave that to KN to suss out. Seems kind of hairy.

    Or maybe the four of us (including Bruce) should try to write a paper on those connections/disconnects some day!

  32. walto: I think there’s a similar issue with “inference.” What has to be the case for somebody to be said to “make an inference”? E.g., do they have to believe some premises and then conclude something from them?

    You could probably have an entire issue of a philosophy journal filled with arguments about this.

    Here are a few examples:

    Consider an ordinary traditional analog thermometer.
    (a) is the thermometer making an inference;
    (b) is the person reading the thermometer making an inference simply by virtue of reading the thermometer.

    I say “no” to (a), and “maybe” to (b). The maybe means that I haven’t made up my mind.

    A computer solves an equation:
    (a) is the computer making an inference;
    (b) is the computer user making an inference.

    I think this question is a lot more controversial than the thermometer question. And I should stipulate that here “computer” means something like that box on your desk, and does not mean a human who is doing computation.

    I’m inclined to say “no” for (a), and “it depends” for (b) — depends on the relation between the user and the problem.

  33. Neil,

    As I’ve mentioned before, the label you choose is far less important than the activity being labeled.

    In the case of the motion illusions, motion is perceived despite being entirely absent from the stimulus. I call that an inference. You may choose to call it something else. Either way, the (illusory) motion is being created by the visual system. You can’t directly perceive motion when there is no motion there to perceive. Hence your inability to answer my question:

    Indirect perceptionists can explain the motion illusions. How do you explain them?

    Think about my fence example. Indirect perception explains how a mechanism that is responsible for the veridical perception of motion in one case (the person walking behind the fence) gets fooled into perceiving illusory motion in another case (artificial “slivers” on a computer screen darkening and lighting back up in sequence).

    How do you, as a direct perceptionist, explain that?

  34. keiths: As I’ve mentioned before, the label you choose is far less important than the activity being labeled.

    Indeed. “What’s going on here” is a much better approach to understanding than deciding what to call it.

    In the case of the motion illusions, motion is perceived despite being entirely absent from the stimulus. I call that an inference. You may choose to call it something else.

    I think you could also use the word “interpretation”. The raw data arriving at the visual interface is in 2D if you (for the moment and for simplification) discount time and point-of-view change [ETA and binocular vision]. The visual system/brain interprets the raw data as a 3D “perception”.

    Either way, the (illusory) motion is being created by the visual system. You can’t directly perceive motion when there is no motion there to perceive.

    I suggest movement tells a sentient organism (prey or predator) a whole lot about its environment and what to do next. Interpreting a 2D image that is changing over time into “food” or “run and hide” more efficiently than your niche competitor is a trait worth inheriting.

  35. Neil Rickert: I’m being criticized for denying models. I don’t know that I ever have.

    I didn’t think you had either. Hence my odd phraseology – I thought you were proposing a model-based model.

    Consider:
    (a) there are models;
    (b) there aren’t models;

    I’m not at all sure what is supposed to be the difference between those.

    The representationalists argue for models. But they also argue that perception is an entirely mechanistic computation. And they argue that there is no such thing as intentionality.

    The puzzle, for me, is what could it even mean to be using a model without intentionality?

    Oh, and a word of thanks for your distinctions between peripheral and foveal vision. I found that useful. I avoided commenting at the time, because I wanted to avoid getting into another long series of completely unproductive miscommunications with a certain commenter.

    Vision is interesting, and often very counter-intuitive. There’s a fantastic book by Findlay and Gilchrist, called Active Vision, which restores the motor component that had been largely missing from fixation-dominated empirical paradigms in vision science for decades.


  36. Kantian Naturalist:
    Granted, but I’d still like to know if you, KeithS and Elizabeth are talking about agential or subagential descriptions, and thus whether the mediation or immediacy is epistemic or causal. (It would be problematic if one were to maintain that perception is immediate only if there are no causal intermediaries between mind and object.)

    I’d love to be able to answer, but I’m afraid I don’t understand the question! En Anglais, s’il vous plait?

  37. Neil Rickert:
    Kantian Naturalist,

    Yes, there’s a lot of talking past one another on this issue.

    But I think you can say the same about evolution, where mutations are said to be copying errors.But that seems to presuppose a purpose of copying exactly, else there could not be errors.Maybe it should be that there’s a purpose of creating the next generation, using the current genome only as a guide.

    We are stuck with the natural language that we have, and it is full of a vocabulary of intention which sometimes doesn’t fit the way that the words are used.Possibly some of the critics of evolution are offended by the idea that they are the result of an accumulation of errors.

    Exactly.

  38. Dennett has a nice bit of Dennetty fun with the idea that “active vision” involves “filling in” the scene with a series of fixations, which is a kind of non-active visionist’s idea of what active vision might be (given the pretty incontrovertible evidence that we are only able to process colour and high spatial frequency information from only a couple of degrees of visual arc, yet have the percept of an entire detailed and coloured visual scene), referring to the stuff we “fill in” the scene with as “figment”.

    I think it’s possibly better to think of vision as knowledge, including the knowledge that what we don’t know we can instantly find out. So if I stare at the screen in front of me, I can see a big green dragon plant to my left. Only my training tells me that I can “see” no such thing – all I can “see” in “reality” is the word I am typing (I touch type). However, not only do I know (because I’ve seen it before) that the dragon plant is there, and is green, I also know (and my visual system knows, at a very “low” (“subagential”?) level) that if I need to check that it is green (and not, say, brown and in need of watering) I can instantly make a saccade to it to check.

    And I also know that even by preparing such a saccade (but not actually executing it), my LIP neurons (well the human equivalent of the macaque LIP neurons) will shift their receptive fields so that even while staring at the screen in front, I can process more fine-grain information from the plant. Which is why, if I do make the saccade, I won’t have the sensation that the entire world has lurched – my system has already “modelled” the way the world will look when I have made the saccade, and only needs to make minor adjustments to the model once I get there, at which point, the retinal image will be remapped in the same world coordinates as my screen was (and not in eye-position, or head position, or body position coordinates).

    I find it hard to try to describe all that without using the word “model” and the term “forward-model” is standard language in the field.

    The analogy I like best is the fridge light – because the fridge light is always on when we want to look in the fridge, we don’t think of the fridge as being dark. If we want to check that the light is on, it will be on. It’s not that we “fill in” the fridge with light. We don’t need to, because we already know that it will be light whenever it matters. I’d call that a [forward] model of a lit fridge.

  39. Elizabeth: I think it’s possibly better to think of vision as knowledge, including the knowledge that what we don’t know we can instantly find out. So if I stare at the screen in front of me, I can see a big green dragon plant to my left. Only my training tells me that I can “see” no such thing – all I can “see” in “reality” is the word I am typing (I touch type). However, not only do I know (because I’ve seen it before) that the dragon plant is there, and is green, I also know (and my visual system knows, at a very “low” (“subagential”?) level) that if I need to check that it is green (and not, say, brown and in need of watering) I can instantly make a saccade to it to check.

    The late Fred Dretske wrote a ton of stuff, including his early Seeing and Knowing (which I think started as his dissertation), discussing the relations between seeing and knowing. One of the questions here is whether seeing X can occur without seeing that X is something or other.

    Another book I like on vision is the one by…Gerald Vision! FWIW, Dretske and Vision are two of my faves.

  40. Neil Rickert:

    Consider an ordinary traditional analog thermometer.
    (a) is the thermometer making an inference;
    (b) is the person reading the thermometer making an inference simply by virtue of reading the thermometer.

    I say “no” to (a), and “maybe” to (b). The “maybe” means that I haven’t made up my mind.

    A computer solves an equation:
    (a) is the computer making an inference;
    (b) is the computer user making an inference.

    I’m inclined to say “no” for (a), and “it depends” for (b) — depends on the relation between the user and the problem.

    I’d say that if a physical system is doing inference, then necessarily it is doing physical computation in Piccinini’s sense (as linked above). So I’d reject the thermometer too.

    Is computation sufficient for inference as well? My intuition is yes, it is. So for your second example, I’d say both are making inferences. (That also allows me to say inferences are possible at all subpersonal levels in perception.)

    I conclude from your second example that you do not agree that computation suffices for inference. That would mean you think something else is needed, something about the “user and the problem.” Could it be related to “original intentionality”? That is, do you think a computer “computing” an inference is manipulating symbols whose meaning is defined by the human programmer, whereas a human with the “right relation to the problem” is manipulating symbols whose meaning originates in him or her?

    I am just taking a flyer on what might underlie your position in differentiating the two situations; I personally don’t accept original/derived intentionality in Searle’s sense.

  41. walto: One of the questions here is whether seeing X can occur without seeing that X is something or other.

    Happens to me a lot with distant objects in busy backgrounds. I have to get closer or shift my point of view to clarify what I’m seeing. Don’t recall noticing the problem before losing the sight in my right eye some years ago. For example, I have to concentrate when driving to decide if a very distant vehicle is parked or moving towards me.

  42. Alan Fox,

    IIRC, Dretske has a paper in which he specifically discusses seeing something on the road. (I think it was a lemon colored VW.)

  43. BruceS: I’d say that if a physical system is doing inference, then necessarily it is doing physical computation in Piccinini’s sense (as linked above). So I’d reject the thermometer too.

    Is computation sufficient for inference as well? My intuition is yes, it is. So for your second example, I’d say both are making inferences. (That also allows me to say inferences are possible at all subpersonal levels in perception.)

    I conclude from your second example that you do not agree that computation suffices for inference. That would mean you think something else is needed, something about the “user and the problem.” Could it be related to “original intentionality”? That is, do you think a computer “computing” an inference is manipulating symbols whose meaning is defined by the human programmer, whereas a human with the “right relation to the problem” is manipulating symbols whose meaning originates in him or her?

    I am just taking a flyer on what might underlie your position in differentiating the two situations; I personally don’t accept original/derived intentionality in Searle’s sense.

    Bruce, do you deny that making inferences requires having beliefs or are you saying that the computing devices do have beliefs?

  44. walto: As I’ve said before, the philosophical use is, in a sense, entirely uninteresting and scientifically void. It’s basically about what perception means–and nothing else.

    walto:

    I just want to add that I admire philosophers like KN who want to get a good handle on the scientific side here. It’s important for decent philosophy not to say anything that contradicts current scientific findings, and, at least if one tries to say more substantive things than I do, that’s a danger if one doesn’t learn some of the science.

    I see a tension in those two comments.

    Specifically, if philosophy both should be informed by science but also analyse conceptual issues in science, shouldn’t philosophers make sure they align with scientific meanings for terms?

    Maybe this simply involves separating philosophy about meanings in scientific usage versus meanings in everyday usage. But I doubt it is that simple.

    We’ve been down this road several times in other threads, I realize. But the topic continues to interest me. Just ignore this reply if you do not want to go there again.

  45. walto: Bruce, do you deny that making inferences requires having beliefs or are you saying that the computing devices do have beliefs?

    I deny inference involves beliefs.

    Having beliefs would involve having representations (of propositions, for example), I think, and as per Piccinini’s analysis, I want to separate representations from physical computing. I also want to say that physical computing is enough for inference so I can use that term at all levels of subpersonal perception.

    It comes down to whether one thinks it is useful to be able to use “inference” that way, I suppose.

    Computers can do the manipulations to successfully reproduce proofs in formal logic, so I guess you could also say it depends on whether you think that is enough to count as doing inference.
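    [A minimal sketch, added by the editor and not part of the original discussion, of the kind of purely mechanical symbol manipulation being described: a program that "reproduces" a formal-logic inference such as modus ponens by brute-force truth-table checking, with no beliefs in sight.]

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Check semantic entailment by enumerating all truth assignments.

    Each formula is represented as a function from an assignment
    (a dict mapping variable names to booleans) to a boolean.
    """
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        # A counterexample is an assignment making all premises
        # true but the conclusion false.
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Modus ponens: from P and (P -> Q), infer Q.
P = lambda env: env["P"]
P_implies_Q = lambda env: (not env["P"]) or env["Q"]
Q = lambda env: env["Q"]

print(entails([P, P_implies_Q], Q, ["P", "Q"]))  # True
```

    [Whether running such a procedure counts as the machine itself *inferring*, or merely as computation that a user can exploit to infer, is of course exactly the question at issue.]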

    Late ETA: When I say beliefs involve, e.g., representations of propositions, I don’t mean to imply that one has to take a Fodorian LOT approach. One could take the intentional stance and say the real patterns of regularities, which the stance calls beliefs, themselves have scattered causes involving neural representations like the ones in predictive coding. And possibly the vehicles for those representations extend beyond the brain, so as to leave my options open with respect to Clark’s view.
