Bad Dogs and Defective Triangles

Is a dog with three legs a bad dog? Is a triangle with two sides still a triangle or is it a defective triangle? Perhaps if we just expand the definition of triangle a bit we can have square triangles.

There is a point of view that holds that to define something we must say something definitive about it and that to say that we are expanding or changing a definition makes no sense if we don’t know what it is that is being changed.

It is of the essence or nature of a Euclidean triangle to be a closed plane figure with three straight sides, and anything with this essence must have a number of properties, such as having angles that add up to 180 degrees. These are objective facts that we discover rather than invent; certainly it is notoriously difficult to make the opposite opinion at all plausible. Nevertheless, there are obviously triangles that fail to live up to this definition. A triangle drawn hastily on the cracked plastic sheet of a moving bus might fail to be completely closed or to have perfectly straight sides, and thus its angles will add up to something other than 180 degrees. Even a triangle drawn slowly and carefully on paper with an art pen and a ruler will have subtle flaws. Still, the latter will far more closely approximate the essence of triangularity than the former will. It will accordingly be a better triangle than the former. Indeed, we would naturally describe the latter as a good triangle and the former as a bad one. This judgment would be completely objective; it would be silly to suggest that we were merely expressing a personal preference for straightness or for angles that add up to 180 degrees. The judgment simply follows from the objective facts about the nature of triangles. This example illustrates how an entity can count as an instance of a certain type of thing even if it fails perfectly to instantiate the essence of that type of thing; a badly drawn triangle is not a non-triangle, but rather a defective triangle. And it illustrates at the same time how there can be a completely objective, factual standard of goodness and badness, better and worse. To be sure, the standard in question in this example is not a moral standard. But from the A-T point of view, it illustrates a general notion of goodness of which moral goodness is a special case. And while it might be suggested that even this general standard of goodness will lack a foundation if one denies, as nominalists and other anti-realists do, the objectivity of geometry and mathematics in general, it is (as I have said) notoriously very difficult to defend such a denial.

– Edward Feser, “Being, the Good, and the Guise of the Good”

This raises a number of interesting questions, by no means limited to the following:

What is the fact/value distinction?

Whether values can be objective.

The relationship between objective goodness and moral goodness.

And of course, whether a three-legged dog is still a dog.

Meanwhile:

One Leg Too Few

469 thoughts on “Bad Dogs and Defective Triangles”

  1. Kantian Naturalist:
    keiths,

    That quote from Michaels and Carello seems to conflate the distinction between agential and subagential description, though. That there’s no epistemic intermediary at the agential level doesn’t entail that there’s no causal intermediary at the subagential level.

    Clark accuses Hohwy, another philosopher, of the same confusion near the end of the paper I linked.

    He quotes Hohwy as saying

    one unfashionable thing that this theory tells us about the mind is that perception is indirect […] what we perceive is the brain’s best hypothesis, as embodied in a high-level generative model, about the causes in the outer world.

    — Hohwy, J. (2007). “Functional Integration and the Mind.” Synthese 159(3): 315–328.

    So, according to Hohwy, perception is indirect because we perceive, at least in part, our internal representations.

    Clark replies that this does happen, but it happens at the subpersonal level. At the personal level, we have direct contact with the world.

    He uses the extensive analysis of illusions under the Predictive Coding (PC) model to justify this. These analyses show that we have illusions because top-down models conflict with bottom-up sensory input, but based on the context provided by the model and some of the input, we perceive the illusion. Now the PC analysis shows we could only have these illusions if our neural models accurately reflected overall statistical regularity in the world; further work shows they do. Hence the very nature of illusions means we must usually have direct, accurate contact with the regularities of the world.
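
    To make the top-down/bottom-up conflict concrete, here is a toy sketch (Python, invented numbers; a cartoon of the general PC idea, not Clark’s or Hohwy’s actual model). The percept is the top-down prediction corrected by a weighted bottom-up error, so a trusted prior can override discrepant input, which is exactly the structure of an illusion:

        # Toy predictive-coding update (illustrative only).
        # percept = prediction + gain * (input - prediction); the gain reflects
        # how much the bottom-up signal is trusted relative to the prior.
        def perceive(prediction, sensory_input, gain):
            error = sensory_input - prediction        # bottom-up prediction error
            return prediction + gain * error          # weighted correction

        # A confident top-down model (low gain) keeps the percept near the
        # prediction even when the input disagrees:
        print(perceive(prediction=10.0, sensory_input=2.0, gain=0.1))  # 9.2 ("illusion")
        print(perceive(prediction=10.0, sensory_input=2.0, gain=0.9))  # 2.8 (input wins)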

  2. Kantian Naturalist:

    That quote from Michaels and Carello seems to conflate the distinction between agential and subagential description, though. That there’s no epistemic intermediary at the agential level doesn’t entail that there’s no causal intermediary at the subagential level.

    Suppose one is well-informed about the Müller-Lyer illusion. Yet still, as KeithS has noted, when one looks at it the lines appear to be of different lengths.

    Are the following statements correct at the agent level?
    1. I perceive the lines have different lengths.
    2. I believe the lines have different lengths.
    3. I know the lines have the same length.

    If you think both 1 and 2 should be “the same length”, is there any verb x such that one can say “I x the lines have different lengths”?

    ETA: If knowledge is justified true belief, then 2 should have been “I believe the lines are the same length”. So what about “perceive”?

  3. I know from recent personal experience that seeing and knowing can be different. One of my eyes has been repaired after being functionally blind for a number of years.

    The acuity was restored within hours, but it took days to make sense of complex objects. The first skill to return was navigation. It took longer for abstract objects like words. It was rather odd to be able to see the image clearly but not be able to interpret it.

  4. BruceS,

    As you say, it doesn’t much matter how one defines ‘inference’ so long as we’re clear. It could be, though, that the claim that computers make inferences in the paper you cite is question-begging. That is, ‘inference’ may just be used to mean something like ‘whatever computers do when they do X.’ In that case, it won’t be particularly interesting to hear, ‘See? Computers DO make inferences! I mean, mine is actually doing X right now!!’

Elizabeth: I think it’s possibly better to think of vision as knowledge, including the knowledge that what we don’t know we can instantly find out.

    Yes, I agree. At one time, I thought of using the expression “perceptual knowledge”, but then I found out that expression was already in use for something that seemed far less important.

    Those who favor direct perception also consider “perceptual learning”. There’s a book on that by Eleanor Gibson (JJ Gibson’s wife). JJ Gibson’s idea is that we have transducers for directly perceiving specifics, those transducers being a result of perceptual learning.

    One way of determining the temperature is to use a whole lot of experiential data as the basis for inferences about temperature. The other way is to build a goddamn thermometer which directly reads the temperature. A whole lot of experience may have gone into designing the thermometer. But, once we have that thermometer, we can directly read temperature.

    The whole idea of direct perception, at least as I see it, is that we don’t store data and use it for inference. Instead, we build special purpose neural appliances which we can then use to get the information directly without any need for inference.
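
    A crude sketch of that contrast (Python, with invented names and calibration numbers): the “inference” route consults stored experience on every query, while the “appliance” route puts the experience into the design once and then just reads the answer off:

        # Toy contrast: inferring from stored experience vs. reading a
        # pre-built "appliance". Calibration data is invented.
        samples = [(0.1, 5.0), (0.2, 10.0), (0.4, 20.0), (0.8, 40.0)]  # (signal, deg C)

        def infer_temperature(signal):
            # "Inference": consult all the stored experience on each query.
            nearest = min(samples, key=lambda s: abs(s[0] - signal))
            return nearest[1]

        def build_thermometer(samples):
            # "Appliance": the experience goes into the design, once.
            slope = samples[-1][1] / samples[-1][0]
            return lambda signal: slope * signal      # afterwards, direct readout

        thermometer = build_thermometer(samples)
        print(infer_temperature(0.35), thermometer(0.35))  # 20.0 17.5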

    The analogy I like best is the fridge light – because the fridge light is always on when we want to look in the fridge, we don’t think of the fridge as being dark.

    Yes, an excellent example. We don’t expect the refrigerator to be doing complex inferences as to when the light needs to be on. Instead, we create a simple appliance (a door switch) which automates it.

    My view of direct perception is based very much on this building of neural appliances. My view of learning puts most of the emphasis on perceptual learning (which amounts to the building, testing, and fine-tuning of appliances). My view of memory is that our experience of memory comes from testing and reproducing some of the capabilities of those neural appliances.

    Neuronal speeds are very slow, compared to the speeds of our digital devices. This building of neural appliances is probably the only way that we could make decisions rapidly enough.

  6. petrushka:
    I know from recent personal experience that seeing and knowing can be different. One of my eyes has been repaired after being functionally blind for a number of years.

    The acuity was restored within hours, but it took days to make sense of complex objects. The first skill to return was navigation. It took longer for abstract objects like words. It was rather odd to be able to see the image clearly but not be able to interpret it.

    That’s really interesting.

  7. BruceS: I conclude from your second example that you do not agree that computation suffices for inference.

    As I see it, inference involves making considered judgments. The computer is just doing mechanistic rule following, so isn’t really making considered judgments, though we sometimes talk as if it is.

  8. petrushka:
    I know from recent personal experience that seeing and knowing can be different. One of my eyes has been repaired after being functionally blind for a number of years.

    The acuity was restored within hours, but it took days to make sense of complex objects. The first skill to return was navigation. It took longer for abstract objects like words. It was rather odd to be able to see the image clearly but not be able to interpret it.

    There’s a philosophical problem relating to a similar sort of situation: Molyneux’s problem.
    The Wiki article outlines a recent experimental analysis involving people in India who suffered from total congenital blindness but were cured by an operation. Could they visually recognize objects they’d known only by touch right away? No, it took time.

    I’m glad the operation worked out for you.

  9. Another phenomenon I think is more common. When you first try glasses with a strong prescription, you experience a lot of distortion. I went from being nearsighted to being presbyopic. My old glasses were concave. Reading glasses are convex. When I first put on reading glasses, everything was radically distorted. Rectangles were barrels. After a couple of weeks, shapes are normal. Moving my head no longer makes things swim.
    Any philosophizing that does not account for the ability to learn how to interpret what is seen is incomplete.

  10. Neil Rickert: As I see it, inference involves making considered judgments. The computer is just doing mechanistic rule following, so isn’t really making considered judgments, though we sometimes talk as if it is.

    It’s probably too much caffeine getting the better of me, but can I ask if the concept of “considered judgments” is reducible in any way? Can you explain the meaning in terms of other simpler concepts?

    I read Keith as also wanting to use the word “inference” for subpersonal processing, so you and he are arguing from different conceptual schema. Not that I’m claiming anything new or surprising in that conclusion.

  11. petrushka:
    Another phenomenon I think is more common. When you first try glasses with a strong prescription, you experience a lot of distortion. I went from being nearsighted to being presbyopic. My old glasses were concave. Reading glasses are convex. When I first put on reading glasses, everything was radically distorted. Rectangles were barrels. After a couple of weeks, shapes are normal. Moving my head no longer makes things swim.
    Any philosophizing that does not account for the ability to learn how to interpret what is seen is incomplete.

    There is a lot of philosophical speculation about adaptation to inverting lenses and perception, and also about what blurry vision says about representationalist theories of qualia and perception. But I’m not sure they speak to your experience exactly.

  12. Bruce, to Neil:

    I read Keith as also wanting to use the word “inference” for subpersonal processing, so you and he are arguing from different conceptual schema.

    Again, the word choice doesn’t really matter. An earlier comment of mine:

    Neil,

    As I’ve mentioned before, the label you choose is far less important than the activity being labeled.

    In the case of the motion illusions, motion is perceived despite being entirely absent from the stimulus. I call that an inference. You may choose to call it something else. Either way, the (illusory) motion is being created by the visual system. You can’t directly perceive motion when there is no motion there to perceive. Hence your inability to answer my question:

    Indirect perceptionists can explain the motion illusions. How do you explain them?

    Think about my fence example. Indirect perception explains how a mechanism that is responsible for the veridical perception of motion in one case (the person walking behind the fence) gets fooled into perceiving illusory motion in another case (artificial “slivers” on a computer screen darkening and lighting back up in sequence).

    How do you, as a direct perceptionist, explain that?

  13. Bruce,

    So, according to Hohwy, perception is indirect because we perceive, at least in part, our internal representations.

    Clark replies that this does happen, but it happens at the subpersonal level. At the personal level, we have direct contact with the world.

    Or perhaps more accurately, we experience ourselves as having direct contact with the world. The direct/indirect debate in perceptual psychology is all about what’s going on “under the hood” before percepts become conscious.

  14. petrushka,

    The acuity was restored within hours, but it took days to make sense of complex objects. The first skill to return was navigation. It took longer for abstract objects like words. It was rather odd to be able to see the image clearly but not be able to interpret it.

    It must have been especially odd to look at the same object with both eyes, but with only one visual pathway being able to interpret the object.

    How long did it take for stereo depth perception to return?

  15. BruceS:
    I see a tension in those two comments.

    Specifically, if philosophy both should be informed by science but also analyse conceptual issues in science, shouldn’t philosophers make sure they align with scientific meanings for terms?

    Maybe this simply involves separating philosophy about meanings in scientific usage versus meanings in everyday usage. But I doubt it is that simple.

    We’ve been down this road several times in other threads, I realize. But the topic continues to interest me. Just ignore this reply if you do not want to go there again.

    Dunno if I’ve posted this before, and I apologize in advance for my perpetual “gospel thumping,” but here’s something from Hall’s Philosophical Systems that I believe addresses your question:

    If one uses [common sense] terms as his indefinables and then builds the latter from them, he is finding a place for modern science but not using it as his (sole) external ground; he is certainly using ordinary thought as something given. Technically, the converse is possible; just as one could define “field,” for example in terms of events that things with such and such properties enter, so one could define “thing” as a certain class of fields or of classes of fields. But now if this latter is done (and done consistently and categorially for all such terms) how could we ever identify what we are talking about, how could we interpret our language?

    I think Hall is here saying something like Neil has said above, that formal and natural systems of symbols are importantly different, but Hall adds that the formal one isn’t really a language unless it’s reducible in some manner to a natural one. That’s because Hall takes languages to be essentially intentional (referring), and takes the intentionality of (natural) language to derive from the intentionality of thought–something he believes to be not reducible at all.

  16. walto:
    BruceS,

    As you say, it doesn’t much matter how one defines ‘inference’ so long as we’re clear. It could be, though, that the claim that computers make inferences in the paper you cite is question-begging. That is, ‘inference’ may just be used to mean something like ‘whatever computers do when they do X.’ In that case, it won’t be particularly interesting to hear, ‘See? Computers DO make inferences! I mean, mine is actually doing X right now!!’

    I am taking Piccinini’s definition of computation, so it is not just what computers do. Any physical device that meets that criterion works.

    But I do admit that I am having second thoughts about computation being sufficient for inference; there is likely only a subset of computations that deserve the term and you are right that it does not seem easy to define that subset without begging the question.

    First, to make computation necessary for all inference, including inductive and abductive, I think I need to specify that “deep learning” algorithms cover induction and Bayesian analysis covers abduction (inference to the best explanation).

    Given that, it might be possible to define the subset of computations for inference by detailing the nature of the computations corresponding to deduction, induction, and abduction.

    I think computations corresponding to deduction should be definable as syntactic manipulations that preserve truth value. So that would do for that subset.
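
    As a minimal sketch of what a truth-preserving syntactic manipulation could look like (Python; the string representation is invented purely for illustration), here is modus ponens applied to formulas treated as uninterpreted strings:

        # Modus ponens as pure syntax: no meanings consulted, only string shape.
        # If "p" and "p -> q" are both among the premises, derive "q".
        def modus_ponens(premises):
            derived = set(premises)
            for s in premises:
                if " -> " in s:
                    antecedent, consequent = s.split(" -> ", 1)
                    if antecedent in premises:
                        derived.add(consequent)
            return derived

        print(modus_ponens({"rain", "rain -> wet"}))  # {'rain', 'rain -> wet', 'wet'}

    Because the rule is sound, any interpretation that makes the premises true also makes the derived sentence true; truth preservation falls out of the syntax alone.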

    You’d need to assume the priors are given as input to the Bayesian computations, I think, and as lying outside of inference, at least outside of the current inference. But given that, the nature of Bayesian computations is well specified.
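
    A minimal Bayesian update along those lines (Python, toy numbers): the priors are supplied from outside, and the computation itself just turns the crank:

        # Bayes' rule over a handful of candidate hypotheses. The priors are
        # inputs, not products of the inference.
        def bayes_update(priors, likelihoods):
            unnorm = {h: priors[h] * likelihoods[h] for h in priors}
            total = sum(unnorm.values())
            return {h: p / total for h, p in unnorm.items()}

        priors      = {"flu": 0.1, "cold": 0.9}   # given as input
        likelihoods = {"flu": 0.8, "cold": 0.2}   # P(observed symptom | hypothesis)
        print(bayes_update(priors, likelihoods))
        # {'flu': 0.307..., 'cold': 0.692...}  "flu" gains support, "cold" still leads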

    Induction assumes similarity of cases can be recognized, common properties identified, and conclusions drawn from that recognition. Computationally, it would involve recognizing clusters and classifying based on those clusters, and there are “deep learning” neural algorithms which attempt to do that without any human training.
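
    And for the inductive case, a bare-bones version of “recognize clusters, then classify by proximity” (a hand-rolled one-dimensional k-means-style sketch in Python; actual deep-learning systems are far more elaborate):

        # Induction as clustering: group past cases, classify new ones by
        # nearness to a cluster centre. Toy 1-D data, two clusters.
        def kmeans_1d(points, centres, iterations=10):
            for _ in range(iterations):
                groups = {c: [] for c in centres}
                for p in points:                  # assign each point to its nearest centre
                    nearest = min(centres, key=lambda c: abs(c - p))
                    groups[nearest].append(p)
                centres = [sum(g) / len(g) for g in groups.values() if g]
            return centres

        data = [1.0, 1.2, 0.8, 9.7, 10.1, 10.4]
        centres = kmeans_1d(data, centres=[0.0, 5.0])
        print(centres)                                   # roughly [1.0, 10.07]
        print(min(centres, key=lambda c: abs(c - 9.0)))  # a new case joins the ~10.07 cluster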

    But now my head hurts.

    So I’ll leave specifying the details as an exercise for the reader.

  17. walto:

    I think Hall is here saying something like Neil has said above, that formal and natural systems of symbols are importantly different, but Hall adds that the formal one isn’t really a language unless it’s reducible in some manner to a natural one. That’s because Hall takes languages to be essentially intentional (referring), and takes the intentionality of (natural) language to derive from the intentionality of thought–something he believes to be not reducible at all.

    This idea of formal languages being reducible to natural languages came up in another thread in a conversation you had with KN.

    I think Hall’s idea fails for computer languages, at least when it comes to understanding written communication. I do mean communication between people; I don’t mean “communication” between people and computers.

    People read other people’s programs and understand what they do. There are even formal programming processes called “code walkthroughs” where groups of people read and review code. No one translates the code into natural language as part of reading and understanding it. The syntax, semantics and pragmatics are in the code. They depend on the language definition in context, eg what computer, compiler, and libraries the code assumes.

    I suspect that also applies to professional mathematicians reading proofs. But I don’t have the first hand experience with professional mathematics that I have with programming.

    Now you could say that computer languages are documented and explained in natural language. But I think that would be just confusing the metalanguage (English) with the object language (the programming language).

  18. Neil,

    If you’re comfortable with the idea of “neural appliances”, why not a neural appliance that indirectly detects motion?

  19. BruceS: No one translates the code into natural language as part of reading and understanding it.

    I don’t think it has to be translated: it just must be translatable in principle. That’s what’s meant by “reducible.”

  20. BruceS: It’s probably too much caffeine getting the better of me, but can I ask if the concept of “considered judgments” is reducible in any way? Can you explain the meaning in terms of other simpler concepts?

    I’ll take it that you are looking for a very weak form of “considered” such as does not require conscious deliberation.

    The simplest that I can think of is categorization. Or, in more detail, drawing a (perhaps imaginary) line that divides the world into two parts (each side of the line).

    Simple examples: a logic gate deciding to count its input as a logic 1 (or a logic 0) — that is, entering one of its two stable states based on the input voltages.
    A thermostat switching the air conditioner on (or off) because of the temperature sensed in its bi-metallic strip.
    A neuron transitioning to a conducting state because its inputs have reached a threshold.
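
    All three examples share one shape: compare a continuous input against a threshold and fall into one of two states. A sketch (Python, invented thresholds):

        # The common form of the three examples: a line drawn through a
        # continuous quantity, yielding a two-sided categorization.
        def categorize(value, threshold):
            return 1 if value >= threshold else 0

        logic_gate = categorize(3.1, threshold=1.4)     # volts      -> logic 1
        thermostat = categorize(26.0, threshold=24.0)   # degrees    -> AC on
        neuron     = categorize(-62.0, threshold=-55.0) # millivolts -> below firing threshold
        print(logic_gate, thermostat, neuron)           # 1 1 0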

    Looked at as physical events, these seem to all be chaotic transitions. But then an alien, attempting to make sense of human behavior, would probably see our considered decisions as chaotic events.

    A categorization event as described (drawing an imaginary line to divide the world into two parts) is a geometric operation, not a logical operation. I see such geometric operations as at the heart of science and at the core of human cognition. And I see philosophy pretending to ignore their geometric nature so that it can treat everything as logic.

    I read Keith as also wanting to use the word “inference” for subpersonal processing, so you and he are arguing from different conceptual schema.

    Yes, we have been talking past one another. So I’m trying to ignore him, because carrying on a miscommunication is pointless.

  21. The main reason why I’m unhappy with the thought that computers make inferences is that, in inferentialist semantics, inference is a central concept for the account of meaning in a natural language. (The concept of reference is either explained in terms of inference (strong inferentialism) or reference is explained alongside inference (weak inferentialism).)

    I would rather say that the formal languages in which we program computers can work as cognitive extensions for expanding our inferential abilities — in a sense, “logical prostheses” — but a computer doesn’t infer any more than a car walks. (Along similar lines: just because we can model a brain as if it were a computer doesn’t make it one, any more than a computer simulation of a tornado means that weather systems are computational.)

    Despite my admiration for Dennett, I actually agree with Searle on one major point: I think that there is original intentionality. I accept the original/derived distinction. (Perhaps that’s because I’m less of a verificationist than Dennett is?) I just think that Searle goes wrong in locating original intentionality in the brain; rather, I’m with the enactivists in locating original intentionality in the brain-body-environment causal nexus.

    I say that even though there are problems with the enactivist criticism of representationalism, as Andy Clark and Michael Wheeler have pointed out. Wheeler’s book is reviewed here. Wheeler suggests that it only makes sense to talk about representations when we’re talking about functionally modular cognitive systems; in cases where there is too much influence between systems (“continuous reciprocal causation”) the modularity condition cannot be satisfied, and so there aren’t any representations to be individuated. But even when there are representations, Wheeler insists that they are “action-oriented representations”. Clark’s theory of predictive coding might give us a more fine-grained account of how action-oriented representations work.

    Whether representationalism ultimately survives the enactivist critique is an interesting question. I suspect it will, though it will be transformed in the process into action-oriented representations, which are much more “map-like” than they are “model-like” and certainly not theory-like.

  22. Kantian Naturalist: I actually agree with Searle on one major point: I think that there is original intentionality. I accept the original/derived distinction.

    Me too. But of course this view antedates Searle.

  23. BruceS: People read other people’s programs and understand what they do.

    Yes, but it is difficult to do that.

    There are even formal programming processes called “code walkthroughs” where groups of people read and review code.

    They use groups of people so that they can catch each other’s errors.

    Having taught computer programming, I can tell you that students often have difficulty correctly reading their own program that they have just written.

  24. keiths: If you’re comfortable with the idea of “neural appliances”, why not a neural appliance that indirectly detects motion?

    But what does that mean?

    If it is an appliance, then what it does is being done directly (no inference involved, just mechanistic rule following). And, of course, there might be ways that it can be fooled (i.e. that it will detect motion when there is none).

  25. Neil Rickert: Having taught computer programming, I can tell you that students often have difficulty correctly reading their own program that they have just written.

    FWIW, I myself often have difficulty understanding comments I have just made here.

  26. walto: Me too. But of course this view antedates Searle.

    I would guess that no one would have thought that the original/derived distinction needs to be labeled until Dennett criticized it. Before that the distinction was too obvious to need a label.

    Earlier in the thread, the question was raised about whether inference is used in reading a thermometer. I think that this is actually a tricky question because it depends on the shifting relation between perception (esp. sensorimotor skills) and discursive knowledge.

    When one is taught what temperature is, there’s actually a lot of training involved in setting up the correlation between how warm and cold it feels and the numerical value of the temperature. To an American, it takes some inference in order to know that 30 C is pleasantly warm — we have to convert to Fahrenheit first — whereas to someone who has been taught Celsius, no inference is necessary. Once a conceptual system has been mastered, it can be used non-inferentially — someone who has mastered Celsius can report on the temperature based on how it feels to her, because she just “feels” it, in just the same way that a trained physicist can just “see” the pathway in a wire chamber as a subatomic particle.
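
    (The inference in question is just F = C × 9/5 + 32; for 30 C that is 30 × 9/5 + 32 = 86 F, which an American recognizes as pleasantly warm.)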

    (In fact, “seeing as” is a very complicated notion, and despite having read Wittgenstein and Sellars on it, I don’t think I have a fully adequate grasp on how, at the agential level of description, discursively-mediated concepts can permeate one’s sensorimotor skills.)

  27. Neil,

    Yes, we have been talking past one another. So I’m trying to ignore him, because carrying on a miscommunication is pointless.

    No, I’ve understood the claims you’ve made in this thread, but I disagree with some of them. It’s disagreement, not miscommunication. For example, this paragraph is perfectly clear:

    The eye moves in saccades. As the eye moves, the path to a particular retinal receptor sweeps across the visual field. This results in sharp signal transitions as the path crosses an edge. My view is that the perceptual system uses these transitions to locate features in the visual field. I don’t see how vision would be possible without that. The designers of bar code scanners use the same idea to locate bar codes.

    It’s clear, but it’s wrong, as Lizzie and I have explained. Do you understand your error now?

  28. keiths [quoting Neil]: The eye moves in saccades. As the eye moves, the path to a particular retinal receptor sweeps across the visual field. This results in sharp signal transitions as the path crosses an edge. My view is that the perceptual system uses these transitions to locate features in the visual field.

    What, specifically, do you find incorrect about this?

  29. Alan Fox: What, specifically, do you find incorrect about this?

    Might I suggest, quoting Forrest Gump, that both things could be happening at the same time?

    Some part of the retinal processing “anticipates” the shift of data caused by the eye movement, and some part uses the change of data to detect edges.
    It is my understanding, from a 40 year old phys psych course, that frog’s retinas suppress any signal that isn’t consistent with something to eat or a predator. This is before the signal is passed on to the brain. Retinas are part of the brain and are not just sensors.

  30. petrushka: It is my understanding, from a 40 year old phys psych course, that frog’s retinas suppress any signal that isn’t consistent with something to eat or a predator. This is before the signal is passed on to the brain. Retinas are part of the brain and are not just sensors.

    I recall something similar being taught to me about the frog retina and movement. (I’m sure the feisty little jumping spiders we get round here also have some kind of movement tracking system built in). And totally agree about sensory processing and the retina being brain tissue on a stalk.

  31. keiths: It must have been especially odd to look at the same object with both eyes, but with only one visual pathway being able to interpret the object.

    How long did it take for stereo depth perception to return?

    I was able to drive within 5 days. Very carefully. It is not an all or nothing process and is still going on after eight weeks.

    The difficulty I had was that I could not wear glasses on my unoperated eye, because it was also severely nearsighted. You cannot merge the visual field if one eye is significantly different in “power” from the other. So I was in the water, sink or swim, with the corrected eye from day one.

    And it was roughed up a bit from surgery, because the cataract was pretty advanced. It is still a bit shimmery after two months. The acuity is corrected, but some parts that are supposed to be smooth and transparent are still a bit ragged. I can only compare it to looking through dirty glass. Things are in focus, but not completely clear. My second eye, done a month later, was nearly perfect as soon as the dilation drops wore off.

    Stereo vision presents an interesting question. By some very heavy duty magic, 3Dness doesn’t completely go away when one eye doesn’t work. In fact, one eye can be significantly uncorrected and some 3D movie effects still work. I was able to see 3D effects in theaters, but not on 3D TVs. There’s a research project in there somewhere.

    Four weeks after the second operation I am still learning to see things. It’s difficult to separate the changes caused by learning from those caused by gradual improvement in my worse eye. There’s also a persistent tendency to look for my glasses when I wake up and to take them off before going to sleep.

  32. Alan Fox: keiths [quoting Neil]: The eye moves in saccades. As the eye moves, the path to a particular retinal receptor sweeps across the visual field. This results in sharp signal transitions as the path crosses an edge. My view is that the perceptual system uses these transitions to locate features in the visual field.

    What, specifically, do you find incorrect about this?

    Well, it’s not supported by any evidence that I’m aware of. Edge detection is quite well understood in neuroscience, and it’s not done at the retina (although clearly you could design a system that does do this) but quite a long way upstream, by populations of neurons that aggregate inputs from retinotopic neurons – in other words, the edge is computed from simultaneous signals from different neurons, not the temporal difference from the same neurons. Similarly, neural populations, also upstream, are specialised to detect the orientation of the edge.

    In fact, there is evidence that retinotopic signals are suppressed around the time of a saccade, a phenomenon called “saccadic suppression”, although we now know that the suppression must be at a fairly high level. At one time it was thought that saccades were completely ballistic, but with modern video eyetrackers we now know that a saccade path can be quite markedly curved, if a distractor is presented around the time of the saccade (the time when “suppression” is supposed to occur). Sometimes the saccade veers towards the distractor, sometimes away.

    But what is pretty incontrovertible is that the saccade planning system and the visual attention system are actually inseparable. “Covert attention”, i.e. attending to something we aren’t foveating – “seeing in the corner of your eye” – is now widely described as an “unexecuted saccade”. When activation on the saccade map reaches a threshold, we make a saccade – and when we attend to something in peripheral vision, we increase activation on the saccade map in a manner that will generate a saccade if it rises far enough.

  33. petrushka: Some part of the retinal processing “anticipates” the shift of data caused by the eye movement,

    Yes, except I’m not sure it happens in the retina – but certainly in the data processing from the retina. The visual system uses the motor command to move the eye to adjust the way the retinotopic data is processed.

    The classic illustration of this is the experiment you can do yourself, whereby you manually move your eyeball with your finger (preferably through the lid!) – and you see the world move. Make the same movement as part of a saccade (or smooth pursuit) eye movement, and the world stays still.

    The difference is that in the first case there is no motor command from the visual system to move the eye, so the brain doesn’t have the data it needs to do the translation from retinotopic to world coordinates.

  34. Elizabeth: “Covert attention”, i.e. attending to something we aren’t foveating – “seeing in the corner of your eye” – is now widely described as an “unexecuted saccade”.

    I’ve noticed sitting on my back porch and suddenly seeing an anole lizard fifty feet away. If it hadn’t moved, I would never have been able to see it. I think people have a bit of residual frog brain.

  35. Yes, peripheral vision is specialised for movement. Basically, it’s specialised for finding things that you might find interesting (with minimal info – rough shape, does it move) and eliciting an orienting reaction (i.e. saccade, head movement, ears turned too if you are a cat) that lets you check out the fine detail and find out WHAT it is.

    So peripheral vision is wired to a fast “where” pathway (big fast “magnocells”), and fovea to a slower “what” pathway (smaller detail-transmitting but slower “parvocells”). The where pathway is more dorsal, and goes to things like the parietal cortex, which is why parietal strokes, especially right parietal, affect orientation in space. The what pathway goes to ventral temporal regions, and, specifically, to the left temporal lobe where in humans we can give them names.

    What’s really interesting is that in literate humans there is a “visual word form area” – and as we haven’t been literate for very long, it presumably utilises an area evolved for something else. It’s close to an area specialised for telling biological from non-biological movement – so you can quickly tell a lizard from a wind-blown leaf! (except for those lizards that evolved to move like wind-blown leaves…)

  36. I think people have a bit of residual frog brain.

    In my own case, more than “a bit.”

  37. Elizabeth: Edge detection is quite well understood in neuroscience, and it’s not done at the retina (although clearly you could design a system that does do this) but quite a long way upstream, by populations of neurons that aggregate inputs from retinotopic neurons…

    OK, though I didn’t read that into Keith’s quote of Neil. I read it as the processing of signals from the retina happening somewhere in the perceptual system.

  38. petrushka: I’ve noticed sitting on my back porch and suddenly seeing an anole lizard fifty feet away. If it hadn’t moved, I would never have been able to see it. I think people have a bit of residual frog brain.

    It’s great how much field biology you can do just sitting on your own back porch or terrace.

  39. Alan Fox: OK, though I didn’t read that into Keith’s quote of Neil. I read it as the processing of signals from the retina happening somewhere in the perceptual system.

    I’m not sure if I’m reading Neil correctly or not, but keiths is correct, as far as we know, that the edge-detection system works on static data, not data collected during an eye movement.

    You could imagine a system that “swept” across an image, and recorded the changes of light intensity as it went, and where the temporal change was rapid, returned “edge”. But that isn’t how it seems to work. Rather there is an array of retinotopic neurons that receive signals from fovea during a fixation, some of which will be receiving signals from a light portion of the tiny part of the image that is at fovea (about 2 degrees of arc), and some from a dark portion. If these retinal neurons are close together, then the system returns “edge”.
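
    The two candidate mechanisms can be sketched side by side (Python, toy one-dimensional “image”, invented threshold). The arithmetic is the same difference-and-threshold in both; what differs is whether the two compared samples come from one receptor at successive moments or from neighbouring receptors at the same moment:

        # Two toy edge detectors over a 1-D strip of intensities.
        scene = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]   # a dark/light step at index 3

        def temporal_edges(receptor_samples, threshold=0.5):
            # One receptor swept across the scene: successive samples in time.
            return [t for t in range(1, len(receptor_samples))
                    if abs(receptor_samples[t] - receptor_samples[t - 1]) > threshold]

        def spatial_edges(retinotopic_array, threshold=0.5):
            # Many receptors during one fixation: neighbouring simultaneous signals.
            return [i for i in range(1, len(retinotopic_array))
                    if abs(retinotopic_array[i] - retinotopic_array[i - 1]) > threshold]

        print(temporal_edges(scene))  # [3] -- the "sweep" story
        print(spatial_edges(scene))   # [3] -- the static, within-fixation story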

    Outside fovea, edge detection is poorer, or at least tuned to much lower spatial frequencies, so more gradual spatial changes of intensity register “edge” – but not high frequency changes.

    So fovea “sees” the image in the centre (except only a tiny part at a time), while peripheral vision “sees” the image on the right (except where we are foveating).

    But, like the fridge light, because we can immediately investigate, by making a saccade, any part of the image in fine spatial detail that takes our fancy from the large-spatial frequency information, we don’t know we aren’t seeing the left hand image “all the time”.

    So our percept is as the image on the left.

  40. Neil Rickert: Yes, but it is difficult to do that.

    They use groups of people so that they can catch each other’s errors.

    Having taught computer programming, I can tell you that students often have difficulty correctly reading their own program that they have just written.

    Sure, but people have difficulty reading each other’s English too!

    So I’m not sure what that argument buys you. Just because people can write obscurely in something does not mean it is not a language.

  41. walto: I don’t think it has to be translated: it just must be translatable in principle. That’s what’s meant by “reducible.”

    But then the in-principle English translation could be translated back to computer language. So the two cases are symmetric: you could go English to computerese or computerese to English. So why is one a language and not the other?

    Maybe you need to talk about the domain of applicability: it’s only a language if it can be used by a community of agents to communicate about everyday things in the real world.

  42. Kantian Naturalist:
    The main reason why I’m unhappy with the thought that computers make inferences is that, in inferentialist semantics, inference is a central concept for the account of meaning in a natural language.

    You’ve described Brandom’s version of this in a previous post

    For a community of speakers, each speaker holds herself and the others accountable for what they say by keeping track of the compatibility and incompatibility of their commitments and entitlements. (If I assert p, and p implies q, then I am committed to q. If I assert p, and p implies q, but I am already committed to ~q, then I am not entitled to assert p. And so on.)

    I think that if that kind of reasoning was spelt out fully and explicitly, it could be formalized as a computation.

    In general, if inference means intuition is not permitted but that reasoning must be spelled out explicitly if asked for, then that explicit chain of reasoning could be translated into a computation of some sort.
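
    For instance, the two rules in the parenthesis above can be turned directly into bookkeeping (Python; a deliberately tiny sketch with invented structures, nothing like Brandom’s full apparatus):

        # Minimal deontic scorekeeping: asserting p commits you to p's
        # consequences, and you lose entitlement to assert p if those
        # consequences clash with your prior commitments.
        implications = {"p": ["q"]}          # p implies q
        incompatible = {("q", "not-q")}      # q and not-q cannot both be held

        def entitled_to_assert(statement, commitments):
            consequences = {statement, *implications.get(statement, [])}
            return not any((a, b) in incompatible or (b, a) in incompatible
                           for a in consequences for b in commitments)

        print(entitled_to_assert("p", set()))      # True: no clash
        print(entitled_to_assert("p", {"not-q"}))  # False: p commits us to q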

    Despite my admiration for Dennett, I actually agree with Searle on one major point: I think that there is original intentionality.

    I only brought that up as a guess at what Neil was looking for. It was not part of my considered argument for inferring being a certain type of computing.

    (Along similar lines: just because we can model a brain as if it were a computer doesn’t make it one, any more than a computer simulation of a tornado means that weather systems are computational.)

    That analogy fails, I think, if minds are computations (in Piccinini’s sense) performed by the brain/body (and possibly involving outside-the-body vehicles for representation). It fails because a simulation of such a computation is also a computation.

  43. keiths: Or perhaps more accurately, we experience ourselves as having direct contact with the world. The direct/indirect debate in perceptual psychology is all about what’s going on “under the hood” before percepts become conscious.

    I’m not sure what you mean to add by “we experience ourselves”.

    If you mean we don’t have introspective access to the subpersonal mechanisms that give us direct access to the world, then I think that works for me. That’s what I’d take from your second sentence.

  44. Neil Rickert:

    A categorization event as described (drawing an imaginary line to divide the world into two parts) is a geometric operation, not a logical operation. I see such geometric operations as at the heart of science and at the core of human cognition. And I see philosophy pretending to ignore their geometric nature so that it can treat everything as logic.

    Thanks for that explanation, Neil.

    Why can’t the geometry underlying categorization be computation? I’m thinking of algorithms for cluster analysis, for example.

  45. From last year:

    Neil thinks that brains don’t primarily compute. Instead, he thinks that the major component of thought is categorization, which he sees as a non-computational process implemented via our sensory interactions with the world. He also seems to see categorization as a geometric process, perhaps because when you create a category, you divide the world into two pieces: inside the category and outside, which can be visualized geometrically.

    With that background, you can begin to see what he’s getting at in this thread. If thinking is mostly categorization, and if categorization is a geometric process, then most of our thinking is essentially geometric. However, axiomatic geometry is of course computable. Since Neil wants our intelligence to be beyond the reach of mere computation, he needs parts of geometry to be non-computable.

    Hence his reference to “Geometry (broadly conceived)” as being non-computable.

    It doesn’t make a lot of sense to me, but that’s what he’s trying to say, as far as I can tell. I’m sure he’ll correct me if he thinks I got his viewpoint wrong.

  46. BruceS: I only brought that up as a guess at what Neil was looking for. [note that BruceS was referring to a remark about Searle on original intentionality]

    I wasn’t specifically looking for that, though I’ll agree that it is relevant. And, incidentally, I also agree with Searle that there is original intentionality. For that matter, I think he is right that you cannot get semantics from syntax, though I didn’t find the “Chinese Room” of any value.

    As a mathematician, if I am doing a computation or doing strict formal logic, I do that entirely in accordance with rules of syntax and ignoring any semantics. This ability to stick to syntax and ignore semantics is one of the skills that the mathematician (and the computer programmer) must acquire.

    I would count such a computation as a formal inference (assuming that I didn’t make mistakes, and that’s where considered judgment comes in for formal inferences). But it isn’t an ordinary inference until I then apply the semantics to the computed answer (another considered judgment) to see what it says in real world terms.

    On Searle’s CR, I saw his thought experiment as an attempt to show computer scientists (AI folk) that computation only required syntax. But they already knew that, without Searle having to explain it to them.

  47. BruceS: Why can’t the geometry underlying categorization be computation?

    Because the geometry is a step in providing a symbolic representation of reality (or some part of reality). And having a symbolic representation is prior to computation.

  48. keiths: Hence his reference to “Geometry (broadly conceived)” as being non-computable.

    I don’t think I said anything about “non-computable”.

    By “Geometry (broadly conceived)” I intended geometry starting before there are axioms.
