Bad Dogs and Defective Triangles

Is a dog with three legs a bad dog? Is a triangle with two sides still a triangle or is it a defective triangle? Perhaps if we just expand the definition of triangle a bit we can have square triangles.

There is a point of view that holds that to define something we must say something definitive about it, and that talk of expanding or changing a definition makes no sense if we don’t know what it is that is being changed.

It is of the essence or nature of a Euclidean triangle to be a closed plane figure with three straight sides, and anything with this essence must have a number of properties, such as having angles that add up to 180 degrees. These are objective facts that we discover rather than invent; certainly it is notoriously difficult to make the opposite opinion at all plausible. Nevertheless, there are obviously triangles that fail to live up to this definition. A triangle drawn hastily on the cracked plastic sheet of a moving bus might fail to be completely closed or to have perfectly straight sides, and thus its angles will add up to something other than 180 degrees. Even a triangle drawn slowly and carefully on paper with an art pen and a ruler will have subtle flaws. Still, the latter will far more closely approximate the essence of triangularity than the former will. It will accordingly be a better triangle than the former. Indeed, we would naturally describe the latter as a good triangle and the former as a bad one. This judgment would be completely objective; it would be silly to suggest that we were merely expressing a personal preference for straightness or for angles that add up to 180 degrees. The judgment simply follows from the objective facts about the nature of triangles. This example illustrates how an entity can count as an instance of a certain type of thing even if it fails perfectly to instantiate the essence of that type of thing; a badly drawn triangle is not a non-triangle, but rather a defective triangle. And it illustrates at the same time how there can be a completely objective, factual standard of goodness and badness, better and worse. To be sure, the standard in question in this example is not a moral standard. But from the A-T point of view, it illustrates a general notion of goodness of which moral goodness is a special case.
And while it might be suggested that even this general standard of goodness will lack a foundation if one denies, as nominalists and other anti-realists do, the objectivity of geometry and mathematics in general, it is (as I have said) notoriously very difficult to defend such a denial.

– Edward Feser. Being, the Good, and the Guise of the Good
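
The excerpt’s claim that a hastily drawn triangle’s angles “will add up to something other than 180 degrees” suggests a simple objective score. Here is a sketch (my illustration, not Feser’s; the angle measurements are made up) that rates a drawing by how far its measured angles stray from 180:

```ruby
# An objective "defectiveness" score for a drawn triangle: zero for a
# perfect Euclidean triangle, larger the worse the drawing.
# (Hypothetical helper; the angles would come from measuring the drawing.)
def defectiveness(angles_in_degrees)
  (angles_in_degrees.sum - 180.0).abs
end

careful = defectiveness([59.8, 60.1, 60.2])  # drawn with pen and ruler
hasty   = defectiveness([55.0, 70.0, 48.0])  # drawn on a moving bus

careful < hasty  # => true: the careful drawing is the better triangle
```

The comparison is a matter of fact, not preference, which is the point of the excerpt: the standard of “better” falls out of the definition itself.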

This raises a number of interesting questions, by no means limited to the following:

What the fact/value distinction is.

Whether values can be objective.

The relationship between objective goodness and moral goodness.

And of course, whether a three-legged dog is still a dog.

Meanwhile:

One Leg Too Few

469 thoughts on “Bad Dogs and Defective Triangles”

  1. walto: FWIW, I myself often have difficulty understanding comments I have just made here.

    I always thought you were just writing in code.

  2. keiths: why don’t you stop trying to explain what Neil is trying to say, and just say what you think is wrong with what you think he’s saying?

    Because a lot of the time he doesn’t think he’s saying what you think he’s saying.

    And from my reading of what he’s saying, I don’t think he’s saying that either.

  3. Neil Rickert:

    I would count such a computation as a formal inference (assuming that I didn’t make mistakes, and that’s where considered judgment comes in for formal inferences). But it isn’t an ordinary inference until I then apply the semantics to the computed answer (another considered judgment) to see what it says in real world terms.

    I was wondering when someone would explicitly bring in semantics as the distinguishing feature between computation and ordinary inference.

    Maybe, if we follow (my understanding of KN’s summary of) Brandom’s inferentialism, then semantics reduces to computation when “deontic scorekeeping” is spelled out explicitly. Maybe.

    In any event, the issue of semantics occurred to me when thinking about the role of models in interpretation of logic syntax. I said it was enough to count as inference if the computation preserved truth, and that would require including the effects of the model on the inference, I believe.

    So to make it ordinary inference, I am guessing that you think it matters how the model is generated, and that you want models to be generated by real-world agents, possibly living (in KN’s sense) agents, who are also interacting with other agents within a community and a common world so they can develop a conceptual scheme which underlies the agent’s model.

    Is that anything close to what you think?

    I’ve not tried to address “considered judgment”; I am presuming the meaning of that phrase needs to be specified explicitly to count as inference and not intuition, and so would be subject to algorithmic implementation.

    My use of inference assumes the model is given. Although I suppose one could be generated by learning algorithms interacting with the world which the inferences are to be made about.
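
The split described above between formal computation and applying semantics can be made concrete. This is only my illustration, with made-up names: a purely syntactic modus ponens step, plus a separately supplied model that reads the result in real-world terms:

```ruby
# Purely formal step: from premise "p" and premise ["if", "p", "q"],
# derive "q" by symbol shuffling alone. No meanings are consulted.
def modus_ponens(premises)
  conditional = premises.find { |s| s.is_a?(Array) && s[0] == "if" }
  return nil unless conditional && premises.include?(conditional[1])
  conditional[2]
end

conclusion = modus_ponens(["r", ["if", "r", "w"]])  # => "w"

# The semantic step: apply a model (a considered judgment supplies this)
# to see what the computed answer says in real-world terms.
model = { "r" => "it is raining", "w" => "the ground is wet" }
model[conclusion]  # => "the ground is wet"
```

The first function could run on any machine; only the second step, the interpretation against a model, connects the symbols to the world.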

  4. BruceS: But then the in-principle English translation could be translated back to computer language. So the two cases are symmetric: you could go English to computerese or computerese to English. So why is one a language and not the other?

    They’re both languages, but one is like English and the other is like pig-Latin English or Ubby-Dubby. You can translate them back and forth, but one gets its meaning from the other, and not vice-versa.

    Maybe you need to talk about the domain of applicability: it’s only a language if it can be used by a community of agents to communicate about everyday things in the real world.

    No, they’re all languages if they refer, can be understood, etc. But some derive their meaning from others.
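
The round-trip point is easy to make concrete with Ubbi Dubbi (the thread’s “Ubby-Dubby”), which encodes by inserting “ub” before each vowel. A sketch of a simplified version of the scheme (my illustration): translation runs both ways, so translatability alone can’t settle which language is the derivative one:

```ruby
# Simplified Ubbi Dubbi: insert "ub" before each written vowel to encode,
# strip it again to decode. Encoding and decoding are both mechanical,
# yet the encoded form plainly gets its meaning from the English original.
def to_ubbi(word)
  word.gsub(/([aeiou])/, 'ub\1')
end

def from_ubbi(word)
  word.gsub(/ub([aeiou])/, '\1')
end

to_ubbi("skeptical")             # => "skubeptubicubal"
from_ubbi(to_ubbi("skeptical"))  # => "skeptical"
```

The symmetry of the translation is exactly why it can’t be the criterion: the asymmetry, if there is one, lies in where the meaning comes from, not in which direction you can translate.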

  5. Neil Rickert: Because the geometry is a step in providing a symbolic representation of reality (or some part of reality).And having a symbolic representation is prior to computation.

    There we disagree as per my comments on Piccinini.

  6. walto: They’re both languages, but one is like English and the other is like pig-Latin English or Ubby-Dubby. You can translate them back and forth, but one gets its meaning from the other, and not vice-versa.

    No, they’re all languages if they refer, can be understood, etc. But some derive their meaning from others.

    Hey, Ubby-dubby is one of my favourite programming languages. Why the gratuitous insults to ubby-dubby aficionados?

    Computer languages do refer, and they do so without involving English, but to a very restricted domain. Also, just to be clear, let me repeat that it is people-to-people communication I am talking about, so the issue of whether computers can refer is not relevant.

    Computer languages even have their own version of Frege’s puzzles about identity in intensional contexts in the form of “aliasing”. Well, it’s sort of like Frege’s problems. And in any event, that’s beside the pointer (inside joke).
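
The aliasing analogy can be sketched (my illustration; the variable names are hypothetical): two names bound to one object behave rather like “Hesperus” and “Phosphorus”:

```ruby
# Two names, one object: you can't tell from the names alone that they
# co-refer, but acting through one name shows up under the other.
def aliasing_demo
  hesperus   = ["the evening star"]
  phosphorus = hesperus            # an alias: a second name, same object
  phosphorus << "a.k.a. Venus"     # act through one name...
  [hesperus.last, hesperus.equal?(phosphorus)]
end

aliasing_demo  # => ["a.k.a. Venus", true] -- the change is visible under
               # the other name, and equal? confirms identity, not mere
               # equality of value
```

As with Frege’s puzzle, “hesperus == phosphorus” is informative in a way that “hesperus == hesperus” is not, even though both names pick out the same thing.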

  7. BruceS,

    I’ve said (several times) that computer languages refer–why wouldn’t they? But I don’t see how they can do so without being reducible to some natural language. (It doesn’t have to be English, of course.) So please explain how/why you think this is possible.

    PS: Ubby-Dubby is fun.

  8. walto:
    BruceS,

    I’ve said (several times) that computer languages refer–why wouldn’t they? But I don’t see how they can do so without being reducible to some natural language. (It doesn’t have to be English, of course.) So please explain how/why you think this is possible.

    PS: Ubby-Dubby is fun.

    Can French refer without being reducible to English (even though it is translatable to English)? I think the answer is yes, and the reasoning for Ubby-Dubby is the same.

    ETA: Maybe Esperanto is a better analogy since it was artificially created. But when two Esperanto speakers communicate, they don’t need to be surreptitiously reducing to English.

  9. You think Ubby-Dubby can refer without being reducible to some other language, which is ultimately reducible to some natural language?

    Why do you think that? It seems highly implausible to me.

    And of course French can refer without being reducible to English–whether or not it is translatable into English (see Quine et al on that). They’re both natural languages.

  10. walto:
    You think Ubby-Dubby can refer without being reducible to some other language, which is ultimately reducible to some natural language?

    Why do you think that? It seems highly implausible to me.

    And of course French can refer without being reducible to English–whether or not it is translatable into English (see Quine et al on that). They’re both natural languages.

    Since reducibility is symmetric on the domain where they co-refer, I don’t see how it’s a distinguishing criterion. Unless you are arguing that English came first so it has priority?

    ETA: But if so, then maybe English is not a language because it is reducible to Latin?

  11. It’s not a matter of temporal priority either (where are you getting this stuff?).
    It’s simply that I don’t agree that reducibility is symmetric, in spite of translatability being so. There are natural languages and then there are languages, like pig-Latin or computer code, that are constructed out of them. It’s a one-way thing, analogous to baseballs/molecules and molecules/atoms.

  12. Lizzie,

    why don’t you stop trying to explain what Neil is trying to say, and just say what you think is wrong with what you think he’s saying?

    Orfax wanted to know what Neil meant, and Neil’s terse answers weren’t helping, so I stepped in. Why wouldn’t I?

    I concluded my paraphrase with this:

    It doesn’t make a lot of sense to me, but that’s what he’s trying to say, as far as I can tell. I’m sure he’ll correct me if he thinks I got his viewpoint wrong.

    Lizzie:

    Because a lot of the time he doesn’t think he’s saying what you think he’s saying.

    Much of the time he claims that I don’t understand what he’s saying, when in fact it’s clear that I do. The saccade comment is one example. The heliocentrism/geocentrism debate is another. He seems to invoke “miscommunication” in order to avoid admitting error.

    I would prefer for Neil to take responsibility for his positions and defend them instead of falsely claiming that he is being misunderstood. I would especially prefer that he refrain from falsely accusing others of misrepresenting him, then refusing to back up his claims.

  13. BruceS: Piccinini does a better job of defining physical computation, I think. He eliminates representation from it.

    A physical system is a computing system just in case it is a mechanism one of whose functions is to manipulate vehicles based solely on differences between different portions of the vehicles, according to a rule defined over the vehicles.

    BruceS referred to this in a more recent comment.

    It seems to me that Piccinini is smuggling in representations when he mentions “differences”, “different portions” and perhaps also “a rule defined …”.

    Maybe he is not requiring that the computer see them as representations, but he seems to be requiring that of us when we decide if something is computation.
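
For reference, here is a minimal sketch of the kind of mechanism Piccinini’s definition seems to pick out, as I read it (my gloss, not his example): the rule is defined solely over differences between portions of the vehicle, with no appeal to what, if anything, the string represents:

```ruby
# The "vehicle" is a digit string; the rule cares only about the
# difference between portions (is this portion "0" or "1"?), never about
# what the string might stand for.
def flip(vehicle)
  vehicle.chars.map { |portion| portion == "0" ? "1" : "0" }.join
end

flip("0110")  # => "1001" -- the same manipulation whether the string
              # represents a number, a pixel row, or nothing at all
```

Whether calling the portions “0” and “1” already smuggles in representation, as Neil suggests, is exactly the question the sketch leaves open: the rule only needs the portions to differ, not to mean anything.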

  14. Bruce:

    So, according to Hohwy, perception is indirect because we perceive, at least in part, our internal representations.

    Clark replies that this does happen, but it happens at the subpersonal level. At the personal level, we have direct contact with the world.

    keiths:

    Or perhaps more accurately, we experience ourselves as having direct contact with the world. The direct/indirect debate in perceptual psychology is all about what’s going on “under the hood” before percepts become conscious.

    Bruce:

    I’m not sure what you mean to add by “we experience ourselves”.

    I’m suggesting that while perception is indirect, we experience it as direct. When we look at the Müller-Lyer illusion, we simply see that one line is longer than the other, out there in the world. In reality, the length disparity is only in our heads. We are perceiving a representation, not the actual lengths of the lines. The perception is indirect.

  15. def keiths_isa?(dork)
    if dork == 'dork'; true; else; true; end
    end
    keiths_isa?("nice guy") # => true

  16. Neil Rickert: Looked at as physical events, these seem to all be chaotic transitions.

    “The question I want to raise is whether this concept of the discrete switch will prove sufficient as a basis for understanding intelligence in brains and creating intelligence in computers.”

    Discrete and continuous processes in computers and brains

    “Insofar as a physical system can be recognized as a switch it is not an objective element, but is itself a model – a subjective interpretation made through some external system with which it interacts.”

  17. walto: I’ve said (several times) that computer languages refer–why wouldn’t they?

    They do only in a limited way. They can refer to computer components (registers, input-output devices, etc). They can refer to memory slots that are defined elsewhere in the program. But, no, they do not refer to anything outside the computer. Such reference depends on assumptions about the I/O associations and the intended use of a program. Hackers break into computers by finding ways of using programs that are other than their intended use.

  18. I had a thought today that bears on some of the confusions between direct and indirect perception: it might be a conflation of two different dimensions of explanation.

    Firstly, we might ask at what level we are looking for cognitive phenomena: agential or subagential? (I leave aside the superagential, since that does not seem to apply to perception.) Secondly, we might ask where cognition happens: is it internal to the body (or brain) of the agent, or more external, in the brain-body or brain-body-environment system?

    Agential internalism: at the level of first-personal awareness and knowledge, psychological phenomena are all or mostly internal to the agent itself. Historical examples: Descartes, Locke, Russell. Contemporary examples: Nagel, Searle.

    Agential externalism: at the level of first-personal awareness and knowledge, psychological phenomena are all or mostly external to the agent itself, by virtue of being either (1) socially distributed, (2) environmentally distributed or (3) both. Historical examples: Hegel, Dewey, Heidegger, Merleau-Ponty. Contemporary examples: Bob Brandom, Evan Thompson

    Subagential internalism: at the level of empirically confirmed (or confirmable) causal explanations of underlying cognitive machinery, all or most cognitive operations are internal (to the brain) of the cognitive system to be explained. Example: David Marr, Jerry Fodor.

    Subagential externalism: at the level of empirically confirmed (or confirmable) causal explanations of underlying cognitive machinery, at least some (and perhaps most) of the cognitive operations are external to the brain of the cognitive system to be explained. Examples: J. J. Gibson, Andy Clark.

    Generally speaking, agential-externalist descriptions and explanations stand in need of subagential-externalist descriptions and explanations, which is why Merleau-Ponty and Gibson seem like a natural fit, and likewise on the internalist side of the story.

    Thus, the debate between Cartesians and pragmatists is a debate at the agential level, whereas the debate between cognitivists and enactivists is a debate at the subagential level.

    Likewise, the difference between Marr and Gibson is a difference as to where visual cognitive processing happens: is it the construction of an inner representation based on scanty data (Marr), or is it the intermittent sampling of visual information in the environment (Gibson)? That’s a debate between subagential internalism and subagential externalism, which is a bit different from asking how “direct” or “indirect” perception is.

    Now that I’ve muddied the waters even further, I depart!

  19. BruceS: So to make it ordinary inference, I am guessing that you think it matters how the model is generated, and that you want models to be generated by real-world agents, possibly living (in KN’s sense) agents, who are also interacting with other agents within a community and a common world so they can develop a conceptual scheme which underlies the agent’s model.

    I haven’t thought about that. But then the term “ordinary” does usually suggest real-world agents (such as people).

  20. Mung: “The question I want to raise is whether this concept of the discrete switch will prove sufficient as a basis for understanding intelligence in brains and creating intelligence in computers.”

    I’m not sure why you brought that up.

    I hope it is clear enough that I am saying that discrete switches are not enough.

    Discrete switching (as in a computer) can deal well with symbolic expression. But the world comes to us unsymbolized. What I have been trying to suggest, in many threads, is that how we symbolize is important.

  21. Neil Rickert: I’m not sure why you brought that up.

    Did you actually read the article? Do you think he argues that a switch is not something physical but rather something logical?

    Given the section of your post I quoted (“Looked at as physical events, these seem to all be chaotic transitions”) did you not find anything in that article at all relevant to what you wrote?

    I brought it up because I thought you might find it interesting. If you didn’t actually read it then I guess I wasted my time. (otoh, maybe someone else read it so it wasn’t such a waste of time.)

  22. Neil Rickert: I hope it is clear enough that I am saying that discrete switches are not enough.

    There’s no such thing as a [physical] discrete switch. I thought you knew that.

    “Looked at as physical events, these seem to all be chaotic transitions.”

    Clearer now?

  23. Mung: Did you actually read the article?

    Yes. I did not download it. I did read what I can see in the browser, though I’m not sure if that is the whole article or just a lengthy summary.

    It is mostly off on a tangent, so not really relevant.

  24. keiths:

    If you’re comfortable with the idea of “neural appliances”, why not a neural appliance that indirectly detects motion?

    Neil:

    If it is an appliance, then what it does is being done directly (no inference involved, just mechanistic rule following).

    You’re assuming that inference cannot be accomplished mechanistically. But that’s beside the point. As I (apparently need to) keep stressing, the word “inference” doesn’t matter — it’s the process itself that matters.

    And, of course, there might be ways that it can be fooled (i.e. that it will detect motion when there is none).

    Indirect perceptionists can explain both the veridical perception and the illusion.

    Motion leads to displacement. By detecting displacement, rather than the motion itself, the “appliance” indirectly detects motion — and it’s correct most of the time. Sometimes, though, the displacement is only apparent. In those cases motion is erroneously “detected” despite being absent.

    How do you, as a direct perceptionist, explain both the veridical perception and the illusion? If the motion is being detected via the resulting displacement, then it is not being detected directly.
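
The indirect-detection story above can be sketched in code (my illustration; the frames and positions are made up): the “appliance” only ever compares positions across frames, so real motion and merely apparent displacement are indistinguishable to it:

```ruby
# The "appliance" never sees motion itself: it sees positions at two
# instants and infers motion from the displacement between them.
def detects_motion?(frame1, frame2)
  frame1 != frame2  # displacement between frames is all it ever checks
end

# Real motion: the ball moved from A to B between frames.
detects_motion?({ ball: :A }, { ball: :B })  # => true

# Apparent motion (phi phenomenon): a ball vanished at A while an
# identical one appeared at B. The appliance gets the very same input,
# so it "detects" motion that never happened.
detects_motion?({ ball: :A }, { ball: :B })  # => true
```

That the two scenarios feed the detector identical data is the point: a displacement-based mechanism is right most of the time and predictably fooled the rest, which is what the indirect account is offering to explain.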

  25. keiths: If the motion is being detected via the resulting displacement, then it is not being detected directly.

    If the temperature is being detected by expansion of the mercury column, then it isn’t being detected directly.

    But it is. I haven’t a clue as to what you mean by “direct perception”, but it isn’t what I mean.

    I’m really getting tired of this harassment. You read what I say as meaning something different from what I intend. And then you repeatedly demand that I defend your reading.

  26. Neil:

    If the temperature is being detected by expansion of the mercury column, then it isn’t being detected directly.

    A thermometer is an instrument, not part of our perceptual machinery. And it’s a direct indicator of the temperature, because the mercury column doesn’t expand or contract unless the temperature increases or decreases.

    On the other hand, suppose a “neural appliance” observes a red ball disappearing from position A and an identical red ball appearing at nearby position B. Maybe the ball moved from A to B, but all our eyes actually see is the (apparent) displacement. If we perceive the ball as moving from A to B, then the motion was perceived indirectly by detecting the displacement.

    Hence my question:

    How do you, as a direct perceptionist, explain both the veridical perception and the illusion? If the motion is being detected via the resulting displacement, then it is not being detected directly.

    Neil:

    I’m really getting tired of this harassment.

    What harassment? I’m disagreeing with you and explaining exactly why, at a site called The Skeptical Zone. Why do you think you should be exempt?

    Meanwhile, you are making false accusations and refusing to back them up — and then going on to accuse me of harassment. Don’t be a hypocrite, Neil.

    Neil:

    You read what I say as meaning something different from what I intend. And then you repeatedly demand that I defend your reading.

    Then defend your own statement:

    The eye moves in saccades. As the eye moves, the path to a particular retinal receptor sweeps across the visual field. This results in sharp signal transitions as the path crosses an edge. My view is that the perceptual system uses these transitions to locate features in the visual field. I don’t see how vision would be possible without that. The designers of bar code scanners use the same idea to locate bar codes.

    What you wrote is wrong, and Lizzie and I have explained why. If you disagree with us and the neuroscientific community, then make your case. Cite some studies. Provide some evidence. Present an argument.

    Or if you agree that you were wrong, then say so. Take responsibility for your claims even when they turn out to be incorrect.

  27. keiths:
    A thermometer is an instrument, not part of our perceptual machinery. And it’s a direct indicator of the temperature, because the mercury column doesn’t expand or contract unless the temperature increases or decreases.

    A thermometer is an instrument designed for a purpose. It was designed for the purpose of becoming part of our perceptual machinery. A thermometer is not a direct indicator of temperature.

  28. Mung: keiths:
    A thermometer is an instrument, not part of our perceptual machinery. And it’s a direct indicator of the temperature, because the mercury column doesn’t expand or contract unless the temperature increases or decreases.

    Hmm.

    I’d say a mercury thermometer is a direct indicator of the temperature of the mercury in the thermometer, assuming it is at thermal equilibrium. It is an indirect indicator of the temperature of the medium surrounding it, assuming thermal equilibrium.

    Where does that get us?

    I’m away for a day or two

  29. keiths: What you wrote is wrong, and Lizzie and I have explained why. If you disagree with us and the neuroscientific community, then make your case. Cite some studies. Provide some evidence. Present an argument.

    As I said, Neil’s example is probably wrong, but his point is sound. Eye movements are critical to perception – not for “edge detection”, as it happens, but for perceiving the scene, as opposed to identifying small parts of it. You even need a saccade to perceive a longish word.

    When the saccadic system doesn’t function properly, reading becomes difficult, and the letters become jumbled. This sometimes happens in dyslexia, for instance (though it is not as typical a feature as sometimes claimed). In fact it’s what my PhD was on – anomalies of visual attention in dyslexia, including saccadic abnormalities.

    And not just eye movements, I would argue – I’d say that our whole perceptual and attentional system entails the making of forward models, which then generate further-data-procuring action (feeling, touching, reaching, grasping, head turning, even ear-turning). Without action, perception would be extremely limited.

    Even where no action is possible (in paralysis, for instance), or in contexts where no action takes place (an intended but unexecuted saccade, for instance), motor programs are involved. We know that the brain areas involved in perception include those involved in motor control. One of the things we are seeing very clearly in our current batch of studies is the extent to which the motor system is involved in attention even where no motor response turns out to be required. We deliberately designed a paradigm in which the difference between “relevant” and “irrelevant” stimuli was not confounded by having to make a motor response to one and not to the other. Yet we still see a massive motor-response-like brain response to the “relevant” stimulus – i.e. to the stimulus that might require a response. Because that response is part of the mechanism of perception (much less perception is required for the “irrelevant” stimuli).

  30. I’m not sure if Neil agrees with my view above.

    But I certainly agreed with him when he said:

    Neil Rickert: I’m closer to the first of those positions. No, I don’t say that perception is action, but I do say that perception is behavior and involves actions.

    At the very minimum, I would say that perception involves the activation of motor programs for action, even if those do not reach execution threshold. And normally at least some of them do. The “orienting” response is a motor response, and without orienting, perception would be a limited thing. We do not merely perceive small objects – we perceive entire visual scenes. Indeed, we perceive not only visual scenes but a 3D world, full of objects with sounds, textures, properties like weight and graspability and reachability.

    And, again like Neil (if I’ve got his view right) – I think this is what we perceive because this is how we parse the world. I don’t think we can say the world IS like this – it’s our model of it. It doesn’t come apart at the joints until we approximate some joints.

  31. @KN
    Your “account of cognition” and terminology in general ceased to be interesting when it became clear how little you care about solving your self-contradictions. I can conclude that you have nothing coherent to say in support of your thesis of “heterosexual privilege” and why it’s “good” to get rid of that supposed privilege.

    What’s still somewhat interesting is how you reject “dogmas of traditional philosophy” while operating within the very same categories. You are groping in the dark, in other words.

    Kantian Naturalist:
    If my account of the grades of objective cognition is adequate, then we can systematically exploit our own cognitive structures and processes in order to disclose the underlying real patterns…

    The funny thing about this statement is that natural kinds were not supposed to be real according to you, yet here you are trying to “disclose the underlying real patterns”. Care to explain how “underlying real patterns” are something totally different from Aristotelian natural kinds, Platonic forms, and other such traditional categories?

  32. Neil Rickert:

    It seems to me that Piccinini is smuggling in representations when he mentions “differences”, “different portions” and perhaps also “a rule defined …”.

    Maybe he is not requiring that the computer see them as representations, but he seems to be requiring that of us when we decide if something is computation.

    Yes, we have to use representations based on the definition to understand where it applies.

    But as I think you hint in your second sentence, that is not the same thing as the computing mechanism itself necessarily involving representations.

    It is use of representations by the computing mechanism that is the key point, I think.

    I don’t think Piccinini is saying that computations never use representations. He is saying that it is better to separate the concepts rather than mix representation into the definition of computation.

    He has a second reason for not mixing two separable concepts. Mathematics already has a significant body of theory relating to abstract computation which does not use representation in the definition. That theory should be respected as the starting place for definitions of physical computation.

  33. Neil Rickert: But, no, they [computer languages] do not refer to anything outside the computer.

    In order to refer, words (or thoughts) need only refer to things outside themselves–those words (or thoughts).

    ETA: It would have been better if I’d put that as “In order to be a language, something must contain words that need only refer….” In any case I wasn’t disagreeing with Neil about the limitation of computer language reference, just commenting that I don’t see why that should count as a disqualification for being a genuine–even though derivative–language.

  34. Kantian Naturalist:

    Subagential externalism: at the level of empirically confirmed (or confirmable) causal explanations of underlying cognitive machinery, at least some (and perhaps most) of the cognitive operations are external to the brain of the cognitive system to be explained. Examples: J. J. Gibson, Andy Clark.

    I think that Clark believes some of the vehicles are external to the brain/body, but I’m not so sure about the operations. Maybe if you include the operations of computers being used as cognitive scaffolding.

    Also, I think he restricts the constitutive elements of phenomenal experience to the brain/body. He has a paper speculating they are something like “practical knowledge of our possibilities for action in an action-space of potential, coarse-grained action dispositions, at the level of intention, not motor control”. (paraphrased!).

    The actions in the action-space are representations, likely of the Bayesian pdfs as discussed in his other paper.

  35. keiths:
    We are perceiving a representation, not the actual lengths of the lines. The perception is indirect.

    In the following, I’m assuming “we” is the agential level (as per KN’s posts) and perceiving includes the phenomenal experience we have. I’m not talking about any processing that happens sub-agentially.

    Whether we perceive representations is a standard issue in the philosophy that tries to explain phenomenal experience at the agent level. Can we perceive representations? Or do we always look through them to what we are perceiving? The latter view is called “transparency”.

    I’m still working through my views, so I won’t comment further. The SEP article has details if you are interested.

  36. That’s exactly right, Bruce. That’s the diaphanousness question. I think the phrase came from G.E. Moore, but the classic paper on it is by Harman. FWIW, I don’t think it’s correct to say that we perceive representations–whether or not the things that seem to us to be moving are actually moving.

  37. walto: It’s not a matter of temporal priority either (where are you getting this stuff?) It’s simply that I don’t agree that reducibility is symmetric, in spite of translatability being so. There are natural languages and then there are languages, like pig-latin or computer code, that are constructed out of them. It’s a one way thing, analogous to baseballs/molecules and molecules/atoms.

    OK.

    I’m going to make one more comment and then let you have any last word if you want.

    It’s the word “reducible” where I am hung up, I think.

    I agree that people have to have natural language first and so formal languages could only be constructed “out of” that pre-existing situation. So if that is what you mean by reducible when you say “Hall adds that the formal one isn’t really a language unless it’s reducible in some manner to a natural one.” then I am fine with that.

    But if you don’t mean that for “reducible” then I don’t understand you. I take reducible as meaning translatable to simpler concepts. And I don’t see that as applying, as long as we restrict the domain of translation to the one the computer language applies to. Anyway, over to you if you want.

  38. walto:
    That’s exactly right, Bruce. That’s the diaphanousness question. I think the phrase came from G.E. Moore, but the classic paper on it is by Harman. FWIW, I don’t think it’s correct to say that we perceive representations–whether or not the things that seem to us to be moving are actually moving.

    Yup, read the Harman one and some Tye stuff as well as various intro level and SEP stuff.

    It’s the Block “mental paint” material I need to work through, since he (despite Harman) claims mental paint exists.

    Another philosopher I find congenial, Prinz, also has some concerns with the argument that transparency leads to the conclusion that “the content of qualia is exhausted by representation”. I need to spend more time with him to understand that, though. I find his views generally appealing (like Clark’s), so I will get around to that.

  39. Yes, I think Block’s work is the most important to deal with for those wanting to hold to the transparency claim. He convinced Putnam on this issue (which I take to be a fairly big deal). When I retire (in about a year and a half), I hope to concentrate on those Block papers as well as Putnam’s congratulatory work–I think he’s got a book he’s coauthoring on perception in the works. I should note, though, that the Brits working on this stuff, like Travis, don’t seem to agree with the “Block is important” assessment. I can’t really understand that (or them, honestly). I correspond with a couple of them (Guy Longworth and Keith Wilson), but it’s like we’re in different worlds.

    One thing I mentioned in my disjunctivism paper that I want to reiterate here: if direct perception is supposed to require that perception can NEVER be indirect, I don’t think it’s true. That this is so, we can learn from mirrors, TVs, recordings, etc.

  40. BruceS: OK.

    I’m going to make one more comment and then let you have any last word if you want.

    It’s the word “reducible” where I am hung up, I think.

    I agree that people have to have natural language first and so formal languages could only be constructed “out of” that pre-existing situation. So if that is what you mean by reducible when you say “Hall adds that the formal one isn’t really a language unless it’s reducible in some manner to a natural one.” then I am fine with that.

    But if you don’t mean that for “reducible” then I don’t understand you. I take reducible as meaning translatable to simpler concepts. And I don’t see that as applying, as long as we restrict the domain of translation to the one the computer language applies to. Anyway, over to you if you want.

    “Reducible” is a toughie, certainly. I’m not sure I can define it, anyhow. I have a sense that your first paragraph isn’t quite sufficient to capture it, but in your second one, the “simpler” concerns me. I don’t know that the language to which something is reduced must be “simpler”–I also don’t know how I’d define “simpler.” I think we’ve discussed this before, and I put in a passage from a paper in my book. It’s a really hard issue, I think–for me, anyway.

    But I think we’re generally on the same wave-length here. Just take the difference between English and pig-latin English. It’s…..THAT.

  41. keiths:
    Neil:

    A thermometer is an instrument, not part of our perceptual machinery. And it’s a direct indicator of the temperature, because the mercury column doesn’t expand or contract unless the temperature increases or decreases.

    On the other hand, suppose a “neural appliance” observes a red ball disappearing from position A and an identical red ball appearing at nearby position B. Maybe the ball moved from A to B, but all our eyes actually see is the (apparent) displacement. If we perceive the ball as moving from A to B, then the motion was perceived indirectly by detecting the displacement.

    Hence my question:

    Neil:

    What harassment? I’m disagreeing with you and explaining exactly why, at a site called The Skeptical Zone. Why do you think you should be exempt?

    Meanwhile, you are making false accusations and refusing to back them up — and then going on to accuse me of harassment. Don’t be a hypocrite, Neil.

    Neil:

    Then defend your own statement:

    What you wrote is wrong, and Lizzie and I have explained why. If you disagree with us and the neuroscientific community, then make your case. Cite some studies. Provide some evidence. Present an argument.

    Or if you agree that you were wrong, then say so. Take responsibility for your claims even when they turn out to be incorrect.

    I honestly don’t think you can enjoy yourself on the internet unless you are haranguing someone. Why not just put up a little sign by your computer that says “Hah, they just won’t admit that I’m right and they’re wrong, even though they know it!” Then, instead of posting you can just stare at your sign or read it aloud or something. I have the distinct sense it would make the world a slightly more pleasant place for everyone.

  42. walto:
    Yes, I think Block’s work is the most important to deal with for those wanting to hold to the transparency claim. He convinced Putnam on this issue (which I take to be a fairly big deal). When I retire (in about a year and a half),

    One thing I mentioned in my disjunctivism paper that I want to reiterate here: if direct perception is supposed to require that perception can NEVER be indirect, I don’t think it’s true. That this is so, we can learn from mirrors, TVs, recordings, etc.

    I cannot really follow the nuances of Putnam’s color posts in his blog, but I hope Block will be simpler. AFAIK, he is a brain/mind identity theorist, at least for qualia, and he thinks we don’t (yet) have the concepts for explaining that identity. It’s brute. We can only investigate it by identifying correlations between third-party measurements of brain states/processes (e.g., with fMRI) and simultaneous subjective reports.

    I don’t know if that is part of Putnam’s assent to Block’s views.

    On alternatives to representationalism: the person who is trying to revive adverbialism is interviewed in a Nautilus article. Something more to read at some point.

  43. walto,

    I honestly don’t think you can enjoy yourself on the internet unless you are haranguing someone.

    Yeah, what’s wrong with that keiths guy? He actually thinks it’s okay to be skeptical at The Skeptical Zone, even of the people on our side. He also has the gall to respond to false allegations. The next thing you know, he’ll be haranguing us by presenting arguments for his positions!

  44. Bruce,

    Saying that we perceive representations raises a standard issue in the philosophy that tries to explain phenomenal experience at the agent level. Can we perceive representations? Or do we always look through them to the things we are perceiving? The latter view is called “transparency”.

    I would say that we indirectly perceive objects in the world by directly perceiving their representations, and that the representations aren’t “transparent” — we don’t “see through them” to the actual objects. It’s just that in a healthy person, the representations are constrained by the objects they’re representing.

    When we have dreams or hallucinations, the representations are there, but the objects are not. If our representations were transparent, then a hallucinating person would always be able to look “through” them and see the reality on the other side.

  45. BruceS: I don’t think Piccinini is saying that computations don’t use representations in some cases. He is saying it is better to separate the concepts rather than mix representation into the definition of computation.

    On first read of your quote from Piccinini, I thought he was trying to do away with representation. But then, on second thought, it occurred to me that maybe he was only trying to give a physical characterization of what counts as computing. That’s why I added that second sentence.

    I’m not sure that I buy it. On the other hand, I don’t think it matters. Some people say “Computers don’t compute; they are just electrical appliances.” And I’m sympathetic to that view. On the other hand, I know perfectly well what people mean when they describe a computer as computing — and I use that language myself.

    My disagreement with computationalism is on an entirely different point.

  46. walto: ETA: It would have been better if I’d put that as “In order to be a language, something must contain words that need only refer….” In any case I wasn’t disagreeing with Neil about the limitation of computer language reference, just commenting that I don’t see why that should count as a disqualification for being a genuine–even though derivative–language.

    In ordinary life, taking both natural languages and computer languages to be languages is entirely reasonable. It is when one gets technical that the huge differences become important.

    When teaching a formal languages class (in the computer science department), I typically tell students that if a language is what I have just defined, then English isn’t a language. I think they get that this is intended as irony, not as a denial that natural languages are languages.
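    For readers outside computer science, the textbook definition alluded to here can be made concrete in a few lines: a formal language is simply a set of strings over a finite alphabet, given by a precise membership test. The sketch below is hypothetical and not from the thread; it uses the classic example { aⁿbⁿ : n ≥ 0 } over the alphabet {a, b}.

    ```python
    # A formal language, in the textbook sense, is a set of strings over
    # a finite alphabet. Here the alphabet is {'a', 'b'} and the language
    # is { a^n b^n : n >= 0 } -- a standard context-free example.

    def in_language(s: str) -> bool:
        """Membership test: is s of the form a^n b^n?"""
        n = len(s) // 2
        return s == "a" * n + "b" * n

    print(in_language("aabb"))  # a^2 b^2 is in the language
    print(in_language("ba"))    # wrong order, not in the language
    ```

    The contrast with English is exactly that no such crisp membership predicate exists for a natural language, which is why, by this definition, English “isn’t a language.”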

  47. keiths: I would say that we indirectly perceive objects in the world by directly perceiving their representations, and that the representations aren’t “transparent” — we don’t “see through them” to the actual objects. It’s just that in a healthy person, the representations are constrained by the objects they’re representing.

    That’s the classic position of sense-data theorists, indirect realists, and phenomenalists. It’s a very respectable neighborhood. The problem is to figure out what the position is called, and that’s been sort of a horror story on wheels. Neil (along with others) has called it “representationalism,” but representational theories of consciousness often (maybe even generally) do not require that the representations be perceived: in fact, they usually oppose that view.

    I called my book “The Roots of Representationism” and much of it is devoted to Hall’s critique of representationALism as Neil is using it above. This problem of naming the school is, I think, one of the reasons that the Brits and Yanks often have no idea what the other guys are talking about.

  48. keiths:
    walto,

    Yeah, what’s wrong with that keiths guy? He actually thinks it’s okay to be skeptical at The Skeptical Zone, even of the people on our side. He also has the gall to respond to false allegations. The next thing you know, he’ll be haranguing us by presenting arguments for his positions!

    For the record, keiths, repeating “I’m right and you’re wrong! Admit it!” is nothing anybody but you would call a sign of skepticism.

  49. And for the record, walto, I don’t do that.

    When I disagree with someone’s position, I ask them to defend it.

    Example from this thread:

    How do you, as a direct perceptionist, explain both the veridical perception and the illusion? If the motion is being detected via the resulting displacement, then it is not being detected directly.

  50. Bruce,

    Looking at the SEP article, I see Harman’s characterization of transparency:

    “Look at a tree and try to turn your attention to intrinsic features of your visual experience. I predict you will find that the only features there to turn your attention to will be features of the presented tree, including relational features of the tree ‘from here’.”

    If that is what is meant by “transparency”, then I agree that representations are transparent. It’s what I was getting at earlier when I wrote that “we experience ourselves as having direct contact with the world.”

    But “transparent” is an unfortunate word choice in that case, because we don’t “see through” the representation to the object being represented. We see the representation as if it were the object out there in the world. It’s not really transparent in the standard sense of the word.

    ETA: A better metaphor might be that we “project” the representation out onto the world.
