The title is from a blog post by Brian Leiter. Leiter links to an article in the LA Review of Books: Imitation and Extinction: The Case Against Reality. The article is written by Donald Hoffman.
We have discussed the general topic before, in several threads. So maybe this is a good time to revisit the topic.
Hoffman asks: “I see a green pear. Does the shape and color that I experience match the true shape and color of the real pear?”
My take is that there is no such thing as the “true shape and color of the pear.”
It is a common presumption that there is an external standard of truth. Here, I mean "external to humans". Truth is presumed to come from somewhere else, and our perceptual systems evolved to present us with what is true.
As I see it, this is backwards. Yes, our perceptions are mostly true. But this is not because perception is based on truth. Rather, it is because our human ideas of truth are based on what we perceive.
Open for discussion.
I'm not sure what you mean by "arbitrary". If you mean, why do we experience some objects as spherical and some as cubical, or some as blue and some as red, then that gets into the hard problem. I see that as separate from the question of veridical perception raised by the fitness-payoff arguments in his paper.
See Figure 5 in his Psychonomic paper for why fitness payoff functions can come apart from veridical perceptions of reality.
I understand the desktop icon illustration this way: we can change files (patterns of magnetism on hard drives) successfully by manipulating icons on computer screens, without having any notion of the real world of hard drives. The same applies to acting successfully in the world.
He has a counter to people who say that successful action can only be explained by veridical perception, or, in my terms, by perception that creates neural structures that are similar to causal structures in the world.
In reply, he says something like this: all we can draw conclusions about is the functional composition of acting and perceiving. We cannot draw separate conclusions about perception from successful action (at least, that is how I understand him). He uses that to get around the objection that he is depending on evolution being a theory about reality; he says it need only be a theory covering acting and perceiving together.
The hard problem of consciousness is in connecting the biology with the philosophy. The biology is hard, but that’s an ordinary hard. The real problem, and the real difficulty, is due to the philosophy.
First, I should clarify that isomorphism of structures is not a standard way to approach truth in philosophy. Instead, truth is usually taken to be a property of language, either ordinary written/spoken language, or a theoretical language of thought used in mental representations.
I brought up Hoffman’s use of isomorphism to point out it is a binary notion in the same way that true/false is.
Your example mentions sensations, which again brings in the hard problem. So leave that aside and focus instead on frequency, and consider it as being solely a mind-independent property of sound waves. In that case, I would say that, yes, that notion of isomorphism is what Hoffman was referring to in his “Critical Realism” definition of veridical perception.
(Just for fun: there are philosophers who think that phenomenal experience is nothing more than a representation in some technical sense discussed here:
https://plato.stanford.edu/entries/qualia/#Repqualia
)
Since the people who talk about the hard problem contrast it with the one you think is unsolvable, I can see why you don't understand the hard problem in the way they do. I guess it cannot get harder than unsolvable in principle.
(Perhaps your view is C. McGinn's mysterianism: it says that qualia are nothing more than brain processes, but the human intellect is incapable of showing that scientifically.)
Here’s an article by the neuroscientist Anil Seth on how scientists should approach both:
https://aeon.co/essays/the-hard-problem-of-consciousness-is-a-distraction-from-the-real-one
I did not find that at all useful. But I expect that you won’t be surprised.
You clearly haven’t listened to or read anything Hoffman says.
Neil:
I hope it at least taught you and Alan what the hard problem is:
Do you think I should just buy whatever the guy wrote because he wrote it?
keiths,
Good grief!
keiths already mentioned the obvious objection: The world is not homogeneous. The dividing up uses information from outside us.
Then I should have used a different word, but that doesn’t affect my argument. We get information from our senses: light, sound, touch. It becomes “cat”, “car”, “cake” etcetera in our minds. On that we agree. But we still need there to be cats, cars and cakes (or whatever you call them or divide them up) outside of us for us to sense them.
What source? There is an interaction between us and the outside world. We need both. I don’t need to be a theist or dualist to recognize that fact.
I’d agree it is pragmatic, but don’t see why that excludes veridical.
That’s already been mentioned: because it allows animals to correctly interact with their environment during foraging, hunting, mating, etc.
To me, it read more like the problem is the inaccurate mapping of the unimportant part (e.g. the multiple red, green and yellow bars in figure 3 of his manuscript).
Alan,
Why the exasperation? Based on what the two of you wrote, neither of you understands what the hard problem is.
Neil:
Alan:
Seth explains it succinctly:
Corneel,
Figure 3 is veridical with respect to payoffs but non-veridical with respect to resource quantity. Figure 2 is non-veridical with respect to payoffs but veridical with respect to resource quantity.
It can be seen either way, but Hoffman privileges resource quantity over payoffs when determining veridicality. He never justifies that choice.
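The contrast between tracking payoffs and tracking resource quantity can be made concrete with a toy simulation. This is my own construction for illustration, not Hoffman's actual code; the payoff curve, the four-category perceptual bottleneck, and the foraging rule are all assumptions. A resource quantity in [0, 100] has a non-monotonic payoff (peaking at 50, like water or salt), and two perceivers are each limited to four perceptual categories: one whose categories track quantity ("veridical") and one whose categories track payoff ("interface"). Under that bottleneck, the payoff-tuned perceiver forages better on average:

```python
import math
import random

def payoff(q):
    # Non-monotonic payoff: too little or too much resource is bad,
    # peaking at q = 50 (a Gaussian with sigma = 15).
    return math.exp(-((q - 50.0) ** 2) / (2 * 15.0 ** 2))

def truth_percept(q):
    # "Veridical" strategy: 4 categories ordered by resource quantity.
    return min(int(q // 25), 3)

def fitness_percept(q):
    # "Interface" strategy: 4 categories ordered by payoff.
    return min(int(payoff(q) * 4), 3)

def forage(percept_fn, trials=100_000, seed=0):
    # On each trial, two patches appear with random quantities; the
    # perceiver picks the patch in the higher perceptual category
    # (ties broken at random) and receives the patch's true payoff.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        a, b = rng.uniform(0, 100), rng.uniform(0, 100)
        pa, pb = percept_fn(a), percept_fn(b)
        if pa > pb:
            total += payoff(a)
        elif pb > pa:
            total += payoff(b)
        else:
            total += payoff(rng.choice([a, b]))
    return total / trials

print("quantity-tracking perceiver:", forage(truth_percept))
print("payoff-tracking perceiver:  ", forage(fitness_percept))
```

The quantity-tracking perceiver prefers "more", which is actively misleading when payoff is non-monotonic; the payoff-tracking perceiver wins despite lumping together low and high quantities. That is the shape of Hoffman's fitness-beats-truth claim, and also of the objection above: which strategy counts as "veridical" depends on whether you privilege quantity or payoff.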
Given that you hold neuroscientists in the same high regard that you hold philosophers: no, your reaction does not surprise me at all.
Ok, you and Entropy both seem to not be paying attention to what Hoffman is claiming.
What Hoffman is saying is the exact opposite of what you just said. He says that we CAN'T know reality, because knowing reality is a fitness DISADVANTAGE, not a fitness advantage! So whatever you think you know is wrong, because natural selection has made it so that what you think is reality is always going to be wrong.
So now rethink this. Entropy's answer to this is, well, then we just have to learn more about reality, that's all, which is what Hoffman is saying we can't do, and never will do. What you think you see is wrong, and actually doesn't even exist when you are not perceiving it. We construct what we think is reality inside our brains, but it's a false reality.
So acting correctly, according to Hoffman, doesn't mean acting accurately. Nothing is accurate. It's just convenient for survival, but it is not real.
The problem for Hoffman then becomes: why does he think his analysis of this is correct, since we can never be correct, we can only think we are correct, which is illusory.
This presupposes there is a coherent thing called consciousness.
IMO it is meaningless to talk about veridical perception. The process of perception can no more be true or false than eating an apple can be true or false. What can be true or false is our interpretation of that which is perceived. Our cognition of and our thinking about the experience may or may not accord with reality.
I agree with you except for perception being on a scale of accuracy rather than being true or false.
Can someone point to the exact link in the great chain of being where consciousness ceases?
Neil is to Gibson as Charlie is to Steiner (and maybe as I am to Hall?). There’s stuff they (we?) can’t give up without loss of….too much.
11/9/2016
If there is a hard problem of consciousness, it depends on how far one is willing to accept Chalmers’ intuition — and that’s all it is! — that we can conceive of beings that are computationally identical to us but which lack “qualia.”
The entire problematic rests on (1) Turing machine functionalism as the correct framework for explaining cognitive processes such as perception, memory, decision-making, and acting and (2) the conceivability of beings that are identical to us in terms of their computational functions but which lack our “qualia”.
This of course also depends on our ability to coherently grasp what the heck “qualia” are — an assumption that Dennett quite nicely calls into question.
Only with those assumptions firmly in place does it then make sense to ask how qualia fit into a natural world that can otherwise be explained in terms of structures and functions (physical, chemical, metabolic, computational, etc.)
And that is Chalmers’ question.
The reason why Chalmers thinks that qualia cannot be explained in terms of computational functions is that he has an extremely demanding standard for what counts as an explanation. On his view, and that of other Australian metaphysicians (David Lewis, Frank Jackson, etc.), explanations involve laws, and laws involve necessity. So if we could explain qualia in terms of structures and functions, then we would understand why certain structures and functions necessarily give rise to qualia.
But if that were the case, then we would not be able to conceive of a case in which those same structures and functions did not give rise to qualia — because the contrary of a necessity is an impossibility, and you can't conceive of things that are impossible.
In other words, "the hard problem of consciousness" as Chalmers invented it depends on all sorts of really complicated and problematic assumptions — all of which are explicit in his book, but it's a dense piece of philosophical reasoning from beginning to end.
Unlike Chalmers, I don't think that explanations must disclose necessary laws, and I don't think that Turing machine functionalism is the right story of cognition. I don't think we have a good theory yet that grounds cognition in neurobiology. Hierarchical predictive processing is better than the old-style machine functionalism of Putnam and Fodor, but it has really serious theoretical flaws and we still don't know if it's biologically plausible.
So I think that Seth is right to ignore the hard problem of consciousness — it hearkens back to a time when much more was assumed to be settled in cognitive science than is currently the case.
I read that figure as saying that some of our perceptions do not accurately report the true structure of the world, because they map onto multiple world states. This is expected to concern world states that are not associated with high pay-offs: the non-interesting parts of the world, in fitness terms. I don't know about you, but this is very abstract to me, and I would have appreciated some examples.
No, that simply conveys that we are unaware of certain true aspects of the world. For example, we cannot see that many flowers have patterns in the ultraviolet. Pollinating insects do see those colours, because it is important for their foraging behaviour. But that is just missing information, it is not misrepresented.
Sorry if I appear to be a bit dumb here, but I really have trouble seeing the link between the mathematical treatment of perceptual strategies and the relevance to our perception.
I’m speculating that the more we learn about brains, the less we will know.
I suppose there could be a tipping point.
But this is a mistake.
A photon does not come with a tag saying “this is a cat photon”.
Saying that there are cats does not help at all in deciding whether you are looking at a cat.
The only thing that can work, is for you to divide the world up into parts (cats, cars, etc). That it can be said that cats and cars exist, is of no help in doing this dividing.
The world must be such that we can find reliable (or repeatable) ways of dividing it up into parts. But we don’t even need to assume that. We just go about finding reliable ways of dividing up the world as best we can.
KN,
We can certainly conceive of them, and we call them “p-zombies”. The question is whether they are actually possible.
Even if you could prove their impossibility, you’d still be faced with the hard problem.
Neil,
It’s structured, in other words. Contrary to your claim.
I have not read that paper closely enough to discuss that in detail. Possibly his discussion of Figure 2 and critical realism will help you, assuming you want to spend more time with the paper than I do.
I take his point to be that all of our perception is likely to work that way, and so his analogy and his conclusions apply to all of our perception as far as we can tell.
I think he claims that link is through the EGT models and the simulations using the GA that he describes on p. 1487, which rely on those math models of perception and payoff. But, sure, his reliance on math instead of empirical evidence is troubling; it reminds me of the ID info theorists who disprove biology based on math theorems. One of the critiques in the Psychonomic issue takes him to task on this deficiency (it's about our perception of shapes and the experiments which test our perception of them).
Just to be clear: I am only trying to explain his reasoning as I understand it. I don’t agree with him.
That’s it for me on Hoffman.
Corneel,
If you look more closely at the figure, it isn't just the low-payoff (red) states that get lumped together. That's true for intermediate- and high-payoff states too, which is why Hoffman writes:
Note the reference to “objective” world states. He’s claiming that world states are somehow objective while payoffs are not. In reality, the payoffs are just as objective as the world states.
Maybe you can conceive of them. I certainly can’t, and I’ve read Chalmers’ book.
I don't know what it would take for someone to demonstrate that I really can conceive of them if I just took the right steps, nor, conversely, what it would take to show that people who think they are conceiving of p-zombies (like you and Chalmers) aren't really conceiving of them.
I don’t think this is entirely right: the conceivability of p-zombies is what generates the hard problem, given some additional assumptions: that whatever is conceivable is possible and that explanations track necessities.
In other words, Chalmers's argument comes down to this: if we could explain qualia in terms of structures and functions, then we would understand why zombies are not possible. But zombies are conceivable, and therefore are possible. Therefore we cannot explain qualia in terms of structures and functions.
Turing couldn’t conceive of p-zombies.
KN,
I think you already did conceive of them when you wrote “beings that are computationally identical to us but which lack ‘qualia.'” That’s the concept, and you held it in your mind. Therefore you conceived of it.
KN,
Not really. Even if no one had ever conceived of p-zombies, we still would have noticed that some physical systems are apparently conscious (in the required sense) while others apparently aren’t. We would still need to explain why it’s like something to be an oriole but not to be a lug wrench. That’s the hard problem.
I don’t think it’s difficult to conceive of impossibilities such as perpetual motion machines.
petrushka,
Source?
Good point!
Therefore “qualia” are another example of a reified concept.
ETA:
Maybe “a quale” is an example etc!
Neil,
Obviously, but photons nevertheless carry large amounts of information. People use that information to identify cats and other objects in the outside world.
If you stop the flow of photons, you stop the flow of information they carry.
The photons in the optical fiber from my ISP contain information. But most of the photons that we interact with contain zero information — they are not part of a communication channel.
That’s silly.
Information isn’t limited to artificial communications channels. Our senses also deliver it to us.
That would be a nomological impossibility — impossible under the laws of nature in our universe. The zombie argument against physicalism can accept that situation and still conclude physicalism is false.
What about a circle with a rational ratio between circumference and diameter? Is that conceivable?
If the answer is yes, that it is conceivable to have such a circle, then there can be no argument from conceivability to possibility, because of that counter-example. So if one wants to argue for zombies by the logical chain conceivability -> possibility -> zombies are possible, you need a stricter definition of conceivability.
More at SEP
https://plato.stanford.edu/entries/zombies/
I assume we are talking about Shannon information.
What do you mean by “our senses deliver it to us”? If by “our senses” one means a bottom-up causal chain from photons interacting with the retina to neurochemical processes in the brain, where does the probability distribution come from?
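The point that Shannon information is only defined relative to a probability distribution can be made concrete with a toy example (my own illustration, not from the thread): the same four-symbol alphabet carries very different amounts of information per symbol depending on which source distribution we assume for it.

```python
import math

def entropy(probs):
    # Shannon entropy in bits: H = -sum(p * log2(p)).
    # Terms with p = 0 contribute nothing by convention.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Same four-symbol alphabet, two different assumed source distributions.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed  = [0.97, 0.01, 0.01, 0.01]

print(entropy(uniform))  # 2.0 bits per symbol
print(entropy(skewed))   # ~0.24 bits per symbol
```

Nothing about the physical signal changed between the two cases; only the assumed distribution did. That is the force of the question above: if "our senses deliver information" is meant in the Shannon sense, something has to supply the probability distribution.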
That reads as if you are characterizing Chalmers' argument as being about the explanatory gap, i.e. epistemic. But I think he means it as metaphysical.
Good, that at least makes sense to me. The computer screen icons analogy confused me in thinking otherwise, because we always need some representation (icons) to represent the true structure of the world. There is no other way to conceive of them.
So I understood. Much appreciated.
It seems Lizzie has appointed a zombie to be a moderator.
Perhaps that explains her disappearance?
He indeed claims that our perceptions have been shaped to hide the truth, by positing some fitness cost to the processing of irrelevant information.
That's black-and-white. Note that perception of the structure of the world that is important for fitness does get selected for accurate (veridical) representation, and my gut feeling tells me a lot of non-fitness-related stuff will be represented accurately as well, because it requires the same type of processing. Truth be told, apart from some trivial examples like optical illusions, I can't think of any examples where our perception hugely misguides us.
That objection is addressed explicitly in the paper. I believe it echoes Entropy’s response:
and
Doesn't that simply follow from defining W to be the space of true states of the world and the 1:1 mapping of those states to be veridical perception? It's sort of baked into his definition.
But why would that be a successful strategy if it was not grounded in some matching patterns in the world outside us? I simply have a hard time conceiving of a different scenario than that there really is correspondence between perceived objects and true structures in the world.
That is a good point.
Asking myself, ‘what is the difference between my visual perception and that of, say, an eagle?’ The primary difference is that the eagle has far sharper vision than I do. But when we move beyond basic sense perception to how each subject deals with perception there is a far greater difference. Raptors remain in a state of what Barfield termed, ‘original participation’. They act on their perceptions without any reflective thinking. But because we humans have reached the stage of ‘onlooker consciousness’ we do not just act in the moment which is of the nature of ‘original participation’, we reflect on our perceptions.
Because I am a self-conscious, reflective human being I know that I have the ability to increase my range of vision to a far greater degree than any raptor. With the right instrument (e.g. telescopes and microscopes) I can peer at objects outwith the range of any bird.
But we can go even further in the way we deal with our perceptions. We are able to hold them in our minds, group and combine them in sequences that do not just remain within the moment but give us awareness of their reality in relation to past, present and future (I am what was, what is, and what is to come). Our self-conscious, thinking egos allow us to have this ability if we would but use it. This leads us along the road to what Barfield termed, 'final participation'. And it is in this way that Goethe said he was able to perceive the archetypal plant: the plant in its reality, not just as the static entity that comes to mind when we think of an organism's name under the Linnaean classification system. We can think of an organism as a time-being with a truly dynamic, ever-changing, physical nature.
We share our sense perceptions with animals, but as individuals we can achieve higher perceptions that are not available to animals. And language is the outer expression of this inner ability which we have.
Asserting that misperceptions prevent science is a bit like saying noise prevents useful radio broadcasting, or useful sound and video recording.