The title of this thread states one of my non-traditional ideas about cognition. And if I am correct, as I believe I am, then our relation to the world is very different from what is usually assumed.
The traditional view is that we pick up facts, and that most of cognition has to do with reasoning about those facts. If I am correct, then there are no facts to pick up. So the core of cognition must be engaged in solving the problem of constructing useful facts about the world.
I’ll start with a simple example. I typed “Chicago Coordinates” into Google, and the top of the results page showed:
41.8819° N, 87.6278° W
That’s an example of what we would take to be a fact. Yet, without the activity of humans, it could not exist as a fact. For it to be a fact, we first had to invent a geographic coordinate system (roughly, the latitude/longitude system). And that coordinate system in turn depends on human conventions. For example, the meridian through Greenwich was established as the origin for longitude.
That fact also depends on a naming convention, which designates “Chicago” as the name of a particular city. And it depends on a convention specifying a particular reference location within Chicago (probably the old post office, though I’m not sure of that).
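To make the convention-dependence concrete, here is a minimal sketch in Python. Nothing in it is Chicago-specific beyond the number Google returned; the Paris meridian offset (about 2.3372° east of Greenwich) is the commonly cited historical value, and the rest is just arithmetic. The city does not move, but the number we record as its longitude changes when we change the convention about where zero is:

    # Illustration: the same place, two longitude "facts" under two conventions.
    # The Paris meridian offset (about 2.3372 degrees east of Greenwich) is the
    # commonly cited historical value; everything else here is just arithmetic.

    CHICAGO_LONG_W_OF_GREENWICH = 87.6278   # from the Google result above
    PARIS_OFFSET_E_OF_GREENWICH = 2.3372    # Paris meridian, east of Greenwich

    def longitude_west_of(origin_offset_east: float, long_w_of_greenwich: float) -> float:
        """Re-express a west longitude relative to a prime meridian sitting
        origin_offset_east degrees east of Greenwich."""
        return long_w_of_greenwich + origin_offset_east

    print(f"{CHICAGO_LONG_W_OF_GREENWICH} deg W of Greenwich")
    print(f"{longitude_west_of(PARIS_OFFSET_E_OF_GREENWICH, CHICAGO_LONG_W_OF_GREENWICH):.4f} deg W of Paris")

Run it and the same place comes out as 87.6278° west of Greenwich but about 89.9650° west of Paris. Neither number is more “true”; each is a fact only relative to its convention.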
I won’t go through a lot of examples; I think this one is sufficient to illustrate the point. Everything that we call a fact depends, in some way, on human conventions. So facts are artifacts, in the sense that we must first develop the conventions that make facts possible.
A number of years ago, I made a Usenet post in comp.ai.philosophy. I couldn’t find it in a Google search. The thought experiment was that an airplane pilot dropped me off in the middle of the Nullarbor Plain (in southern Australia) with a lunch pack, a pencil, and note paper. I was to record as much information as I could about that location before being picked up in the evening.
The thing about the Nullarbor Plain is that it is desert. But it is not sandy desert like the Sahara. There are many plants (desert scrub) that grow in the occasional rain, then dry out and look dead most of the time. So perhaps I could record information about the plants, such as their density. But how could I do that? Everything looked the same in every direction.
So I used the soda can from the lunch pack as a marker, and I used a couple of other items as further markers. That enabled me to fix a particular region where I could start counting, so that I could write down some information. The markers broke up the sameness and gave me a sense of direction. In effect, those markers established the conventions I would use in counting the plants.
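What the markers did can be put in an ecologist’s terms: they fixed a quadrat, a bounded survey region, and only then could a density figure exist. A minimal sketch, where the quadrat size and the plant count are invented purely for illustration:

    # Illustration: a density "fact" exists only once markers fix a region.
    # The quadrat size and the plant count below are invented for the sketch.

    QUADRAT_SIDE_M = 10.0   # square survey region, anchored at the soda can
    plants_counted = 37     # hypothetical count within that region

    area_m2 = QUADRAT_SIDE_M * QUADRAT_SIDE_M
    density = plants_counted / area_m2
    print(f"Scrub density: {density:.2f} plants per square metre")

The point is that a figure like 0.37 plants per square metre is not lying on the ground waiting to be picked up; it exists only once the markers fix the region and the counting procedure.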
As I recall, others in the Usenet discussion did not like that post. They saw my use of the soda can as something akin to making an arbitrary choice (which it was). There appears to be an unwritten rule of philosophy that anything depending on arbitrary choices must be wrong. (Oops, there goes that meridian of longitude, based on the arbitrary choice of Greenwich.)
The traditional account, from epistemology, is that knowledge is justified true belief. Roughly speaking, your head is full of propositions that you are said to believe, and you are good at applying logic to those propositions.
You can be a highly knowledgeable solipsist that way. And, as a mathematician, I suppose the term “solipsist” fits some of what I know.
My sense is that acquiring knowledge is all about anchors. We must find ways of anchoring our propositions to reality. That’s roughly what the system of geographic coordinates does. That’s what the soda can did in my thought experiment. And anchoring ourselves to reality is what I see our perceptual systems as doing.
AI and autonomy
AI researchers often talk of autonomous agents. And that’s the core of my skepticism about AI. What makes us autonomous is that each of us can autonomously anchor our thoughts to reality. The typical AI system uses propositions that are anchored to reality only through the efforts of the programmer, so the AI system has no real autonomy. And when boiled down, that is really what Searle’s “Chinese Room” criticism of AI is about; Searle describes it as an intentionality problem.
I’ll stop at this point, to see if any discussion develops.