BruceS suggested that I start a thread on my ideas about human cognition. I’m not sure how this will work out, but let’s try. And, I’ll note that I have an earlier thread that is vaguely related.
The title of this thread is one of my non-traditional ideas about cognition. And if I am correct, as I believe I am, then our relation to the world is very different from what is usually assumed.
The traditional view is that we pick up facts, and most of cognition has to do with reasoning about these facts. If I am correct, then there are no facts to pick up. So the core of cognition has to be engaged in solving the problem of having useful facts about the world.
I’ll start with a simple example. I typed “Chicago Coordinates” into Google, and the top of the page returned showed:
41.8819° N, 87.6278° W
That’s an example of what we would take to be a fact. Yet, without the activity of humans, it could not exist as a fact. In order for that to be a fact, we had to first invent a geographic coordinate system (roughly, the latitude/longitude system). And that coordinate system in turn depends on some human conventions. For example, the meridian through Greenwich was established as the origin for the longitudes.
That fact also depends on the naming convention, which designates “Chicago” as the name of a particular town. And, it depends on a convention specifying a particular location within Chicago (probably the old post office, though I’m not sure of that).
I won’t go through a lot of examples. I think the one is sufficient to illustrate the point. Everything that we call a fact depends, in some way, on human conventions. So facts are artifacts, in the sense that we must first develop the conventions necessary for us to have the possibility of their being facts.
A number of years ago, I made a usenet post in comp.ai.philosophy. I couldn’t find it in a google search. The idea of the thought experiment was that someone (an airplane pilot) dropped me off in the middle of the Nullarbor plain (in Southern Australia), with a lunch pack, pencil and note paper. I was to record as much information as I could about that location, before I was picked up in the evening.
The thing about the Nullarbor plain, is that it is desert. But it is not sandy desert like the Sahara. There are many plants — desert scrub — that grow in the occasional rain, then dry out and look dead most of the time. So perhaps I could record information about the plants, such as their density. But how could I do that? Everything looked the same in every direction.
So I used the soda can from the lunch pack as a marker. And I used a couple of other items as markers. That enabled me to fix a particular region where I could start counting, in order to be able to write down some information. The markers broke up the sameness, and allowed me to have a sense of direction. In effect, those markers established conventions that I would use in counting the number of plants.
As I recall, others in the usenet discussion did not like that post. They saw my use of the soda can as something akin to making an arbitrary choice (which it was). There appears to be some unwritten rule of philosophy, that anything depending on arbitrary choices must be wrong. (Oops, there goes that meridian of longitude, based on the arbitrary choice of Greenwich).
The traditional account, from epistemology, is that knowledge is justified true belief. Roughly speaking, your head is full of propositions that you are said to believe, and you are good at applying logic to those propositions.
You can be a highly knowledgeable solipsist that way. And, as a mathematician, I suppose that term “solipsist” fits some of what I know.
My sense is that acquiring knowledge is all about anchors. We must find ways of anchoring our propositions to reality. That’s roughly what the system of geographic coordinates does. That’s what the soda can did in my thought experiment. And that, anchoring ourselves to reality, is what I see our perceptual systems to be doing.
AI and autonomy
AI researchers often talk of autonomous agents. And that’s the core of my skepticism about AI. What makes us autonomous, is that each of us can autonomously anchor our thoughts to reality. The typical AI system uses propositions that are anchored to reality only by the auspices of the programmer. So the AI system has no real autonomy. And, when boiled down, that is really what Searle’s “Chinese Room” criticism of AI is about — Searle describes it as an intentionality problem.
I’ll stop at this point, to see if any discussion develops.
I’d read that and admit I was puzzled by it too.
Now I think I have a better understanding of what Neil is saying there
– the need for reward systems and motivation I now understand to be related to the need to be an autonomous agent in the world and not be a passive device
– the concerns about raw data I think are related to the need to have a set of anchors to be such an agent in the world; I do agree that the perceptual system (including the artificial brain processing) needs to have similar capabilities to what evolution endowed us with to extend the anchors we start with
– unlike Neil, I think those needs might be realizable (in the functionalist sense) with logic gates, though the qualia of such an AI might differ from ours (if it even makes sense to say that).
– Whether the approach is computational depends on what one means by computation.
This philosopher seems to have made a career about trying to define computation and how it relates to what the brain does; as best I understand him, brain processes are computation according to the model he develops, but they are neither digital nor analog.
Gualtiero Piccinini’s Works
I would say the brain does chemistry. Some of it appears to be consistent with a computational model, but I believe the “higher” brain functions are orthogonal to anything we do with computers. We understand neural networks to about the same degree that chemists understand protein folding.
We can observe brain activity, but we can’t emulate it. Not thinking, nor feeling, nor consciousness.
Where I still have trouble with scientific realism is understanding what those real things are in relation to the theories of science.
I start with the understanding that the real things in scientific realism are the entities picked out by terms of the theory. I further understand that the references of those terms are captured from the way the terms are used in the theory (eg, roughly, a gene is whatever satisfies the statements of the biological theory of inheritance which include “gene”).
So that seems to imply that the things of the world cannot be separated from the level of scientific explanation that picks them out. Genes are real if we are talking biology, but not if we are talking physics. Only quantum fields (perhaps) are real at that lower level.
Is there a better way of understanding the “real things” of scientific realism?
Who does the dividing and how it is done are irrelevant to the question at hand. The fact that the world is divisible is enough.
Since the world is divisible, it consists of things. That means that “how things are” is the same as “the way the world is”.
Yes, that’s about what I am saying.
Some perspective. There’s an old saying among mathematicians, that a topologist is someone who cannot tell the difference between a donut and a coffee cup.
Keep in mind that this is a comment about two-dimensional surfaces, rather than solid objects.
We certainly distinguish between them. We distinguish on the basis of properties such as curvature and shape. But those are metric properties, and the metric used is a matter of convention. So, what distinguishes a donut from a coffee cup (again, the two-dimensional surfaces) is entirely conventional. No doubt we have good pragmatic reasons for our choice of conventions. But that makes pragmatic judgment particularly important.
The trouble with the picture theory, is that there is no possibility of pictures. Or, to say it differently, if you want to go by a correspondence theory of truth, then you have the problem that there is no correspondence available.
One of the first things that a developing perceptual system must do, is construct some kind of correspondence between the world and perceptual representations. So, in some sense, constructing intentionality is job 1. It’s the same for science. What science is most importantly doing, is constructing and expanding intentionality — our ability to refer. Once science has done that, what’s left is mostly engineering.
Part of this construction of intentionality, by both science and perceptual learning, amounts to what we might call “thingifying the world.” So that’s where things come into play.
I’m pretty sure I did say at some point, probably in the earlier thread, that I have a non-traditional view of what cognition is doing.
Me too. I think the biggest problem for Neil in that thread was that he kept conflating the properties of a system with the properties of its components.
Individual logic gates aren’t adaptable, don’t seek out goals, don’t achieve homeostasis, etc. Neil took that to mean that systems based on logic gates would necessarily lack those properties as well. Which is, of course, false. It’s a confusion of levels.
That’s a reasonable summary.
To be crude about it, computation is something that you do with numbers. So, what the perceptual system has to do, is somehow map the world into numbers. That mapping into numbers is prior to computation.
The result is, that I see the brain as mainly engaged in measurement and categorization. (An aside – I think of measurement as something like categorization into a continuum of categories). And I see Hebbian learning as the cross calibration of all of these measuring and categorization systems. We need that cross calibration so that, for example, what the left eye sees is about the same as what the right eye sees.
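As a toy illustration of that cross-calibration idea (the gains, learning rate, and error-driven update below are my own invented stand-ins, not anything from the post), one sensory channel can be tuned until it agrees with another channel reading the same world:

```python
import random

# Two "eyes" read the same world signal through different, unknown gains.
# A simple error-driven (delta-rule) tweak nudges the right eye's
# calibration weight until both channels report roughly the same value.
random.seed(0)
LEFT_GAIN, RIGHT_GAIN = 1.0, 0.6   # hypothetical raw sensor gains
w = 1.0                            # right eye's calibration weight (learned)
lr = 0.05                          # learning rate

for _ in range(2000):
    world = random.uniform(0.0, 1.0)     # some external quantity
    left = LEFT_GAIN * world             # reference channel
    right = w * (RIGHT_GAIN * world)     # calibrated channel
    # move w to reduce disagreement between the two channels
    w += lr * (left - right) * (RIGHT_GAIN * world)

print(round(w * RIGHT_GAIN, 2))  # effective gain ~1.0 after calibration
```

The point of the sketch is only that agreement between channels can be achieved without either channel knowing the "true" units of the world.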
There’s some related mathematics. We can look at the construction of measuring and categorization systems, as the construction of continuous function on the world. According to the mathematics, the topological structure of the world is implicit in those functions. You can find this in Gillman and Jerison, “Rings of continuous functions”. I’m not suggesting that you read it — the mathematics is likely a bit heavy. But it is at least suggestive that if the perceptual system engages in the kind of measuring/categorization activity that I am suggesting, then something like qualia will be an automatic consequence.
No, it is not enough.
If the world is canonically divisible, that would be enough (in the mathematical sense of “canonical”). That is, if the world dictates how best to divide it, that would be enough.
But it doesn’t. As best I can tell, how to divide up the world is vastly underdetermined.
Yes, you said that in the OP.
What I still don’t get is why you think that “facts are human artifacts” is a non-traditional idea about cognition.
Facts, by your definition, are just true statements or propositions, so your assertion translates to “true statements/propositions are human artifacts.” That doesn’t seem very controversial or non-traditional.
Why do you think that it’s non-traditional?
Whether there is a “canonical” way of dividing the world into things is immaterial.
Regardless of how we carve up the world, “how things are” and “the way the world is” refer to the same underlying reality — a reality that is independent of the categories we impose upon it.
As an experiment, ask several people to tell you “how things are in the world”. After they’ve answered, ask them to tell you “the way the world is”. Observe their puzzled faces and report back to us on how many of them say, “I just told you!”
Looking back over the thread
Do you think it might be because we can’t look at ourselves as purely biological entities (no God’s eye) that we don’t see the possibility of developing analogies to brain function in another medium? I realise that one (maybe the most) important way that organisms learn is by growing new neurons to reinforce pathways, Hebbian learning – Lizzie already mentioned this. I see there is a fair bit on unsupervised neural networks as a way to go in robotics.
You might want to look at structural realism, which is supposed to address some of the criticisms of scientific realism, or alternatively, it could be seen as what scientific realism is really about. I am rather partial to it.
In ordinary conversation, “the way the world is” might well be taken to be a reference to a scientific account. But when asking whether scientific accounts report “the way the world is”, then something else is intended.
I can’t answer for petrushka, but I can give my opinion.
My concern has been with the underlying principles. Biology seems to be the best medium to implement these. The thing about biology, is that there is continuous growing/replacing of parts. So if there’s a need to adjust the sensory hardware, that can be done just as easily as changing an algorithm.
Maybe with 3D printing, we’ll get to the same point with computer based technology, but we aren’t there yet.
Thanks to phoodoo (indirectly via Gary Gaulin and Wesley Elsberry) for pointing me to Hopfield neural networks. I notice the Wikipedia article refers to Hebbian learning. The article is a bit too mathy for me though. Does it make any sense to you as a way forward, Neil?
The problem I have with connectionist accounts, is that they are too internalist. I prefer something externalist.
For example, the Wikipedia article that you linked to, describes a Hopfield network as providing memory in the form of a signaling pattern, with Hebbian learning adjusting the network to get the right pattern.
I prefer to see the network as getting information from the external world. So the Hebbian learning is a kind of tweaking, for maximal performance. There is a resulting implicit memory, in the sense that the network can be said to remember the external world events for which it is optimized. But the emphasis is different. I see the brain as primarily concerned with solving problems of interaction with the external world, rather than with constructing an internal memory.
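A minimal Hopfield network makes the two readings concrete: the Hebbian weights below store one pattern, which can be described either as an internal memory or as tuning to an external event the network recovers from a noisy cue. This sketch assumes NumPy and an invented 8-unit pattern:

```python
import numpy as np

# Store one pattern with the Hebbian rule, then recall it from a
# corrupted cue via synchronous threshold updates.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian weights: outer product of the pattern, no self-connections
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt two bits and let the network settle
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1
state = cue
for _ in range(5):
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))  # True: the pattern is recovered
```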
Thanks for the suggestion. I was vaguely aware of the idea, but I had not connected the dots to see how it could address my concerns.
To act as well as perceive, I think brains would also have to do at least as much modelling, predicting, and control of actions or internal body states.
But what you say and what I’ve added all seem to be computation.
Is it the fact that it involves continuous variables the reason you don’t like the digital computer model?
Many years ago, the undergrad work I did may have allowed me to start into such a book, but not now. I see that it involves rings of functions mapping topological spaces to real numbers. You’ve explained how such functions fit into what you are saying, but I am not clear on what a ring structure adds.
More to the point, I don’t see at all how specifying any mathematical structure can explain the raw feels of qualia.
Any hints welcomed, with the understanding that they will lower my final score on the test.
I don’t understand why connectionism implies it must be only internal. Aren’t the two concepts independent?
I’m thinking of an artificial autonomous agent interacting with the world and with an artificial brain based on connectionist architectures.
I agree with what you are saying about logic gates and levels, but, going mainly by this thread, I’m not sure about Neil’s position.
I understand, as you imply, that logic gates can do anything that is Turing-computable, which as far as I know, still is believed to cover any computation that could happen in the real world.
Of course, it is a different question to specify an architecture for a practical implementation or one which helps us understand what brains are doing (and here my current preference is for connectionist, not rule-based, at least for the computations involved in basic interaction with the world and control of the body).
Bruce, to Neil:
Indeed they are. Even by itself, the task of maintaining a 3D model of one’s surroundings, based on incomplete and noisy 2D retinal images, requires a staggering amount of computation. It’s amazing to me that Neil, a computer scientist, doesn’t see this.
This seems unlikely. The brain has to be measuring internal body states, and changing actions so as to achieve the appropriate states. I think that’s called “perceptual control theory.” It does not require modeling.
Of course, thinking can involve some modeling in our thoughts. But I don’t see the need for it elsewhere.
If you were to design a robot to do the same actions, you would probably use modeling. But such modeling requires a lot of knowledge and introduces sources of error. From the perspective of an evolving, learning organism, it is simpler, better and more reliable to do without the modeling.
I gleaned it from the earlier thread. Here’s the first exchange from that thread:
As you can see, Neil is confusing stasis in the controlled variable with stasis in the controller’s component gates. The thread is full of similar confusions on Neil’s part.
Yes, I think it still is.
Yes, but Neil’s complaint is not that logic gates are inefficient for this application, but that they are insufficient.
Though I think you’ll agree that connectionist architectures can be implemented using logic gates as the substrate.
These illusions (be sure to check out all of them) make perfect sense if the visual system is engaged in modeling.
How do you explain them if modeling is absent?
(Especially the last one.)
Some of them don’t seem to play properly on Linux.
In any case, I’d say that they fit just as well with J.J. Gibson’s “direct perception.”
No, they clash with the idea of direct perception.
The perception of motion cannot be direct in these cases because there is no motion at all — just an alternation of two different images. The perceived motion is fictional, constructed by the perceptual system to explain the relationship of the alternating images in a plausible way.
In other words, the motion is modeled.
If Gibson were correct, then we would directly experience these as they really are — alternating images — and not as cases of apparent motion.
I’m taking modeling to include using an algorithm that makes predictions from current state to a (possibly hypothetical) future state.
In my original comment, I did have in mind the type of modeling that occurs in conscious thinking, eg planning. It can also be used to explain what social primates do when they try to understand the actions of others. One explanation involves theory of mind (X desires D and X believes act A will achieve D, therefore X will do A.) I don’t think one would need to implement this type of modeling explicitly in AI, it may emerge from a connectionist implementation in the sense of weak (not week!) emergence. Of course, this is speculation.
I had come across PCT before and a quick review at Wiki confirms that it involves comparing predicted and actual to control action. I would say that the predicting requires a model. Further, it may be that there are multiple levels of control (analogous to black-box subroutines in software, as I understand PCT), so there would be a model at each level.
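A single-level control loop of the kind PCT describes can be sketched in a few lines (the reference value, gain, and update rule here are invented for illustration):

```python
# Compare a reference ("how I want the perception to be") against the
# actual perception, and act so as to shrink the difference.
reference = 37.0   # desired internal state (say, a temperature)
perceived = 30.0   # current measurement
gain = 0.3         # how strongly error drives action

for _ in range(50):
    error = reference - perceived
    action = gain * error      # output proportional to error
    perceived += action        # acting changes the perceived variable

print(round(perceived, 2))  # ~37.0: the error has been driven near zero
```

Whether the comparison step counts as a "model" or merely as a calibrated reference seems to be exactly the point under dispute.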
As you can see, Neil is confusing stasis in the controlled variable with stasis in the controller’s component gates.
KN and Neil often post things that puzzle me.
With KN, it’s usually because I lack the philosophical background to even start to understand what he means or to understand his references.
With Neil, though, I do have IT and mathematics background which should help me understand his ideas, although the mathematics is very rusty.
So I think I should be able to at least get a start on why ring structures of certain functions help explain qualia, or why Neil does not think that brain processes are computational (if that is what he thinks). But in fact I don’t understand his positions here at all.
However, Neil seems too smart and subtle a thinker for me to attribute this to simple confusion, at least on his part.
Likewise. However, I’m somewhat doubtful that Neil will actually make the case.
He really does think that:
I thought Gibson’s thesis was that we directly perceive affordances.
You must have misunderstood the notion of direct perception. There’s no claim that there could not be misperception.
The view is that the perceptual system is tuned to recognize “invariants” of what it is looking for. But something could match those tests for invariants without being the real thing. That seems to be what is happening here.
Let’s consider a particular example. A basketball player wants to put the ball in the hoop.
The model view would be that the brain somehow knows Newton’s laws, computes a model, and determines the force and direction with which the ball should be propelled.
The perceptual control theorist, if he/she supports modeling, would say that the perceptual system is measuring the force and direction of the thrust applied to the ball, and using those factors in the model.
My view is similar. The perceptual system is observing the thrust applied to the ball. But, instead of measuring force and direction, the measuring system is directly calibrated to give where the ball will be when it is at about the height of the hoop. So knowledge of Newton’s law is not required, and modeling is not needed except perhaps in the form of practice moves in the player’s thoughts. However, the player needs lots of practice to get the calibration right.
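One way to make the calibration-versus-model contrast concrete: in the sketch below, the physics lives only in the simulated world, while the player's predictor is just an interpolation table built up from practice throws. All launch numbers are invented:

```python
import math

# The world (not the player) determines where a throw lands.
G, ANGLE, HOOP_H = 9.8, math.radians(55), 3.05

def world(speed):
    """Projectile physics: distance at which the ball descends to hoop
    height, launched from 2 m. The player never consults this."""
    vx, vy = speed * math.cos(ANGLE), speed * math.sin(ANGLE)
    t = (vy + math.sqrt(vy**2 - 2 * G * (HOOP_H - 2.0))) / G
    return vx * t

# Practice: record (perceived speed -> observed distance) pairs
practice = [(s, world(s)) for s in [6.0, 7.0, 8.0, 9.0, 10.0]]

def predict(speed):
    """Player's predictor: interpolate over practiced throws. No
    knowledge of Newton's laws is used here."""
    for (s0, d0), (s1, d1) in zip(practice, practice[1:]):
        if s0 <= speed <= s1:
            return d0 + (d1 - d0) * (speed - s0) / (s1 - s0)
    raise ValueError("outside practiced range")

print(abs(predict(7.5) - world(7.5)) < 0.1)  # calibration tracks the world
```

More practice throws would tighten the calibration, which fits the observation that the player needs lots of practice to get it right.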
Gibson thought that “optic flow”, the movement of features across the field of view, was a key part of perceiving affordances.
To create optic flow, the features have to move across the retina. In the illusions I linked to, the features do not move at all. There is no optic flow.
Instead, the brain models different motion scenarios and picks the one that can be most plausibly matched to the alternating images.
P.S. As a pilot, I can attest to the usefulness of optic flow. It helps in judging one’s likely touchdown spot on final approach. However, as the linked illusions demonstrate, it is far from the only cue that we use to infer motion.
Could you elaborate, using the linked illusions as an example?
Firstly, the brain does not need to know any of the mathematics for this to work. The mathematics tell us what is important (something like the algebraic structure of those measuring functions). A program of careful cross-calibration should be enough to provide what is needed.
There’s an alternative way of developing the mathematics, which is perhaps a bit more intuitive. It can be found in Dunford & Schwartz and in many textbooks on functional analysis. You take the space of bounded continuous functions, and treat it as a Banach space. Then you look at its dual (another Banach space). The original space on which the functions act (what we could call the world) is a distinguished subset of that dual space.
My preferred way of thinking about it, is that our perceptual system is getting lots of information. I tend to look at what people call “qualia” as just the experience of having that information. It’s important to note that it is semantic information. A computer only has syntactic information. For us, the semantics comes from how we got the information (the procedures we followed).
Patterns of apparent position change would be sufficient invariants to recognize motion.
As I’ve said earlier, I’m not sure if I am fully seeing the illusions. The “chromium” browser tells me that it cannot load a quicktime plugin. I don’t recall what the “rekonq” browser showed, but nothing interesting. I was able to see something in firefox, but some of them looked like static images that just went away very quickly. I’m pretty sure I saw what was intended for at least some of them.
I agree that we don’t apply the full generality of Newton’s Laws to do the modeling used in everyday action.
It’s something much simpler and somewhat task-specific that allows a prediction to be made and then compared to an actual so as to allow ongoing adjustment to action.
When you say “directly calibrated to give where the ball will be” it sounds like what I would mean by modelling. Practice is needed in order to debug that model.
Another example is the way baseball players catch fly balls. According to this link, they are predicting where the ball would be in their field of vision if its motion relative to their motion were to appear as a non-accelerating change. They adjust their running speed and direction to adjust for deviations from that prediction.
How Fly Balls are Caught
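The invariant behind that strategy can be checked directly: ignoring drag, tan(gaze elevation) rises at a constant rate exactly when the fielder stands at the landing spot, so running to cancel any optical acceleration brings the fielder to the ball. A sketch with invented launch numbers:

```python
import math

g, vx, vy = 9.8, 12.0, 20.0     # gravity and launch velocity components
T = 2 * vy / g                  # total flight time
D_land = vx * T                 # landing distance

def tan_elevation(fielder_x, t):
    """Tangent of the fielder's gaze elevation toward the ball at time t."""
    bx, by = vx * t, vy * t - 0.5 * g * t**2
    return by / (fielder_x - bx)

# Sample tan(elevation) at even time steps for a fielder at the landing spot
ts = [0.5, 1.0, 1.5, 2.0]
tans = [tan_elevation(D_land, t) for t in ts]
steps = [b - a for a, b in zip(tans, tans[1:])]
print(all(abs(s - steps[0]) < 1e-9 for s in steps))  # constant rate: True
```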
Thanks Neil. I think with a bit of work I could begin to understand that. Coincidentally, I’m about to embark on a Coursera offering on functional analysis which seems like it will help.
“the experience of having the information” sounds close to some philosophy I’ve read on the topic, especially if I interpret “having” as “representing”.
Whenever you say “computer”, I’m now understanding you to mean a passive, disconnected device, rather than some kind of computing done as part of implementing an artificial, autonomous agent which also would have to involve perception and action. If that is what you mean, I agree with your comment.
Jerry Coyne has an interesting thread up about hymenoptera and haplodiploidy.
It’s not so much debugging, as it is setting the calibration point.
Some perspective. These days aircraft designers use computational modeling. But, until a few decades ago, they would make a mockup and test it in a wind tunnel. Now maybe that’s still modeling, but it is not computational modeling.
In my view, the kind of modeling that occurs in thought is more like the use of the wind tunnel, than it is like computational modeling. For modeling geometric ideas, we carry around with us some geometric appliances (known as arms).