# Categorization and measurement

In a recent comment, BruceS suggested that I do an OP on categorization and measurement, so here it is. I’ll try to keep this short, but later extend it with posts in the thread. If my post in the thread begins with a bold title, then it is intended to extend the OP. And I may then add a link, if I can figure out this block editor.

Harnad’s paper, *To Cognize is to Categorize: Cognition is Categorization*, is not quite how I am looking at it, but it gives a good introduction to the idea.

Red, blue or green? That’s a question about categorizing by color. We also categorize by size. Measurement is just a mathematical way of categorizing by length or by voltage or by pressure — by whatever we are measuring.

We categorize as a way of getting information. Within science, information is often acquired by measuring, which is a type of categorizing. I can get information with a digital camera. The digital camera, in effect, categorizes the world into pixel sized chunks and provides data for each of those.
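The point that measurement is a type of categorizing can be made concrete with a tiny sketch: a measurement just assigns a continuous value to one of a finite set of categories. The bin boundaries and labels below are invented purely for illustration:

```python
def categorize(value, boundaries):
    """Assign a continuous value to a discrete category (bin index).

    This is all a measurement reports: which of a finite set of
    categories the measured quantity falls into.
    """
    for i, b in enumerate(boundaries):
        if value < b:
            return i
    return len(boundaries)

# Categorizing lengths (in cm) into "short", "medium", "long".
labels = ["short", "medium", "long"]
print(labels[categorize(3.7, [5.0, 20.0])])   # short
print(labels[categorize(12.2, [5.0, 20.0])])  # medium
```

A finer measurement is just the same operation with more boundaries, i.e. smaller categories.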

## What is categorization?

Categorization just means dividing up into parts. We divide up in accordance with features. Harnad discusses this in an appendix near the end of the linked paper.

There’s an alternative view of categorization, suggested by Eleanor Rosch, where categorization amounts to grouping things in accordance with their similarity to some sort of prototype or family resemblance. Harnad looks at that in the appendix, and he does not agree with Rosch. I concur with Harnad on that.

## Perception

Perception is a process of getting information about the environment. It works by categorizing, for that is the way of generating information. We perceive cats, dogs, trees because those are categories. We perceive the dog’s eyes, ears, mouth because perception divides the larger categories into smaller categories.

We also perceive individuals. But, as best I can tell, an individual is only a very small category. We cannot perceive individuals in the logical sense of “individual”. In the logical sense, X and Y are identical if “X” and “Y” are different names for the same entity. But, in ordinary life, we take X and Y to be identical if we are unable to distinguish them. That is, we take them to be identical if they are not separate categories. This is why we can be confused by identical twins. Family members usually are able to distinguish the twins, but most people find it difficult.

I drive my car to the store. By the time that I return, some of the rubber has worn off the tires. In a strict logical sense, it is now a different logical object. But I still see it as the same car, because I still place it in the same category.

We perceive a world of objects, because objects are categories and what we perceive are categories. A typical robotic AI system categorizes the world into pixels. And then it attempts to compute which sets of pixels correspond to objects. So the way that the AI system “sees” the world is very different from the way that we see it. This is probably why a self-driving Uber car killed a pedestrian. It could not track objects as well as we can.

This entry was posted by Neil Rickert, a mathematician and computer scientist who dabbles in cognitive science.

1. ## Computation and categorization

This comment is intended to extend the OP.

We usually think of computers as logic machines. We think of the underlying operations as being logic. But they aren’t. The underlying operation that makes it all possible is categorization. Without categorization, there would be no computation.

A logic gate is a core component. A logic gate has inputs and outputs. The input is a variable electrical signal, best described as either an electric voltage or an electrical current. Which of those is the better description depends on the particular technology.

The logic gate categorizes its inputs as either logic 0 or logic 1 (we can consider those to be logical false and logical true). Then it produces an output signal — a variable voltage or current, designed to be easily categorized as logic 0 or logic 1.

In terms of logic operations, the gate is just a simple combinatoric device. Simple logic tables describe that. There isn’t very much to logic. Most of what the logic gate has to do is categorization.

A computer is really a categorizing system, with a virtual logic device built on top of that categorization.
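The categorizing role of a gate can be sketched as a threshold on an analog input. The voltage levels and threshold below are illustrative, not tied to any particular technology:

```python
def to_logic(voltage, threshold=1.5):
    """Categorize an analog voltage as logic 0 or logic 1."""
    return 1 if voltage >= threshold else 0

def nand(v_a, v_b, high=3.3, low=0.0):
    """A NAND gate: categorize both inputs, apply the (trivial)
    logic table, then emit an easily-categorized output voltage."""
    a, b = to_logic(v_a), to_logic(v_b)
    return low if (a and b) else high

# Noisy analog inputs still produce clean logic outputs.
print(nand(3.1, 2.9))  # 0.0  (both inputs read as logic 1)
print(nand(0.4, 3.2))  # 3.3  (first input reads as logic 0)
```

Note how little of the code is logic: the logic table is one expression, while the categorizing step is what turns messy analog signals into something the table can apply to.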

2. I agree computers are not logic machines. They are memory machines or, as you put it, category machines. Yes, this makes more sense in this detail in your thread.
So perception is also, it seems, just making categories. We don’t see an individual. We have just memorized a great number of details for categories.

3. Thanks Neil.

I’ve read the Harnard article, but I want to focus on what you say in the OP. If the answer is in Harnard, let me know where. I’ll just ask a series of questions on the OP.

How do concepts fit into your ideas? How do they relate to categories and categorization?

What is a feature for you? Can you provide more detail on how features relate to measuring, and how both relate to categories and concepts? For example, science says light frequencies are continuous, but people refer to discrete categories like red. How do your ideas describe the process by which that happens?

For further complication, color is one of those categories where language seems to affect human categorization. How does that happen in your ideas?

Is measurement just the assignment of value to a feature by the organism?

I’m not clear on what you mean by “acquiring information”. In other threads, I read you as saying information is created by the organisms and does not exist externally.

What is implied by “acquiring”? Is it just copying or is there processing of some sort?

Does a digital camera create information or does a human have to be involved for information creation?

Is information an outcome of measuring something, possibly features? If so, can you provide a more specific sequence of steps in how this happens, starting with the niche and then involving sensorimotor systems and any other neural systems that are relevant to your ideas?

Why is Harnard’s view of categories, as opposed to Rosch’s, important to your ideas? (I think Harnard is referring to a past controversy in psychology, but I want to stick to your ideas for now.)

Why does the discussion of logic gates matter to your views on perception? Is that how you think human perception works? I don’t want to get into a side discussion of AI; I am just trying to understand whether your discussion is meant to say you see them as a model useful for understanding human perception.

More generally, Harnard spends a lot of time on AI learning mechanisms. Do AI learning techniques have any bearing on your ideas?

Does action/movement have any role in your ideas?

ETA: In terms of your ideas, what is the “god’s eye view” and how does your model oppose it?

4. BruceS: I’ve read the Harnard article, but I want to focus on what you say in the OP. If the answer is in Harnard, let me know where.

The Harnard article is mostly philosophy (in my opinion), so short on detail.

How do concepts fit into your ideas? How do they relate to categories and categorization?

“Concept” is one of those vague terms. It is hard to know what people mean when they use that word. There is a lot of variation between people. I’m inclined to say that a concept starts as a category, and develops as it is expanded with sub-categories.

What is a feature for you?

Anything that can be used to discriminate — to divide the world into parts.

Can you provide more detail on how features relate to measuring, and how both relate to categories and concepts?

Start with a ruler to measure length. That ruler has lines on it — calibration marks. Those are features. In this case, they are artificial features.

We align one of those calibration marks with the edge of what we are measuring. That edge is another feature. In this case we might consider it a natural feature. But it is only a feature if we can detect it. So before we can do anything with that ruler, we have to come up with ways of detecting natural features. And the only natural features that matter are the ones that we are able to detect.

To detect an edge, we can move our eyes around. As we move them (change how they are pointing), there will be a sharp transition in the light received, as the direction crosses an edge. This does not require that we know what direction the eye is pointing (external status). It only requires that we know how we are changing that direction (internal status). That allows us to pick up that edge as a feature. And that’s a starting point for perception. And, by the way, we do know that eyes make these movements (saccades), and that they seem important to vision.
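The point that only changes matter, not absolute pointing direction, can be sketched as a scan that flags sharp transitions in received intensity. The signal values and jump threshold below are invented for illustration:

```python
def find_edges(intensities, jump=0.5):
    """Scan a 1-D intensity signal and report positions where the
    received light changes sharply between successive samples.

    Only the change between samples is used; the absolute
    direction each sample came from never enters the computation.
    """
    return [i for i in range(1, len(intensities))
            if abs(intensities[i] - intensities[i - 1]) >= jump]

# A bright page with a dark stripe: edges at the stripe boundaries.
signal = [0.9, 0.9, 0.9, 0.1, 0.1, 0.9, 0.9]
print(find_edges(signal))  # [3, 5]
```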

As for relating measurement to features — measurement is mostly using artificial features (calibration marks) as a way of interpolating between natural features. It allows us to divide into smaller subcategories.

For example, science says light frequencies are continuous, but people refer to discrete categories like red. How do your ideas describe the process by which that happens?

In this case, we do have some innate color detection abilities that divide up the color range. This is a bit like having innate calibration marks on the frequency spectrum.

I should mention, at this point, that I have abnormal color vision. I can see colors, but probably not in the same way as others. I fail color vision tests.

For further complication, color is one of those categories where language seems to affect human categorization. How does that happen in your ideas?

That’s the Sapir-Whorf thesis. I’m inclined to disagree with the strong form of that thesis, but agree with the weak form. I would say that it is culture, more than language, that affects categorization. And it affects all categorization, not just color categorization. There’s that thing down the end of the street that I categorize as a STOP sign. And that obviously comes from culture (from social conventions). We are social animals, which requires that we attempt to adjust our categories to be reasonably consistent with the categorization of others in our culture.

Is measurement just the assignment of value to a feature by the organism?

No. As I think I have already mentioned above, measurement depends on artificial features (calibration marks).

I’m not clear on what you mean by “acquiring information”. In other threads, I read you as saying information is created by the organisms and does not exist externally.

Yes, we acquire it by creating it. It is information because it informs us, typically about the natural world. I’m going by the Shannon view of information as what is transmitted in a communication channel.

I should add that there seem to be many different (often incompatible) meanings of “information” out there. I’m using what works best for my understanding of cognition.

What is implied by “acquiring”?

Categorizing, and communicating the category.

Is it just copying or is there processing of some sort?

No, it is not copying. There is nothing to copy. The world in itself does not come to us already categorized.

Take time, as an extreme example. We use clocks to provide time information. The clocks are manufacturing that information. The clocks run a local oscillator, and they count the oscillations. The time they report amounts to a report of that count. Time, as we broadly understand it, is relatively featureless. We do have the variation from day to night and the seasonal variation. The oscillator in the clock provides artificial features (calibration marks) to interpolate between the natural features that we use.
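The clock description can be sketched directly: an oscillator ticks, a counter accumulates ticks, and the reported time is just a reading of the count. The 32768 Hz rate is the common quartz-watch frequency, used here only as an illustration:

```python
class CounterClock:
    """A clock that manufactures time information by counting
    oscillations of a local oscillator."""

    def __init__(self, hz=32768):
        self.hz = hz      # oscillator frequency (ticks per second)
        self.count = 0    # accumulated ticks

    def tick(self, n=1):
        self.count += n

    def seconds(self):
        # The reported time is just a report of the count.
        return self.count / self.hz

clock = CounterClock()
clock.tick(32768 * 60)   # one minute's worth of oscillations
print(clock.seconds())   # 60.0
```

The oscillations are the artificial calibration marks; nothing in the mechanism consults any feature of time itself.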

Does a digital camera create information or does a human have to be involved for information creation?

Yes, a camera creates information. We know this because it is used to feed a communication channel. But, from our point of view, it is only syntactic information. When we look at the images, we have to add the semantics. The camera is doing a very mechanical operation. A perceptual system, by contrast, is categorizing parts of the world and gets the semantics from how it is doing that categorizing.

Is information an outcome of measuring something, possibly features?

Information, as I am using the term, is a reporting of categories.

Why is Harnard’s view of categories, as opposed to Rosch’s, important to your ideas? (I think Harnard is referring to a past controversy in psychology, but I want to stick to your ideas for now.)

Rosch’s view depends on comparison (as in comparison to prototypes). It would require an account of “similar” that is prerequisite to any possibility of categorization. We do not have such an account, as far as I can tell. I see comparison as dependent on categorization. We say X is similar to Y to the extent that we categorize them in the same way. We categorize the world in multiple ways. The more overlap there is between the ways that we categorize X and the ways that we categorize Y, the more similar they will seem to be.
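This reversal, with similarity derived from categorization rather than the other way around, can be sketched as overlap between the sets of categories that X and Y fall into. The example categories are invented for illustration:

```python
def similarity(cats_x, cats_y):
    """Similarity as the overlap between the ways we categorize X
    and the ways we categorize Y (Jaccard overlap of category sets)."""
    cats_x, cats_y = set(cats_x), set(cats_y)
    if not (cats_x | cats_y):
        return 0.0
    return len(cats_x & cats_y) / len(cats_x | cats_y)

cat = {"animal", "pet", "furry", "small"}
dog = {"animal", "pet", "furry", "medium"}
rock = {"small", "hard"}
print(similarity(cat, dog))   # 0.6  (3 shared categories of 5 total)
print(similarity(cat, rock))  # only "small" is shared
```

Here "similar" is computed *from* prior categorizations, so no independent account of similarity is needed beforehand.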

Why does the discussion of logic gates matter to your views on perception?

That was mostly to illustrate the point that categorization is prior to data.

Do AI learning techniques have any bearing on your ideas?

I have studied them. They are a continuing hot topic in cognitive science. But I have not found them particularly useful. They do not seem to correspond to how humans learn.

Understanding how science works has been far more important. Roughly speaking, I see science as using the same kind of methods as perception, but it is done in public where, at least in principle, we can see how it works.

Does action/movement have any role in your ideas?

Yes. It is fundamental. I already mentioned eye movement above. It all starts with trial and error to see what works. And our biology provides the way of deciding what works. If it helps us meet our biological needs, then that’s an indication that it works.

In terms of your ideas, what is the “god’s eye view” and how does your model oppose it?

It amounts to the view that categories are determined by reality rather than by humans. I cannot find any basis for that. When I look at how we categorize, I can see how it depends on our biology. But if it depends on our biology, then it is not human-independent. And I can see how some of our categorization depends on culture.

Even looking at dependence on biology, our biology does not dictate how we categorize, except perhaps in cases such as color. For the most part, biology dictates the underlying basis for pragmatic decision making. But, as a social species, culture also affects the decisions that we make. So biology and culture are both involved in how we categorize.

5. Neil Rickert:
“Concept” is one of those vague terms. It is hard to know what people mean when they use that word.

I’ll have more tomorrow, but on concepts and categories:

In the literature I read, concepts are internal to an organism and categories are external. Concepts are what are used by the organism to recognize and build categories. (Nothing in that is meant to imply a god’s-eye view of categories; if you think it does, please explain why.)

These are from the Barrett book that Klein podcast covers:

“Philosophers and scientists define a category as a collection of objects, events, or actions that are grouped together as equivalent for some purpose. They define a concept as a mental representation of a category. Traditionally, categories are supposed to exist in the world, while concepts exist in your head. For example, you have a concept of the color “Red.” When you apply this concept to wavelengths of light to perceive a red rose in a park, that red color is an instance of the category “Red.”* Your brain downplays the differences between the members of a category, such as the diverse shades of red roses in a botanical garden, to consider those members equivalent as ‘red.’”
— p 87

“Using your concepts, your brain groups some things together and separates others. You can look at three mounds of dirt and perceive two of them as “Hills” and one as a “Mountain,” based on your concepts. Construction treats the world like a sheet of pastry, and your concepts are cookie cutters that carve boundaries, not because the boundaries are natural, but because they’re useful or desirable.”
— page 27

6. If you are bored, how about a compare and contrast exercise? How do your ideas compare to Barsalou’s summary of embodied (grounded) concepts:

“Here I assume that a concept is a dynamical distributed network in the brain coupled with a category in the environment or experience, with this network guiding situated interactions with the category’s instances. The concept of bicycle, for example, represents and guides interactions with the category of bicycles in the world.

Across interactions with a category’s instances, a concept develops in memory by aggregating information from perception, action, and internal states. Thus, the concept of bicycle develops from aggregating multi-modal information related to bicycles across the situations in which they are experienced. As a consequence of using selective attention to extract information relevant to the concept of bicycle from the current situation (e.g., a perceived bicycle), and then using integration mechanisms to integrate it with other bicycle information already in memory, aggregate information for the category develops continually. As described later, however, background situational knowledge is also captured that plays important roles in conceptual processing.

Although learning plays central roles in establishing concepts, genetic and epigenetic processes constrain the features that can be represented for a concept, and also their integration in the brain’s association areas. For example, biologically-based neural circuits may anticipate the conceptual structure of evolutionarily important concepts, such as agents, minds, animals, foods, and tools.”

Cognitively Plausible Theories Of Concept Composition http://barsaloulab.org/Online_Articles/2017-Barsalou-chap-compositionality.pdf

[I’ve removed the citations and added some paragraphing]

7. By the way, thank you for this detailed questioning. It helps me understand where the communication problems are.

BruceS: In the literature I read, concepts are internal to an organism and categories are external. Concepts are what are used by the organism to recognize and build categories. (Nothing in that is meant to imply a god’s-eye view of categories; if you think it does, please explain why.)

Mostly okay. Maybe concepts don’t really exist. Perhaps they are something created by reifying based on observed behavior.

I actually did a series of related posts on my own blog, starting (I think) with Carving up the world. That was last year. And I probably used different terminology.

Roughly, the categories that we use are more-or-less observable by other people. The methods that we use to categorize are private. Those are not observable. I’m inclined to say that “concept” comes from reifying that unobservable behavior based on what we can guess from observable behavior. And it’s a vague term because it is trying to refer to what is private.

“Philosophers and scientists define a category as a collection of objects, events, or actions that are grouped together as equivalent for some purpose.”

Yes. I see that as based on the Rosch account of categories (the one that Harnad criticizes). And, by the way, it is “Harnad” not “Harnard”.

This seems to be the most popular view of categories. But I find it unworkable. And I think that disagreement about “category” is part of why it is difficult to communicate these ideas. In postings to my own blog, I have used “carving up the world” rather than “categorizing”, in an attempt to reduce that miscommunication. But “carving up the world” is an awkward phrase.

“They define a concept as a mental representation of a category.”

I have never liked that. If a concept is a representation, then I have no idea what it is supposed to represent. A lot of talk about concepts seems confused and confusing. We would probably be wiser to avoid talk of concepts.

“Using your concepts, your brain groups some things together and separates others. You can look at three mounds of dirt and perceive two of them as “Hills” and one as a “Mountain,” based on your concepts. Construction treats the world like a sheet of pastry, and your concepts are cookie cutters that carve boundaries, not because the boundaries are natural, but because they’re useful or desirable.”

That would make more sense if we replaced “concept” by “category” throughout that paragraph. And I would then mostly agree with it. “Hills” and “mountains” refer to different categories. But those are not disjoint categories. They overlap.

8. BruceS: If you are bored, how about a compare and contrast exercise? How do your ideas compare to Barsalou’s summary of embodied (grounded) concepts:

I’ll try. But it fits very poorly.

“Here I assume that a concept is a dynamical distributed network in the brain coupled with a category in the environment or experience, with this network guiding situated interactions with the category’s instances. The concept of bicycle, for example, represents and guides interactions with the category of bicycles in the world.”

As Harnad mentions, anything can be seen as a dynamical system. So the mention of “dynamical” does not seem at all useful.

Barsalou is obviously using the Rosch account of category. And that does not work at all well. So he sees a category as something like a container for instances. By contrast, I see a category as part of a step in dividing up the world. And an instance would then be a sub-category derived by an even finer dividing up of the world.

For my own study, I have been looking at what biological systems can reasonably achieve and what can plausibly evolve. Barsalou’s account does not fit well with that requirement.

I am reminded of user fifthmonarchyman who used to post here. He was using the Rosch version of categorization. He saw categorization as a kind of data compression. By describing in terms of categories instead of listing individuals, we reduce the amount of data used, which he saw as data compression. Barsalou’s version fits better with that than with how I am looking at things.

I look at science. The prevailing view, at least from philosophy, seems to be reductionist. We start with atoms, then explain everything in terms of those small parts. That is to say, we start with small things, then reduce the big things to the small things. But when I look at how science was actually done, it was the other way around. Scientists started with larger things. Then they subdivided into parts. They then further divided those parts into smaller subparts. My view of categorization really comes from that. We start with big things, because those are the easiest to perceive. Then we divide down to smaller parts. So we see the world in terms of intersections of categories rather than in terms of unions of sets.

By the way, I take a similar view of mathematics. These days, people will learn about foundations. And that usually starts with sets. Then the numbers are defined in terms of sets. And then the Peano axioms are derived from that. And we eventually get arithmetic. But that’s not how it actually happened. Arithmetic was around long before there were Peano axioms. And the Peano axioms were around before most of modern set theory. Again, we start with the bigger things, and divide down to get more detail. We do not start with detail and build up.

9. Neil Rickert:
Barsalou is obviously using the Rosch account of category

Thanks for the feedback, Neil.

Barsalou is not using the Rosch prototype theory of concepts. Rosch is a psychologist. Her prototype theory of concepts is covered from a philosopher’s perspective in the SEP article on concepts here
https://plato.stanford.edu/entries/concepts/#StrCon

There is lots of other material on Rosch’s prototype theory on the web. Her original experiments were done in the 70s, I believe, and remain influential.

AFAIK, Harnad’s CP idea has not had much interest from other cognitive scientists. I realize that is probably not an important criterion for you for evaluating Harnad’s ideas.

The rest of the Barsalou article covers how various approaches noted in the SEP fit into the grounded (embodied) basis he is trying to explain.

Unless there are actual differential equations, when I see “dynamical system” in philosophy papers on cognitive science, I take the phrase simply as code words for someone who favors an approach based on how the brain/body works, rather than an approach based on representations as abstract and symbolic. So in the philosophy I’ve read, the phrase just tends to identify the writer’s preference for the importance of considering embodiment and actual neuroscience.

That is what Barsalou favors. He has a lab conducting experiments to try to justify his simulation approach. The simulation approach can be used to characterize PP, at least at a high level, which is what Barrett does in her book.

I don’t agree with the characterization of cognitive science and philosophy in general in the rest of your post. But we have already been down that road enough.

10. Neil Rickert: I have never liked that. If a concept is a representation, then I have no idea what it is supposed to represent.

Well, that question has led to many philosophy PhD theses and many books!

It’s come up often in posts by me and KN. I’ll just summarize my understanding: trying to answer the question using naturalism and without circularity requires focusing on problems of the organism, which I asked you about in the Sandbox thread. The organism problems are taken to be what you said there plus the challenge for the organism to propagate its genes somehow. Those problems then yield solutions that give norms for determining what is represented either by current needs of the organism to stay alive or by past fitness of the organism’s genes or by some combination of those two ideas.

11. Neil Rickert: That would make more sense if we replaced “concept” by “category” throughout that paragraph.

I think concepts inside the organisms versus category outside helps separate the map from the territory in the theorizing about how organisms successfully live and reproduce in their niche.

And I would then mostly agree with it. “Hills” and “mountains” refer to different categories. But those are not disjoint categories. They overlap.

That is an important point; it is one reason why the necessary and sufficient condition “classical” approach to defining categories or concepts has generally been abandoned, as described in SEP article I linked above. Barrett thinks humans at least use their goals to dynamically establish the scope of features represented by the concepts of the category. There is then a PP story about how the brain/body attempts to satisfy a goal.

Roughly, the categories that we use are more-or-less observable by other people. The methods that we use to categorize are private. Those are not observable.

I agree with that. It relates to the discussion we had with CharlieM (I think) in his thread on the nature of concepts. As was discussed there, humans align concepts to the world through language, as well as through various learning techniques which are also available to animals and babies.

Taking concepts as mental representations leads to the issue of standards for correctness of representation, and into the theory of representation in my upthread post.

So for me mental representations are the brain/body implementation mechanism used to realize behavior we attribute to the organism possessing concepts it uses to categorize its niche in ways to solve its problems. Mental representations are a theoretical (unobservable) entity in (cognitive) science, so reifying them relates to being a scientific realist. Again, no more on that here.

The theories of mental representation that make sense to me are the embodied, grounded theories that start with modal representations and build concepts for abstractions (including language-based ones) from there. Here ‘modal’ relates to modes of perception, not modal logic.

12. Neil Rickert: This does not require that we know what direction the eye is pointing (external status). It only requires that we know how we are changing that direction (internal status).

I agree with that. The actions of saccades are an important part of PP approaches to action/perception.

But I still don’t understand how measurement fits into your ideas of perception and cognition. For example, when I think of using a ruler, I see:
1. a conventional standard, like centimeters
2. an instrument which realizes this standard so that we can apply it to the world
3. skilled usage of the instrument, e.g. by aligning marks appropriately
4. interpretation of the instrument’s reading.

How do these ideas apply to your ideas of perception, where I assume there is no instrument beyond the brain/body itself, including its sensorimotor systems? If you think step 4 matters in that case, how do you avoid a homunculus?

13. Neil Rickert: I’m going by the Shannon view of information as what is transmitted in a communication channel.

For me Shannon information used for communication channels between humans involves
– a human with a message to be communicated to which they have assigned a meaning
– encoding the message using a set of symbols based on a language
– the probability distribution for the symbols in the language
– transmission over the possibly noisy channel
– the probability distribution for the noise inserted by the channel
– decoding the transformed message into the same set of symbols and language
– the meaning assigned by the human receiver

How do these ideas fit into your use of information, eg where are the probability distributions and where is the channel in relation to the niche and the sensorimotor and cognitive systems of the organism?

Later in the post you mention semantics. In the Shannon model above, meaning is assigned by the humans. But if you want to include semantics in your model, you have to avoid homuncular circularity, I think. How do you do that?
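The channel steps BruceS lists can be sketched as a toy binary channel. The message, the 8-bit encoding, and the flip probability are invented for illustration; the probability distributions he mentions are what a real Shannon analysis would quantify:

```python
import random

def transmit(bits, flip_prob=0.1, seed=42):
    """Send bits over a noisy channel that flips each bit
    independently with probability flip_prob."""
    rng = random.Random(seed)
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

# Encode a message as bits, send it, decode at the other end.
message = "hi"
encoded = [int(b) for ch in message for b in format(ord(ch), "08b")]
received = transmit(encoded, flip_prob=0.0)  # noiseless channel
decoded = "".join(chr(int("".join(map(str, received[i:i + 8])), 2))
                  for i in range(0, len(received), 8))
print(decoded)  # hi
```

Note that the meaning of "hi" appears nowhere in the pipeline; the channel only moves categorized signals (bits), which is the point at issue about where semantics comes from.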

14. Last questions for today:
Do the ideas of top-down versus bottom-up processing in sensorimotor systems fit into your ideas?

I read you as saying there are fundamental errors in cognitive science (including the philosophical aspects of it). Where do you see the fundamental conflicts between your ideas and cognitive science?

15. BruceS: 1. a conventional standard, like centimeters

I take a piece of string, and I tie some knots in it. Maybe I paint the knots with different colors. The knots do not need to be equally spaced.

That’s my ruler. It becomes my standard. It is only conventional, if you allow there to be private conventions.

I can use it to measure. I align it to my window. One end (the well worn end) is aligned to one edge of the window. And I notice that the red knot turns out to be at the other edge. I take that string with me and go to the store to find a window shade that will fit.

I cannot do arithmetic with the measurements. I cannot call the store on the phone and give the measurement, because this is a private measuring system. But I can still compare lengths for myself.
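The knotted string amounts to a purely ordinal scale: lengths can be compared, but no arithmetic on them is possible. A minimal sketch, with the knot names and their order invented for illustration:

```python
# A private ruler: named knots in a fixed order along the string.
# The knot positions are not evenly spaced and are never used as
# numbers; only the order of the knots matters.
knots = ["worn end", "blue", "red", "green"]

def compare(knot_a, knot_b):
    """Compare two lengths by which knot each one reaches.
    Returns -1, 0, or 1. No arithmetic on lengths is possible."""
    a, b = knots.index(knot_a), knots.index(knot_b)
    return (a > b) - (a < b)

# The window reaches the red knot; the shade reaches the blue knot.
print(compare("red", "blue"))  # 1  (the window is longer)
```

Because the scale is private, a comparison result is meaningful to its owner but cannot be communicated as a number, which is the phone-call problem described above.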

But you have to start with a different question. What is length?

Maybe the length of everything doubles each day. But because the length of our rulers also doubles, we would never notice. When we invented rulers (or similar), we invented the property of length. Length is a human artifact, because it couldn’t be anything else. Maybe we have tried to invent all sorts of other properties, and length happens to be one that worked well.

As I walk around, I carry some natural rulers with me. I can use my hand, my arm, the span of my fingers as improvised measuring rods. Any animal could do something similar.

Yes, as we grow up, those natural rulers will change in length. Or maybe those natural rulers stay the same length but the world shrinks. If we were not part of a society, it would make no difference which of those we assumed.

There are no natural standards. The only way that we can have standards, is by inventing them, and then persuading the community to adopt those standards.

The literature seems to report that the brain contains arrays of neural structures that look almost identical to each other. These structures could be something akin to calibration marks on a ruler. I don’t know that they are, but it is a significant possibility.

I look at a blank sheet of paper. There are features I can use to fix where I am looking. For example, there are edges and corners. But to see what is between those, I need some way of interpolating. We use a ruler for that if we want high precision. Perception also needs a way of interpolating. So it needs something comparable to divide up the page. If the page is blank, it does not need high precision for this. But even to determine that the page is blank, it needs to be able to scan through that page.

16. BruceS: How do these ideas fit into your use of information

There can also be internal communication, from one part of a system to another. So I look at neurons as generating and transmitting internal information.

In a computer, the bits look the same whether they came from the keyboard, the mouse or the disk drive. The meaning is where they came from. For information within the brain, the meaning is going to be where that came from. And that’s pretty much to do with the details of how the categorization is done.

… where are the probability distributions …

Those are only needed for a theoretical analysis. People get ad hoc communication systems working by trial and error tweaking until they have satisfactory results.

17. BruceS: Do the ideas of top-down versus bottom-up processing in sensorimotor systems fit into your ideas?

In a way. But that analysis doesn’t work very well. We use complex mixtures of both — whatever works.

But maybe this is a good place to jump off at a tangent.

There are two different problems of perception.

problem A

It is an interesting world out there. We know a lot about that world from our own experience. The problem for cognitive science is to work out how a perceptual system can perceive that world.

problem B

The outside world is totally mysterious. The problem for cognitive science is to work out ways that a perceptual system can try to make sense of that initially mysterious world.

It seems to me that most of cognitive science is trying to solve problem A. I am trying to solve problem B. As I see it, problem B is the problem faced by a newborn baby. And the baby cannot get help from society until it has solved problem B well enough to be even able to perceive that society.

18. BruceS: Taking concepts as mental representations leads to the issue of standards for correctness of representation, and into the theory of representation in my upthread post.

After reading that post of yours, I am still back with the same issue. If concepts are representations, then what do they represent? Nobody ever explains this.

And now you want to bring up standards of correctness for representations. But until we know what they represent, how can we talk of a standard of correctness? And if concepts are internal, how can we ever know what they represent?

Maybe this is obvious and something that I am just missing. But, in that case, I have been missing it for all of my life.

19. Human psychology plays an important role in this conversation. The way we approach the world and learn is through association. Even when we look at a cloud our brain will try to see something it can associate with, be it a dragon or a big teddy bear. We call this pareidolia, and it’s pervasive in humans.

I strongly suspect that categories come from our brain needing to make associations. Categories are very often arbitrary, but are still very useful for human communication because we are all association machines.

20. Neil, you write,

There is nothing to copy. The world in itself does not come to us already categorized.

However, you say it comes to us already “featured,” that we use features to discriminate. Do we simply copy features–or do we process them?

21. walto: However, you say it comes to us already “featured,” that we use features to discriminate.

What’s a feature depends very much on us. Birds would not notice some of the features that are important to us, and we are probably unaware of features that are important to birds.

For that matter, there are features of spoken English that we use to categorize into phonemes, but which most Japanese people are unable to detect because their own language learning alters their perception of sound (as does ours).

22. Neil Rickert: It seems to me that most of cognitive science is trying to solve problem A. I am trying to solve problem B. As I see it, problem B is the problem faced by a newborn baby. And the baby cannot get help from society until it has solved problem B well enough to be even able to perceive that society.

Problem B is “solved” by biological evolution. There is no obvious demarcation between what is learned via evolution vs what is learned via experience. The difference is that one is inherited, but it’s pretty difficult to categorize individual instances. It strikes me that we loosely call one talent or instinct, and the other learning, but the line isn’t clear.

23. Neil,

If concepts are representations, then what do they represent? Nobody ever explains this.

They represent their referents. The concept ‘truck’ represents trucks, the concept ‘animal’ represents animals, and the concept ‘murder’ represents certain types of killing.

24. Neil, in Sandbox:

There seems to be a massive failure to understand the problems that perception must solve.

I know that, according to many AI people, when a photon hits a retinal cell, that’s assumed to be data. But it isn’t, and it couldn’t be.

As best I can tell, that view of perception amounts to:

(1) Assume perception.
(2) Using that assumed perception, determine which way the eyes are pointing.
(3) Get data from stimulated retinal cells.
(4) Subtract the direction information from step (2). You now have data about the world. Construct perception with that data.

It is totally circular. It could not possibly work.

It isn’t circular. Your mistake is to lump two separate abilities together in the single category ‘perception’, then to treat that category as all-or-nothing.

Determining which way the eyes are pointing is one kind of perception. Processing retinal input is another. The latter depends on the former, but not vice-versa. There is no circularity.

25. keiths: The concept ‘truck’ represents trucks…

Good point. Nobody in the world is in doubt about what a truck is and all agree on the boundaries of the category.

26. Neil Rickert: I take a piece of string

That is how we humans use tools to measure.

I thought you were using the word ‘measurement’ as an analogy for some aspect of perception as it occurs in any organism. But to use a tool, we already need to perceive. So just describing how we use tools does not explain how I understood you were using measurement to give an explanation for a mechanism for perception.

So I now suspect I completely misunderstood you.

Are you advancing a theory for perception? If so, does it involve an analogy of measuring that does not use any external instruments or standards but instead uses only the brain/body of the organism? That is, it uses only what evolution and development bequeath an organism for succeeding in its niche.

I agree that human cultures invent and adopt standards. But I am wondering about pre-cultural perception as it occurs in all organisms. Are your ideas about that at all?

27. “Categorization just means dividing up into parts.”

Really? Nope, it is grouping by a given property. Categorization is comprehension of properties.

Ok, nothing worth seeing here.

28. Neil Rickert: There can also be internal communication, from one part of a system to another. So I look at neurons as generating and transmitting internal information.

I am fine with that as long as we stick with Shannon information. It’s standard neuroscience and there are experiments which measure things and which estimate probabilities based on that measurement and then talk about eg Shannon mutual information.
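As a concrete toy version of that kind of estimate (a minimal sketch of the naive plug-in estimator; real neuroscience work corrects for sampling bias, which this does not):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate Shannon mutual information I(X;Y) in bits
    from a list of observed (x, y) pairs, using empirical
    frequencies as the probability estimates."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi

# A perfectly correlated channel carries 1 bit per symbol:
print(mutual_information([(0, 0), (1, 1)] * 50))  # 1.0
```

With perfectly correlated symbols the estimate is 1 bit; with independent symbols it drops to 0.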

For information within the brain, the meaning is going to be where that came from. And that’s pretty much to do with the details of how the categorization is done.

I find that vague. Do you have anything more detailed for how meanings come from an environment for an organism. For example:
– is there something in the organism that means and if so how does it acquire that property of meaning?
– Or if meaning is somewhere else, where is it?
– Or if you think the above two questions are in error because meaning is not a concrete object (ie because they reify meaning), then how is the word meaning to be applied to explain a natural process?

Those [probabilities] are only needed for a theoretical analysis. People get ad hoc communication systems working by trial and error tweaking until they have satisfactory results.

Well, sure, but the reason Shannon is famous is because he gave a theory to understand how actual communication channels work. That theory let telecommunication companies build optimal, error-correcting communication. Companies did not tweak long distance call infrastructure or DVD error correction codes into existence. They built them based on a detailed theory.

I am trying to understand if you have a theory of perception. If your theory references Shannon information, then I think you need to be specific about probabilities. Otherwise you just have a vague analogy which uses the word ‘information’.

Of course, you can say perception arose through evolution, which is tweaking in some sense. But I am asking how perception works, not how it came about.

29. Neil Rickert: We know a lot about that world from our own experience.

[….]
The outside world is totally mysterious. The problem for cognitive science is to work out ways that a perceptual system can try to make sense of that initially mysterious world.

It seems to me that most of cognitive science is trying to solve problem A. I am trying to solve problem B. As I see it, problem B is the problem faced by a newborn baby. And the baby cannot get help from society until it has solved problem B well enough to be even able to perceive that society.

Bayesian approaches recognize both problems and provide a combined solution to both. Priors model our experience. That model predicts the bottom-up neural patterns expected from perceptual and proprioceptive inputs, including neurally-encoded feedback from muscles during action. The neural realization of that prediction IS perceiving. (See this Seth video for a high-level explanation of that claim). The Bayesian updating process is how we update our models to ensure our success as organisms, or how we act in order to align perception with our model.

You keep claiming failures in cognitive science. But you don’t seem to be acquainted with basic research programs in cognitive science. I can list some material that I found helpful if you are interested in learning.

30. keiths:

The concept ‘truck’ represents trucks, the concept ‘animal’ represents animals, and the concept ‘murder’ represents certain types of killing.

Alan:

Good point. Nobody in the world is in doubt about what a truck is and all agree on the boundaries of the category.

Unanimity isn’t a requirement for representation.

31. Neil Rickert: After reading that post of yours, I am still back with the same issue. If concepts are representations, then what do they represent? Nobody ever explains this.

And now you want to bring up standards of correctness for representations.

Those questions cover many topics which I have pontificated about in other threads.

I don’t have anything original to say in those posts. Instead, I start by trying to understand how cognitive science (including philosophy) addresses those topics. Then I try to come to reflective equilibrium on how those ideas could fit together while simultaneously meeting my intuitive biases for non-reductive naturalism, pragmatism, and scientific realism. Needless to say, I am not quite at that balanced equilibrium yet!

Here are the topics I see in your questions along with quick summary of the relevant ideas. I can expand on any if you want.

1. Perception as (mental) representation: predictive processing tracks the niche’s causal structures that are relevant to an organism’s success by creating neural maps with a structure that bears a second-order similarity to the niche’s causal structure.

2. But causal tracking alone fails to explain standards for “correct” perception: To that causal mechanism, add this:
The content of a representation is correct when it solves the organism’s current problem of staying in homeostasis and/or when it solved the organism’s genetic ancestor’s problem of passing on genes (ie being relatively fit).

3. Concepts as mental representations: ‘Concept’ is a theoretical term meant to capture an organism’s behavioral dispositions — to act in certain ways depending on its environmental context. Concepts are embodied as sets of linked neural patterns which model Bayesian priors for perception linked to neuromuscular responses. Concepts are grounded modally (in modes of perception, that is). So the neural maps are maps of neural sensory responses. Abstract concepts are built from that embodied grounding of perception and action in niche (hence solving symbol grounding issue).

4. Human culture generates standards for judging correctness: Human interaction somehow grounds a different level of standards, eg through language. That is what I think embodied linguistics or Eliasmith’s semantic pointers might capture. (Also KN’s first book!). But I don’t have much detail here because I am not aware of any detailed explanations in cognitive science which explain linking individual organism standards to community, cultural standards. Sellars said some inspiring things in this area, but details are still a work in progress.

5. Mechanisms as scientific explanations: The preceding mixes psychology (mental representation) with neuroscience (neural maps). I think the two can be linked by mechanisms as they are described in modern philosophy of science.

6. Mental representations are real. That claim is based on the no miracles argument for scientific realism and also my acceptance of non-reductive naturalism. Real patterns might be part of that realism.

32. keiths:

Alan:

Unanimity isn’t a requirement for representation.

Consensus on what a category is and the word to use for it might be a start.

33. Alan,

Consensus on what a category is and the word to use for it might be a start.

Neil’s question was about what concepts represent:

If concepts are representations, then what do they represent? Nobody ever explains this.

They represent their referents. The concept ‘truck’ represents trucks, the concept ‘animal’ represents animals, and the concept ‘murder’ represents certain types of killing.

34. BruceS: Are you advancing a theory for perception? If so, does it involve an analogy of measuring that does not use any external instruments or standards but instead uses only the brain/body of the organism?

I thought I had been clear that I consider measurement just a particular use of categorization. Organisms categorize. Whether they do that categorization in a systematic way that we would consider to be measurement, I do not know.

Apart from the piece of string, I also mentioned using hands, arms, etc. Any animal with an ability to do stereoscopic vision is implicitly using the distance between the eyes as a baseline standard.

You need a standard for categorization, not just for measurement. Without a standard, what I categorize as a cat today, I might categorize as a banana tomorrow. A private standard might suffice, but it will still be needed.

35. BruceS: Bayesian approaches recognize both problems and provide a combined solution to both.

I am doubting that.

Ben Franklin noticed lightning. Maybe he noticed flashing while combing his hair. He sent up some kites.

How does he apply Bayesian methods to get from there to electromagnetic theory?

Priors model our experience.

How do you start before there is experience?

36. BruceS: Here are the topics I see in your questions along with quick summary of the relevant ideas. I can expand on any if you want.

1. Perception as (mental) representation:

I don’t know where you are seeing that in my posts. I’ve been looking at perception as getting information. I don’t think I have mentioned mental representations. If anything, I am skeptical of the adjective “mental” as it is used in philosophy.

predictive processing tracks the niche’s causal structures

I don’t think I have said anything about causal structures, either.

The content of a representation is correct when it solves the organism’s current problem of staying in homeostasis and/or when it solved the organism’s genetic ancestor’s problem of passing on genes (ie being relatively fit).

I don’t agree with that.

A representation is correct when you have specifications for forming representations, and the actual representation is done in accordance with those specifications.

But that seems unrelated to the earlier discussion. It looks to me like a change of subject.

3. Concepts as mental representations: ‘Concept’ is a theoretical term meant to capture an organism’s behavioral dispositions …

If “concept” is a theoretical term, then I don’t know what theory you are discussing. I was using categories. It was you who introduced concepts. I would prefer to leave concepts out of the discussion.

37. Alan Fox: Good point. Nobody in the world is in doubt about what a truck is and all agree on the boundaries of the category.

Quite a few automakers depend, for their continued prosperity, on no one asking where the boundaries are.

38. petrushka: Quite a few automakers depend, for their continued prosperity, on no one asking where the boundaries are.

I’ll have no truck with that. Trucks are the same everywhere and not at all like lorries, camions or 货车. 🙂

39. Neil Rickert: I don’t know where you are seeing that in my posts

Sorry, I thought you were asking about a summary of what I believed. I don’t think you have used mental representations and many of the other ideas in my post.

40. Neil Rickert: You need a standard for categorization,

OK, thanks. I did have a misunderstanding about measurement: I had thought it was the name for a new idea you had.

Years ago when I first asked you about your ideas, you recommended I read a math text; it was senior undergrad text on abstract algebra as I recall. Based on that and subsequent posts, I thought you had some kind of mathematical model for perception. In that model, I thought you were giving a simplified explanation of the math by referring to measurement, with that concept meant to indicate some special mathematical model you had developed.

I think most psychologists and neuroscientists will agree that we animals categorize in some sense by using categories that are specific to our sensorimotor systems applied to our niche. (For humans, that niche includes our culture and language).

If standards just means the way organism-specific systems “carve reality”, then they will agree with that too.

Many, I think, will also use internal representations as part of the theory explaining how animals with brains do that.

The concept of standards can mean something different from that organism-specific carving. It might also refer to how to separate representations from misrepresentations since the basic causal explanations in science cannot do that separation on their own. I think that work on that type of standard is mainly limited to philosophers and to the scientists who have stepped back from their problem-solving research to think more deeply about the concepts used in it.

Thanks for taking the time to explain your ideas.

41. BruceS: Years ago when I first asked you about your ideas, you recommended I read a math text; it was senior undergrad text on abstract algebra as I recall.

It would require mind reading to be sure, but I think you are referring to this one:
“Rings of Continuous Functions”, Gillman and Jerison.

It is a graduate text. It is more likely that I suggested you not actually read it. You would find it very technical.

The basic idea is that if you know the structure of the system of continuous functions on a suitably constrained topological space, then you can reconstruct that topological space from what you know about the functions.

I see it as giving an account of the principles whereby reality can emerge from developing ways of interacting with reality. So, starting with what Kant considered an unknowable world in itself, you could find out about that world. It would amount to constructing a world that is consistent with those interactions.
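For the record, the precise result I have in mind is the Gelfand–Kolmogorov theorem: for a compact Hausdorff space $X$, the points of $X$ correspond exactly to the maximal ideals of the ring $C(X)$ of continuous real-valued functions, so the ring determines the space. (Gillman and Jerison treat much more general spaces.)

```latex
X \;\cong\; \operatorname{Max} C(X), \qquad x \;\longmapsto\; M_x = \{\, f \in C(X) : f(x) = 0 \,\}
```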

I think most psychologists and neuroscientists will agree that we animals categorize in some sense by using categories that are specific to our sensorimotor systems applied to our niche. (For humans, that niche includes our culture and language).

But that tells us exactly nothing.

The concept of standards can mean something different from that organism-specific carving. It might also refer to how to separate representations from misrepresentations since the basic causal explanations in science cannot do that separation on their own.

I’m saying almost the opposite of that.

I am saying that the distinction between representation and misrepresentation does not exist until there are standards. And when we invent standards we are, in effect, inventing truth. That is to say, once we have standards, then the distinction between representation and misrepresentation comes into existence in the form of the distinction as to whether the standards were followed.

I think that work on that type of standard is mainly limited to philosophers and to the scientists who have stepped back from their problem-solving research to think more deeply about the concepts used in it.

When I look at science, I see the developing of standards as fundamental.

42. Neil Rickert:
But that tells us exactly nothing.

Well, not much, I agree. The reason is that I cannot capture the details of the theories in a paragraph. The theories have the details of mechanisms and of experiments to justify their claims and they are part of an active research program involving many scientists.

comes into existence in the form of the distinction as to whether the standards were followed.

I agree with that so far. But then the issue is how to do that without circularity or homunculi.

That is, the description of standards for misrepresentation cannot rely on the views of the scientist to provide the standard for an organism. Nor can we use homuncular explanation such as the organism “judges”. The goal is instead to give a causal explanation which uses the organism’s interests only. Those interests are to solve the problems of homeostasis and of propagation of genes.

If truth is binary, then I don’t think truth needs to be part of the explanation of meeting those standards. Instead, some continuous measure of similarity might do. For example, for PP explanations one paper I have seen used the KL divergence between a representation distribution and the actual distribution (though the math is more complicated, so it does not demand that the organism know the actual distribution).
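For concreteness, here is a toy discrete version of that divergence measure (my own minimal sketch; the PP literature works with continuous Gaussian forms, where the formula looks different but the idea is the same):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) in bits, for two
    discrete distributions given as equal-length probability lists.
    Assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical distributions diverge by 0; mismatch is penalized:
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))
```

The divergence is zero exactly when the represented distribution matches the actual one, which is what makes it usable as a graded standard of correctness.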

The basic idea is that if you know the structure of the system of continuous functions on a suitably constrained topological space, then you can reconstruct that topological space from what you know about the functions.

Yes, that is the type of thing I had in mind. Can you use those ideas to provide a mechanism or explanation for perception and/or cogitation that applies to any animal? Without the abstract algebra itself, of course, at least for (most?) TSZ readers.

I see it as giving an account of the principles whereby reality can emerge

I think talk of the nature and source of reality is philosophical. I was looking for something scientific, eg a causal or functional mapping of the interaction of brain/body with environment, couched in the language of either neuroscience or psychology, if possible (not getting my hopes up on that last part, though!).

43. BruceS: That is, the description of standards for misrepresentation cannot rely on the views of the scientist to provide the standard for an organism. Nor can we use homuncular explanation such as the organism “judges”. The goal is instead to give a causal explanation which uses the organism’s interests only.

Right. So the organism invents its own standards. That’s where pragmatism comes in. The organism can use trial and error to find something that works. But pragmatism is not a standard — it does not give a true-or-false binary decision. There can be many possible choices that could work.

In turn, pragmatism depends on biology. What works boils down to what meets biological needs.

Instead, some continuous measure of similarity might do.

But then you need to come up with some way of determining similarity.

In normal everyday talk, similarity is a perceptual judgment. But if we are trying to explain perception, using that would be circular.

The compare instruction for computers is really just an arithmetic instruction. And it would have to be applied to representations. So you could not easily use that to explain how we choose ways of representing, nor how we choose standards.

If I look at a cat, a small dog of about the same size, and a Great Dane, then in terms of most of the properties we would use, the small dog is more similar to the cat than to the Great Dane. But we see the small dog as more similar to the Great Dane. So how we judge similarity is not easily explained using computationalism.

Yes, that [a reference to Gillman & Jerrison] is the type of thing I had in mind. Can you use those ideas to provide a mechanism or explanation for perception and/or cogitation that applies to any animal? Without the abstract algebra itself, of course, at least for (most?) TSZ readers.

The mathematics gets technical quickly. But the basic ideas behind them are simple enough. A neural system could do what is needed. So it is available to any animal that has enough neurons and sensors.

There’s a drawback, however. It can only give the topological structure of reality. That gets us back to the old mathematician’s joke “A topologist is somebody who cannot distinguish between a donut and a coffee cup”.

The way that we distinguish between a donut and a coffee cup goes beyond topological properties. It depends on metric properties. And, as best I can tell, metric properties all depend on pragmatic conventions.
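To state the distinction in standard terms: topological properties are those preserved by any homeomorphism, while metric properties are preserved only by the stricter class of isometries:

```latex
\text{homeomorphism: } f : X \to Y \text{ a continuous bijection with continuous inverse;} \\
\text{isometry: } d_Y\bigl(f(x), f(x')\bigr) = d_X(x, x') \text{ for all } x, x' \in X.
```

A donut and a coffee cup are related by a homeomorphism but not by an isometry, which is why the joke works.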

I think talk of the nature and source of reality is philosophical. I was looking for something scientific, eg a causal or functional mapping of the interaction of brain/body with environment, couched in the language of either neuroscience or psychology, if possible (not getting my hopes up on that last part, though!).

I don’t think that’s possible.

It is easy to find a causal physical mapping between a tree and the environment. But for animals, that cannot be done. The animal is mobile. Its causal connections with the environment are changing all of the time. The only way that I know to have such a causal connection, is for the animal itself to be in charge of maintaining that causal connection via the way that it interacts. And that’s where I look to behavior, particularly categorization behavior, as a way of maintaining such a causal connection.

44. Neil Rickert: So the organism invents its own standards. That’s where pragmatism comes in. The organism can use trial and error to find something that works. But pragmatism is not a standard

Trial and error within the constraints of an organism’s evolutionary heritage and subsequent development; these include the learning mechanisms that evolution made available to it. (Trial and error would not be a full description of all of the mechanisms by which we and some other animals learn, eg statistical learning by recognizing patterns.)

What scientific theories like PP provide is a detailed, testable description of those constraints and mechanisms including how they operate and how they could be realized by actual biology and biochemistry.

If you say “error”, then I think you must provide a standard for determining the amount of error.

45. Neil Rickert: But for animals, that cannot be done. The animal is mobile. Its causal connections with the environment are changing all of the time

Sure they are. But PP models that. Further, it explains how the animal can leverage those changing connections to become successful by appropriate action (including saccades). What is captured as causal structure are the invariances underlying those changes (specifically, those changes that matter to the animal in its niche, as determined by evolution and its individual development).

In other words, it is possible to build a theory according to the many scientists who use it. Whether it is a correct theory is an open question.

In normal everyday talk, similarity is a perceptual judgment. But if we are trying to explain perception, using that would be circular.

Exactly. Which is why PP does not involve everyday language and everyday judgement. The predictions and the error detection are all implemented as neural interactions using (idealized) models of neuron biochemistry. For example, one approach describes how neural connections and their weights can implement means and variances in the Gaussian distributions used in the math of that specific version of PP.
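As a toy illustration of how means and variances drive such an update (my own minimal sketch of a single precision-weighted Gaussian step, not taken from any particular PP paper):

```python
def update_belief(mu, var_prior, obs, var_obs):
    """One precision-weighted update of a Gaussian belief:
    the prediction error (obs - mu) is weighted by how reliable
    the observation is relative to the prior belief."""
    gain = var_prior / (var_prior + var_obs)  # precision weighting
    mu_post = mu + gain * (obs - mu)          # shift toward the observation
    var_post = (1 - gain) * var_prior         # belief becomes more confident
    return mu_post, var_post

# A noisy observation (var 1.0) nudges the prior only partway:
mu, var = update_belief(mu=0.0, var_prior=1.0, obs=2.0, var_obs=1.0)
print(mu, var)  # 1.0 0.5
```

The less reliable the observation (larger `var_obs`), the smaller the gain and the less the prediction error moves the belief; no everyday judgement is involved, just arithmetic on means and variances.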

46. Neil Rickert: It can only give the topological structure of reality. That gets us back to the old mathematician’s joke “A topologist is somebody who cannot distinguish between a donut and a coffee cup”.

That mathematical joke has become the basis of an approach to scientific realism: namely, some form of structuralism.

That approach is trying to solve the pessimistic meta-induction. That argument for anti-realism says: since scientific theories change, how can we claim that the current theory yields (approximate) truth about a mind-independent reality, when all past theories have been shown to be wrong?

The structuralist replies that though theories change, as long as a theory is mature enough, it will have captured something about the structure of reality that will persist in subsequent theory.

But as the joke says, all you can then talk about is the structure of reality as captured by the theory. In other words, just relations, not stuff. So there is nothing real about donuts or coffee cups; only the structural relations they both model are real.

A philosopher could use PP to justify structuralism, by saying that all PP claims to model are causal relations. In fact, I just read a book by one who does exactly that!

47. Neil Rickert: And that’s where I look to behavior, particularly categorization behavior, as a way of maintaining such a causal connection.

That reads to me as a way of saying that the animal does not need to represent the environment, since the environment is available for it to interact with. That idea is what enactivism supports and why it discounts representations.

But as per my exchange with KN on radical enactivism, I believe essentially all scientific research programs in psychology and many in neuroscience claim that empirical results cannot be explained without resorting to theories with persisting mental structures which are referred to as representations, in a sense captured by the details of the theory.

48. Neil Rickert: There’s a drawback, however. It can only give the topological structure of reality. That gets us back to the old mathematician’s joke “A topologist is somebody who cannot distinguish between a donut and a coffee cup”.

The way that we distinguish between a donut and a coffee cup goes beyond topological properties. It depends on metric properties

If you want to keep going:
Can you translate those mathematical ideas to talk of categories and features, and the organism and its environment?

In terms of a set, its subsets, their union and intersection:
– What are features and categories and conventions?
– What does an organism learn about the environment through trial and error?

– Why do individuals from language-less, culture-less animals apparently arrive at the same conventions for recognizing features and categories in an environment they share with others of their species?

– Why is a metric needed to explain what organisms do?
– Between which two types of entities does a metric provide a distance?

– You can relate this to Kant’s views if you want to do some philosophy: does constructing a world through successful learning and action depend on anything about the nature of reality (other than that it must support that success somehow)? This is Hoffman in disguise, since evolution is a form of trial and error learning.

49. BruceS: If you say “error”, then I think you must provide a standard for determining the amount of error.

When I mention “trial and error” I am using a common catch phrase. It would be a mistake to think I am talking about the ordinary meaning of “error”, as it would be used in other contexts.
