In a recent comment, BruceS suggested that I do an OP on categorization and measurement, so here it is. I'll try to keep this short, but later extend it with posts in the thread. If a post of mine in the thread begins with a bold title, it is intended to extend the OP. And I may then add a link, if I can figure out this block editor.
I'll start with a link to a paper by Stevan Harnad (this block editor is annoying).
Harnad’s paper is not quite how I am looking at it, but it gives a good introduction to the idea.
Red, blue or green? That’s a question about categorizing by color. We also categorize by size. Measurement is just a mathematical way of categorizing by length or by voltage or by pressure — by whatever we are measuring.
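To make the "measurement is categorization" idea concrete, here is a minimal sketch (my own illustration, not from Harnad's paper): a measuring instrument with finite precision effectively sorts a continuous quantity into discrete bins, and two quantities that fall into the same bin cannot be told apart by that instrument.

```python
# Illustrative sketch: a measurement assigns a continuous quantity to one
# of a set of discrete categories (here, 1 mm bins for length).
# The function name and bin width are hypothetical choices for illustration.

def categorize_length(length_mm, bin_width_mm=1.0):
    """Return the category (bin index) that a measured length falls into."""
    return int(length_mm // bin_width_mm)

# Two lengths that differ by less than the bin width land in the same
# category: this measurement cannot distinguish them.
print(categorize_length(42.3))  # -> 42
print(categorize_length(42.9))  # -> 42
```

A finer instrument just means smaller bins, i.e. smaller categories.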
We categorize as a way of getting information. Within science, information is often acquired by measuring, which is a type of categorizing. I can get information with a digital camera. The digital camera, in effect, categorizes the world into pixel-sized chunks and provides data for each of those.
What is categorization?
Categorization just means dividing up into parts. We divide up in accordance with features. Harnad discusses this in an appendix near the end of the linked paper.
There’s an alternative view of categorization, suggested by Eleanor Rosch, where categorization amounts to grouping things in accordance with their similarity to some sort of prototype or family resemblance. Harnad looks at that in the appendix, and he does not agree with Rosch. I concur with Harnad on that.
Perception is a process of getting information about the environment. It works by categorizing, because categorizing is how information is generated. We perceive cats, dogs, and trees because those are categories. We perceive the dog's eyes, ears, and mouth because perception divides the larger categories into smaller categories.
We also perceive individuals. But, as best I can tell, an individual is only a very small category. We cannot perceive individuals in the logical sense of "individual". In the logical sense, X and Y are identical if "X" and "Y" are different names for the same entity. But, in ordinary life, we take X and Y to be identical if we are unable to distinguish them. That is, we take them to be identical if they are not separate categories. This is why we can be confused by identical twins. Family members are usually able to distinguish the twins, but most other people find it difficult.
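There is a loose analogy for this distinction in programming (my analogy, not the source's): Python's `is` checks whether two names refer to the same entity, the logical sense of identity, while `==` checks whether two things are indistinguishable by their contents, the everyday sense.

```python
# Loose analogy: logical identity vs. indistinguishability.
# "Identical twins": two distinct objects with the same contents.
twin_a = [1, 2, 3]
twin_b = [1, 2, 3]   # a separate object that looks exactly the same

print(twin_a == twin_b)  # -> True  (we cannot tell them apart by content)
print(twin_a is twin_b)  # -> False (they are nonetheless different entities)
```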
I drive my car to the store. By the time that I return, some of the rubber has worn off the tires. In a strict logical sense, it is now a different object. But I still see it as the same car, because I still place it in the same category.
We perceive a world of objects, because objects are categories and what we perceive are categories. A typical robotic AI system categorizes the world into pixels. And then it attempts to compute which sets of pixels correspond to objects. So the way that the AI system “sees” the world is very different from the way that we see it. This is probably why a self-driving Uber car killed a pedestrian. It could not track objects as well as we can.
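One standard way that a vision system groups pixels into candidate objects is connected-component labeling: adjacent pixels that share a property are merged into one region. Here is a hedged sketch of that idea on a tiny binary image; the names and the grid are illustrative, and real self-driving pipelines are far more elaborate than this.

```python
# Sketch: group 1-pixels of a binary image into "objects" by flood-filling
# 4-connected components. This illustrates the general technique, not any
# particular vehicle's perception stack.

def label_objects(grid):
    """Return the number of 4-connected components of 1-pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                count += 1                 # found a new object
                stack = [(r, c)]
                while stack:               # flood-fill this component
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1):
                            stack.append((ny, nx))
    return count

image = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
print(label_objects(image))  # -> 2
```

The point of the sketch: the system first categorizes the scene into pixels, and only afterwards computes which pixel-categories belong together as objects, the reverse of how our perception seems to work.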
I’ll stop here for now. I plan to add more in comments to this thread.