# Categorization and measurement

In a recent comment, BruceS suggested that I do an OP on categorization and measurement, so here it is. I’ll try to keep this short, but later extend it with posts in the thread. If my post in the thread begins with a bold title, then it is intended to extend the OP. And I may then add a link, if I can figure out this block editor.

Cognition is categorization

Harnad’s paper is not quite how I am looking at it, but it gives a good introduction to the idea.

Red, blue or green? That’s a question about categorizing by color. We also categorize by size. Measurement is just a mathematical way of categorizing by length or by voltage or by pressure — by whatever we are measuring.

We categorize as a way of getting information. Within science, information is often acquired by measuring, which is a type of categorizing. I can get information with a digital camera. The digital camera, in effect, categorizes the world into pixel-sized chunks and provides data for each of those.

## What is categorization?

Categorization just means dividing up into parts. We divide up in accordance with features. Harnad discusses this in an appendix near the end of the linked paper.

There’s an alternative view of categorization, suggested by Eleanor Rosch, where categorization amounts to grouping things in accordance with their similarity to some sort of prototype or family resemblance. Harnad looks at that in the appendix, and he does not agree with Rosch. I concur with Harnad on that.

## Perception

Perception is a process of getting information about the environment. It works by categorizing, for that is the way of generating information. We perceive cats, dogs, trees because those are categories. We perceive the dog’s eyes, ears, mouth because perception divides the larger categories into smaller categories.

We also perceive individuals. But, as best I can tell, an individual is only a very small category. We cannot perceive individuals in the logical sense of “individual”. In the logical sense, X and Y are identical if “X” and “Y” are different names for the same entity. But, in ordinary life, we take X and Y to be identical if we are unable to distinguish them. That is, we take them to be identical if they are not separate categories. This is why we can be confused by identical twins. Family members usually are able to distinguish the twins, but most people find it difficult.

I drive my car to the store. By the time that I return, some of the rubber has worn off the tires. In a strict logical sense, it is now a different logical object. But I still see it as the same car, because I still place it in the same category.

We perceive a world of objects, because objects are categories and what we perceive are categories. A typical robotic AI system categorizes the world into pixels. And then it attempts to compute which sets of pixels correspond to objects. So the way that the AI system “sees” the world is very different from the way that we see it. This is probably why a self-driving Uber car killed a pedestrian. It could not track objects as well as we can.

This entry was posted in Uncategorized by Neil Rickert.

Mathematician and computer scientist who dabbles in cognitive science.

## 90 thoughts on “Categorization and measurement”

1. BruceS: That approach is trying to solve the pessimistic meta-induction.

My own view is that scientific theories are neither true nor false.

We sometimes use “true” as an indication of approval. And in that sense, theories that we approve of can be said to be true. But they fall outside of our normal ways of judging truth by applying standards.

2. BruceS: That reads to me as a way of saying that the animal does not need to represent the environment, since the environment is available for it to interact with.

I see “representation” as a poorly defined term that is used inconsistently.

Does my front door lock contain a representation of the key that I use to unlock it? I would say “no”. But the lock does have to be able to match or recognize some features of the key.

3. Neil Rickert: What’s a feature depends very much on us. Birds would not notice some of the features that are important to us, and we are probably unaware of features that are important to birds.

For that matter, there are features of spoken English that we use to categorize into phonemes, but which most Japanese people are unable to detect because their own language learning alters their perception of sound (as does ours).

But, unlike the categories, the features are already there, waiting to be found?

4. walto: But, unlike the categories, the features are already there, waiting to be found?

What we call features are already there. But maybe they shouldn’t be considered features until we develop abilities to detect them as features.

5. Neil Rickert:
I see “representation” as a poorly defined term that is used inconsistently.

I agree. Yet, a lot can be learned from being careful about it, and/or trying to figure out what the user is trying to convey. Too often people mistake representations for the represented stuff.

Neil Rickert:
Does my front door lock contain a representation of the key that I use to unlock it? I would say “no”.

I would say yes. That would not be my first choice though.

Neil Rickert:
But the lock does have to be able to match or recognize some features of the key.

Yep. It does so by having a functional complement to the key.

6. Neil,

What we call features are already there.

You’re coming perilously close to admitting that reality is structured.

But maybe they shouldn’t be considered features until we develop abilities to detect them as features.

They are features of reality, so the word ‘feature’ is appropriate whether or not we can detect them.

7. Neil Rickert: I’ll try to reply to this one in detail over the next day or two.

Thanks. What I am asking you is basically the same question we ask EricMH about his claims that theories in KMI limit what stochastic processes can accomplish through the NS mechanism of evolution. Specifically, how does the math relate to the biology and behavior of living organisms?

One thing that occurred to me: your donut/coffee cup example was using an intuitive, visual example. But my question was based on my rudimentary understanding of axiomatics, coming from an introductory topology MOOC.

I do think you need to move to something more abstract than visual intuition, for at least two reasons:
– relying on visual intuition for shapes to explain seeing would be circular
– intuition about shapes needs to be more abstract to apply to the other modes of perception: hearing, smelling etc.

But if sets are too basic, feel free to use some more advanced abstractions. I don’t know anything about it, but it seems obvious that, in general, topological theory does not rely on visual intuition.

I did not ask about the approximating functions you also mentioned, but it would be great if you described where they fit into the feature, category, convention, and organism-versus-environment concepts that you use in other posts.

There is a PP version of what I am asking you. It involves (idealized) neural behavior modeled in Bayesian probabilistic networks which implement statistical learning via approximation techniques developed by Hinton, Friston, and others in the early 90s (formalized as variational Bayes). That has been applied both to population learning through the genome and evolution and to organism learning through the biochemistry of networks of neurons (mostly the latter AFAIK). There is even some work on applying PP to populations of agents interacting to create a culture.

8. Entropy: Too often people mistake representations for the represented stuff.

Of course, I am talking about representation as is used in scientific theory in psychology and neuroscience. Not as it might be used in casual exchanges in internet forums.

9. BruceS: Of course, I am talking about representation as is used in scientific theory in psychology and neuroscience. Not as it might be used in casual exchanges in internet forums.

Could you explain the difference?

10. Alan Fox: Could you explain the difference?

Some motivating comparisons, using the terms fitness and force of gravity as examples of terms used in internet forums versus in science:

Mung’s talk about fitness versus fitness as a mathematical parameter in population genetics. (I bugged Mung about this when he posted here, so I don’t feel guilty speaking about the departed this way)

Talk about the force of gravity versus how it is actually captured and subsumed by the General Relativity mathematics of geodesics in the abstract geometries of spacetime.

So for Predictive Processing or even any Bayesian brain theory, the representation is the math model of the prior as physically realized in the synaptic weights and learning behavior of idealized neurons and networks of neurons.

If you like the analogy of maps and territories, then the map is the neural pattern modeled by Bayesian prior and updating and the territory is the real world source of the stimulus. That analogy needs to be extended to avoid a static map and instead refer to a map being constantly updated by acting in the real world. Further, the organism’s goal is successful action; the map is not an end in itself, only a necessary intermediary for long term, successful action. And success involves its niche!
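The prior-and-updating picture described above can be sketched with a toy Bayes update. This is my own minimal illustration, not anything from the PP literature; the hypotheses, the sensor reading, and all the numbers are invented:

```python
# A toy Bayesian update: a prior "map" over two hypotheses is revised
# after a sensor reading ("bump"), yielding a posterior map.

prior = {"obstacle": 0.5, "clear": 0.5}

# Likelihood of observing "bump" under each hypothesis (invented values).
likelihood = {"obstacle": 0.9, "clear": 0.1}

# Bayes' rule: posterior proportional to prior times likelihood.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # {'obstacle': 0.9, 'clear': 0.1}
```

In the PP picture the posterior then becomes the prior for the next round of acting and sensing, which is what makes the map "constantly updated" rather than static.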

There are many other psychological and neuroscientific models that are said by practitioners to involve representations, but I’d have to do a bit of digging to provide examples.

If you enjoy podcasts, the Klein interview I linked upthread (or possibly in Sandbox) talks about concepts as mental representations and also gives a high-level intro to PP.

“We don’t just feel emotions. We make them.” (in the list at https://www.vox.com/ezra-klein-show-podcast)

There is also the philosophical issue of how a mental representation comes to be about something (AKA intentionality).

One starts with science, which provides causal explanation linking the neural pattern to some real world stimulus via perception and action.

But then philosophers claim that those basic causal models alone cannot explain the difference between representing and misrepresenting. I will stop there, but if you are a glutton for philosophical punishment, here you go

https://plato.stanford.edu/entries/content-teleological

In particular, look at intro and section 1 discussions of representing non-existing objects and the discussion of the normative nature of representations.

11. BruceS,
Apologies for not picking up on this, Bruce. I’ve been following as time permits on my phone but not had time to comment. Hope to rectify that this weekend. Briefly, I find Neil makes a fair bit of sense.

12. ## Perception as a function space

BruceS: Specifically, how does the math relate to the biology and behavior of living organisms?

I’ve been slow getting around to respond to this. But here goes.

As we look around, we can ascribe properties such as color or texture or distance to parts of what we see. To me, this seems like reporting the value of continuous functions. So I can think of perception as the ability to apply a repertoire of continuous functions to the immediate environment.

In mathematics, if we have a topological space X, then C(X) is the space of bounded continuous functions on X. Don’t be confused by the word “space”. It means much the same as “set”, except that we look at C(X) as possibly having some sort of structure beyond just membership of sets or subsets. In particular, if we are looking at continuous functions with numeric values, then we can add, subtract, multiply those functions. That makes C(X) an algebra (or a ring) in the terminology of mathematics.

The study of C(X) is usually part of functional analysis.
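As a toy illustration (my own sketch, not from any textbook) of why numeric-valued functions form an algebra: sums and products taken pointwise are again functions of the same kind. The two "measurement" functions below are invented examples.

```python
# Pointwise operations on numeric-valued functions: the results stay
# within the same family of functions, which is the algebra structure.

def add(f, g):
    """Pointwise sum of two functions."""
    return lambda x: f(x) + g(x)

def mul(f, g):
    """Pointwise product of two functions."""
    return lambda x: f(x) * g(x)

# Two invented "measurement" functions on a toy one-dimensional space.
temperature = lambda x: 20.0 + 0.1 * x   # degrees at position x
pressure = lambda x: 101.3 - 0.01 * x    # kPa at position x

# Any algebraic combination is again a function we can evaluate.
combined = add(mul(temperature, pressure), temperature)
print(combined(5.0))
```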

I should perhaps point out that if we number the colors, then we can think of the color function as giving a numeric result. So assuming that results are numeric is not really a restriction.

Similarly, if we look at science, we see that science can report (or measure) properties. So we can look at science as providing a family C(X) of continuous functions. Laws of science, such as Newton’s, then become algebraic relations within C(X).

Mathematicians tend to think in something like platonic terms. So they see C(X) as containing infinitely many functions, even if we have only ever used a small finite set of such functions. Limiting C(X) to the functions that we have used is not much of a restriction. The Weierstrass approximation theorem, or its generalization (the Stone–Weierstrass theorem), says that all such functions can be generated as limits from a small set of functions. The Weierstrass theorem itself says that you can approximate any continuous function as closely as you want with a polynomial function.
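The Weierstrass idea can be illustrated numerically. This is my own sketch: `np.polyfit` does a least-squares fit rather than the uniform approximation the theorem speaks of, but the shrinking worst-case error makes the same point.

```python
import numpy as np

# Fit polynomials of increasing degree to cos(x) on [0, 3] and record
# the worst-case (max absolute) error of each fit.
x = np.linspace(0.0, 3.0, 200)
y = np.cos(x)

errors = []
for degree in (1, 3, 5, 7):
    coeffs = np.polyfit(x, y, degree)          # least-squares polynomial fit
    approx = np.polyval(coeffs, x)
    errors.append(float(np.max(np.abs(approx - y))))

# Higher-degree polynomials track the function much more closely.
print(errors)
```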

I think of this as a useful mathematical model. Our perception gives us these continuous functions that we can apply to the world X. And, as perceivers, we use those to attempt to find out as much as we can about the world. If we go by the Kantian view, that we have no direct access to the world in itself, then what we can determine using the functions in C(X) is all that we can actually know about the world.

There is a well known theorem in functional analysis that, given suitable restrictions, the algebraic structure of C(X) determines the topological structure of X. I usually mention the Gillman & Jerison book for this, as it is one of the first sources. But almost any modern functional analysis textbook will have a proof of this, and newer proofs are probably easier to read than the earlier Gillman & Jerison proof. Unless you are into the mathematics, you don’t really need the proof.

The basic idea of the proof is to construct a topological space Y, using the algebraic properties of C(X), and then to prove that Y is homeomorphic to X. Here “homeomorphic” means the same as “topologically equivalent”.

It seems at least plausible that perception is actually constructing the reality that we perceive. The brain need not be doing the fancy mathematics. That is only needed if you want to prove that the constructed Y is homeomorphic to the original X.

13. ## Empiricism and perception

This is a continuation of the previous post.

If we consider the continuous functions on the topological space X (which I am taking to be the world), then a fact coming from those functions will look something like:

a1 < f(x) < a2 implies b1 < g(x) < b2.

For example, the function f might report the location (geographic coordinates) while the function g might report temperature. I use a range of values because measurements or observations are never exact. A small range is the best that we can do. Here a1, a2, b1, b2 are just numeric values. So this fact would say that it is a particular temperature or temperature range at a particular location or range of locations.
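A fact of that interval form can be sketched as a predicate over paired measurements. This is my own illustration; the location and temperature functions and all the thresholds are invented:

```python
# A "fact" as an implication between interval-valued measurements:
# if f(x) lies in (a1, a2), then g(x) lies in (b1, b2).

def fact(x, a1, a2, b1, b2, f, g):
    """True at x unless f(x) is in (a1, a2) while g(x) falls outside (b1, b2)."""
    if a1 < f(x) < a2:
        return b1 < g(x) < b2
    return True  # the implication holds vacuously outside the f-range

f = lambda x: x            # location (one-dimensional for simplicity)
g = lambda x: 20 + x / 10  # temperature profile over location

# "Between locations 0 and 10, temperature is between 20 and 21 degrees."
print(all(fact(x, 0, 10, 20, 21, f, g) for x in range(-5, 15)))  # True
```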

We cannot describe a fact as being of the form f(x) = y because we don’t really know what the x values are. They are points in reality (points in the world in itself), but because we do not have direct access to the world in itself we cannot identify actual points.

According to the mathematical theory, we construct X based on the algebraic structure of C(X). According to traditional empiricism, our knowledge is derived from facts. These seem contradictory. Doubtless we use facts to fill in factual details, but we depend greatly on the structure of C(X).

If you look at a physics textbook, you will see much the same. The book will derive a great deal about reality using only laws of physics (which amount to the algebraic structure of C(X)). The physics book occasionally looks at actual facts to illustrate or to fill in details. But much of our knowledge of the physics of reality comes from the algebraic structure of C(X).

According to some views, the laws of physics are arrived at by induction. I cannot see much evidence of this. If you started with Aristotle’s conception of motion, added newer instrumentation, and attempted induction methods (Bayesian methods, for example), you would never get Newton’s laws. All of the factual evidence that an Aristotelian might produce would argue against Newton’s laws. To get to Newton, you needed different concepts, or different functions. You had to change the structure of C(X), and you could not do that using only facts. Similarly, you could not get from Newton’s laws to Einstein’s relativity with induction based on facts. This is why Kuhn made the case for paradigm shifts. And this is why I am a skeptic of Bayesian epistemology.

14. ## Topology and categorization

Another continuation.

Topology is about continuity and closeness. Roughly speaking, to say that f is a continuous function is to say that f(x) is close to f(y) if x is sufficiently close to y. But it avoids use of metric properties such as distance functions. A topology is typically defined in terms of a family of open sets. Intuitively, x and y are close to the extent that they are both in many of the same open sets.

If we have a continuous function f, then

{x : a < f(x) < b}

is an open set. It is also a category, defined by the criterion a < f(x) < b.
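As a minimal sketch of that idea (the function and thresholds below are invented for illustration), the criterion defining such an open set doubles as a membership test for a category:

```python
# The preimage {x : a < f(x) < b} of an open interval under a continuous
# function f, treated as a category with a membership test.

def in_category(f, a, b):
    """Return a membership test for the category a < f(x) < b."""
    return lambda x: a < f(x) < b

# f reports temperature at location x (a made-up linear profile).
f = lambda x: 15.0 + 10.0 * x

warm = in_category(f, 20.0, 30.0)  # the "warm" category
print(warm(0.7), warm(2.0))        # True False
```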

I see categorization as a way of getting at the topological structure of reality.

A topological space need not be a single continuum. It could consist of several continuums. When we see separate objects, we can think of each object as something like a separate continuum. Recognizing a world with objects is recognizing topological structure.

Properties such as shape and curvature are metric properties. So they are geometric properties, but they are not topological properties. If we look at the space C(X) of continuous functions on X, then the metric functions are included there. But there are possible metrics that are very different from the ones that we use. A simple example would be to use the distance measured on a Mercator map. That would give us a flat earth. But we would grow taller as we moved toward the north or south poles.

There are very good pragmatic reasons to prefer the metrics that we do use. But I cannot see any reason, other than pragmatics, to prefer them. And what we consider pragmatic is going to depend on our biology.

15. Neil,

You’re contradicting yourself again on the issue of structure in the real world.

Today:

I see categorization as a way of getting at the topological structure of reality.

September 20th:

There are no structures in the world.

16. keiths: You’re contradicting yourself again on the issue of structure in the real world.

Actually, I’m not.

That was about topological structure, which is mathematical structure.

Please remember that I am a fictionalist with respect to mathematics.

17. I see.

So topological structure doesn’t exist, but reality has this same nonexistent structure and categorization is our way of “getting at it”:

I see categorization as a way of getting at the topological structure of reality.

C’mon, Neil.

18. Neil Rickert: I’ve been slow getting around to respond to this. But here goes.

Thanks Neil. Some starting questions. I am going to stick to perception for organisms with brains and put aside what scientists do or the philosophical implications for now. Again, feel free to stop responding whenever you want.

1. If the question makes math sense in your model: can you tell me the co-domain and domains of the functions you postulate. For example, are they mapping some vector of reals which models the world as sensed by an organism to a real (or real interval) which models the category and or features assigned by the process of perception? (If this makes no math sense, why?)

2. Can you be more explicit about what happens physically in an organism when it applies a function? For example, for human sight there are neural processes in the brain and optic nerves that are influenced by photons striking the eye. As well, there are muscular processes involved such as saccades, or head movements, and eventually possibly uttering words (although you can omit that bit for now).

3. Does an organism have an innate initial set of functions?

4. How does learning fit into your model?

5. I agree that the brain does not compute in the same way that orbiting planets do not compute. Instead, science can try to model aspects of the neural and muscular and hormonal processes of the brain/body in terms of computing. I understand your model of applying a function to be doing that.
Given that, why do you focus on polynomial approximations? Are you saying that the biochemistry and brain/body architecture involved in perception are better modeled by restricting to polynomial coefficients and structure for computation? (Comparison: I think parts of processes in the inner ear can be modeled by Fourier approximations, although I suppose the sines/cosines could be approximated by polynomials as well in the physical implementation).

6. Does action in its environment by the organism fit into your model of perception?

7. Do proprioception and internal regulation as part of maintaining homeostasis fit into your model? ETA: After looking again at Sandbox, I see you classify proprioception as a form of perception. Can you be more specific about the domain and co-domains of the functions in your model in that case? Do categories and features still matter?

19. Neil Rickert: If you look at a physics textbook, you will see much the same. The book will derive a great deal about reality using only laws of physics (which amount to the algebraic structure of C(X)).

PP also models reality by structure. But its models focus on a causal probabilistic structure eg as captured by Bayes networks and the corresponding probability functions. (To be clear: this modeling has nothing to do with Bayesian epistemology in science or everyday life).

I think there is a problem with using the math of fundamental physics in a model for what perception does. The problem is that our current fundamental physics does not explicitly include the second law of thermodynamics (2LT); we need statistical mechanics and some other postulates (eg about the initial state) to have 2LT emerge from the math of fundamental physics.

But causality is fundamental to how we perceive and act in the world, and that type of causality requires the asymmetry implied by 2LT. That’s one reason why I see the PP models of structure as a better approach than the acausal models of fundamental physics.

How do you see this issue?

20. Neil Rickert: Please remember that I am a fictionalist with respect to mathematics

Fair enough, but the philosophical argument would be not about the nature of the math, but rather this:
1. We have a math model which captures entities and/or relations (ie structures)
2. We successfully act in novel situations in the world based on past pragmatic tailoring of the physical processes which implement the model
3. Given 1 and 2, what if anything can we conclude of the entities and structures of reality?

The no miracles argument says that the structures or entities of reality correspond to/are represented by the math entities and/or structures (regardless of the metaphysics of math itself). I believe from past discussions you don’t accept this argument. I’m happy to leave it there.

21. Neil Rickert: There are very good pragmatic reasons to prefer the metrics that we do use. But I cannot see any reason, other than pragmatics

Is it fair to understand pragmatics as you are using it to mean organisms select the metrics that work best to solve their problems, where those problems are to be understood as you describe them in the Sandbox thread.
http://theskepticalzone.com/wp/sandbox-4/comment-page-40/#comment-262763

Can you describe in more detail the measures of success that are implied by your model and how an organism can access those measures and apply them to better solve its problems, presumably by somehow adjusting the functions it uses.

22. BruceS: 1. If the question makes math sense in your model: can you tell me the co-domain and domains of the functions you postulate. For example, are they mapping some vector of reals which models the world as sensed by an organism to a real (or real interval) which models the category and or features assigned by the process of perception? (If this makes no math sense, why?)

The domain is that part of reality that we are looking at. Or, more precisely, that part of “the world in itself” that we think we are looking at. But since we have no direct access to the world in itself, we cannot really know the domain. The whole point of perception is to find out what we can about that domain. And I see that as implying that we model it as best we can.

By “codomain” I assume you mean the values of the function. Within a brain, those are presumably neural nodes. But we can attempt to give them numeric values such that we can treat them as real numbers.

By comparison (and to illustrate), when I use a ruler the values are calibration marks on the ruler. But I manage to treat those as if they were real numbers. We pretend that there is a continuum of possible values, even though the ruler only provides a finite discrete set of values.

Can you be more explicit about what happens physically in an organism when it applies a function? For example, for human sight there are neural processes in the brain and optic nerves that are influenced by photons striking the eye. As well, there are muscular processes involved such as saccades, or head movements, and eventually possibly uttering words (although you can omit that bit for now).

I have not attempted to do any empirical work on this.

When I look at technology, the bar code scanner in the supermarket looks as if it is solving the same kind of problem. It uses something comparable to saccades to locate the bar code. The final information that it reports is not from sensing photons. Rather, it is a composite that depends on the light sensing but also depends on the internal timer moving the laser around, and maybe on other internal information such as rate of movement.

The main idea would be that the light input undergoes a sharp transition when the direction crosses an edge of some sort. And it is the times of these transitions that make up much of the information.

I don’t know that much about how an eye works. But I would expect something similar. The saccades would produce sharp transitions at some retinal cells, and timing (coming from muscle movement) would be part of what shows up in the resulting information. This is why it is important to think of perception as behavior, and not merely as passive pickup of received stimulus.

Does an organism have an innate initial set of functions?

Hard to say, because it depends on what you mean.

I think of these functions as being something like the transducers that J.J. Gibson discussed in his theories. There might well be some innate transducers, but they would probably be badly out of tune and need some fine tuning or tweaking to get them to work well.

How does learning fit into your model?

I see learning as mostly perceptual learning. That is, it would involve improving discrimination, tuning, adjusting. At the neural level, we see Hebbian learning. I take much of that to be something like recalibrating a measuring device (which is a kind of tuning). But the brain presumably can construct new detection circuits (new transducers), though I don’t know how that would be done.

Given that, why do you focus on polynomial approximations?

I don’t. I mention polynomial approximations because the Weierstrass approximation theorem is well known.

As for what the brain is doing, I would suggest something more like piecewise linear approximations. When you see a curve on graph paper, you can approximate that with short straight line segments, and that would be a piecewise linear approximation. Looking at neurons, you have something more like the use of a ruler with calibration marks. A neuron fires when its signal reaches a threshold, and that’s similar to crossing a calibration mark when measuring. Because we like to think of results as real numbers, we interpolate between calibration points, which gives us a piecewise linear curve. The piecewise linear interpolation would come from how we model the neural activity. We need not assume that the neurons are doing anything mathematical.
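The ruler analogy above can be sketched directly: `np.interp` performs exactly this piecewise linear interpolation between calibration points. The marks and readings below are invented for illustration:

```python
import numpy as np

# Discrete calibration marks and the values recorded at each mark.
marks = np.array([0.0, 1.0, 2.0, 3.0])
readings = np.array([0.0, 1.0, 4.0, 9.0])

# np.interp interpolates linearly between the marks, producing a
# piecewise linear approximation of the underlying curve.
estimate = np.interp(1.5, marks, readings)
print(estimate)  # 2.5: midway between the readings at marks 1.0 and 2.0
```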

Does action in its environment by the organism fit into your model of perception?

Do proprioception and internal regulation as part of maintaining homeostasis fit into your model?

Yes, very much so.

The internal regulation for homeostasis is what I see as leading to pragmatic judgment.

As best I can tell, almost all behavior depends on proprioception.

Example. I type a password into my computer. But I don’t watch my fingers as I type. And, because it is a password, it does not show up on the screen. Yet I can usually tell when I have mistyped a letter. That has to be proprioception reporting that I hit the wrong key.

We talk of learning behavioral skills. As best I can tell, that is all perceptual learning (with proprioception as one of the kinds of perception involved). The feedback from proprioception is a good part of what guides our behavior. Improving a motor skill must mainly amount to improving the proprioceptive abilities at monitoring that motor activity.

23. BruceS: But causality is fundamental to how we perceive and act in the world and that type of causality requires the asymmetry implied by 2LT.

I’m inclined to see that as backwards.

That is to say, how we perceive and act in the world is fundamental to how we think about causality. Our notion of cause seems to be derived from what we can cause with our actions. Science tests its ideas of causality by attempting to cause events.

And then we have those people — typically free will deniers — who deny that we can cause anything. I wonder where their ideas of causation come from.

24. BruceS: 3. Given 1 and 2, what if anything can we conclude of the entities and structures of reality?

They can still only be entities and structures that we ascribe to reality. And most of those structures depend on metric properties that could only arise from our ascribing.

I’m not attempting to deny reality. Rather, I am trying to point out the importance of our involvement. A passively observed reality would be dull and boring.

25. BruceS: Is it fair to understand pragmatics as you are using it to mean organisms select the metrics that work best to solve their problems, where those problems are to be understood as you describe them in the Sandbox thread.

Yes, that’s fair enough.

It is important to recognize that the pragmatically “work best” choice is often highly underdetermined. So there are choices (arbitrary, but pragmatic) that we must make.

Can you describe in more detail the measures of success that are implied by your model and how an organism can access those measures and apply them to better solve its problems, presumably by somehow adjusting the functions it uses.

I think you would have to get into neuroscience for that. From our point of view, we can see that it just works.

Here’s an example that recently occurred to me. When using a camera, we adjust the lens to bring it into focus. Here “in focus” just seems obvious to us. As far as I know, we are not getting data from photon impacts and doing a Bayesian analysis to decide what is in focus. It is more likely that we are just attempting to maximize the amount of information we can get. A self-focusing camera is presumably attempting to maximize the contrast in the image.
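That contrast-maximizing idea can be sketched in a few lines. This is my own toy stand-in: the "image", the blur model, and the contrast score (plain variance) are all invented for illustration:

```python
import numpy as np

def contrast(image):
    """A simple contrast score: variance of pixel intensities."""
    return float(np.var(image))

def blur(image, amount):
    """Crude defocus stand-in: mix each pixel toward the image mean."""
    return (1 - amount) * image + amount * image.mean()

# A random "sharp" image (deterministic via the seed).
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(8, 8)).astype(float)

# Candidate lens settings: 0.0 (sharp) through 0.9 (badly defocused).
settings = [0.0, 0.3, 0.6, 0.9]
best = max(settings, key=lambda s: contrast(blur(sharp, s)))
print(best)  # 0.0: the sharpest setting maximizes contrast
```

Since blurring toward the mean scales the variance by (1 - amount) squared, the sharpest setting always wins, which is the behavior an autofocus routine exploits.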

26. I think it’s quite right to insist on homeostasis (and also allostasis!) as crucial for biological cognition, and I think that one good worry to have about predictive processing is that it builds homeostasis into the account for free without worrying about how it got there. Some of you might be interested in Homeostasis as a fundamental principle for a coherent theory of brains by J. Scott Turner.

I think that I myself would take issue with Turner’s overly strong dichotomy between the cybernetic approach and the ecological approach — I think he’s reading far too much of the Newell and Simon physical symbol system approach into what he’s calling cybernetics. There are subtle and overlooked differences between cybernetics and AI that make a difference to what we take to be the relevant theoretical options today. There is even (I have just learned) a non-PSS reading of Turing’s own theory of computation (see here), which is rather surprising!

27. Neil: Thanks for all your answers. Here is my last question:
I understand many of your posts on the forums to say that there are fundamental errors in the underlying assumptions of much of the cognitive science of perception.

If I’ve got that right, can you describe what they are by contrasting your theory and why it does not make those fundamental errors?

28. Neil Rickert: a Bayesian analysis to decide what is in

As I’ve said in previous posts, should you ever want to engage in detail with the math or cognitive science of PP and how in particular it uses Bayes, I’d be happy to give you some references I found helpful.

In particular, you keep raising concerns with intelligent agents using Bayes to draw conclusions or perform skilled tasks.

In its core form, PP is not a model for whole agent behavior; it’s a model for sensorimotor and neural dynamics that can be used to explain aspects of human and animal perception, proprioception, behavior, homeostatic regulation, and cognition.

Whether we should also be Bayesians in our approach to scientific and everyday beliefs and evidence is a separate issue addressed by Bayesian epistemology.

29. Kantian Naturalist: There is even (I have just learned) a non-PSS reading of Turing’s own theory of computation (see here), which is rather surprising!

I don’t think that paper has much to do with Turing’s core work since Turing was addressing the mathematical theory of computation and in particular its applications to one of Hilbert’s mathematical challenges.

Instead, the paper seems to be a critique of PSS as the sole approach to cognitive science. I think there is broad agreement on that. But there are much better ideas for understanding the brain and how it interacts with the body and world than the paper’s idea of trying to use the Turing machine model literally.

I think there is a separate issue of whether the mind can somehow avoid the theoretical limits of the Church-Turing thesis. I think the current consensus is that no, it cannot*. That is, it does not provide something equivalent to hypercomputation, as discussed in these two SEP entries. The second also gives a useful overview of the ways to think of physical computation.

https://plato.stanford.edu/entries/church-turing/

https://plato.stanford.edu/entries/computation-physicalsystems/

——————————————————————–
* Penrose/QM consciousness group would disagree on this point, I think. As does EricMH.

30. Neil Rickert: By “codomain” I assume

It was called the range of the function back in my day. But I see co-domain in some of the math material I read now, and Wiki says it is the preferred and more precise term.
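
The distinction is easy to see in code. In Python terms, the codomain is the set a function is declared to map into, while the range (image) is the subset of values it actually hits. The names below are just for illustration.

```python
# The declared codomain of `square` is the integers (the return annotation),
# but its actual range over a given domain can be a proper subset of that.

def square(n: int) -> int:  # declared codomain: the integers
    return n * n

domain = {-2, -1, 0, 1, 2}
image = {square(n) for n in domain}  # the actual range (image)
print(sorted(image))  # prints [0, 1, 4]
```

Every value in the range lies in the codomain, but not conversely, which is why the newer terminology is considered more precise.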

I understand many of your posts on the forums to say that there are fundamental errors in the underlying assumptions of much of the cognitive science of perception.

If I’ve got that right, can you describe what they are by contrasting your theory and why it does not make those fundamental errors?

There’s an underlying assumption, that information exists in the world as a natural kind and we can just pick it up.

Philosophy of science makes similar assumptions — that data is already there, and we can just pick up the data and look for patterns in the data.

That is at least partly true. Once you are getting data you can look for patterns in that data. And some of science uses that. But at the most fundamental level, science has to invent data. That is to say, it has to invent ways of getting data from reality. Generally speaking, this requires inventing procedures that we must follow in order to get data.

We are all familiar with length, and using a ruler to measure it. But there was a time in the history of humans when nobody was doing that. Somebody had to invent the idea of a yardstick. Before it became a society-wide invention, people were probably using the length of their arms or hands as a kind of yardstick. But even that had to come as a discovery, and it required suitable behavior to take advantage of it.

It is that invention part that people are missing.

What does it mean to say that I live at 10 Main Street? (I don’t, this is just a made up example).

So I look at the street signs on the way to my house, and I look at the house numbers. And those tell me that I am at 10 Main Street.

Alternatively, I stand in front of my house and use a GPS receiver, which tells me that I am at 10 Main Street. The two methods agree.

And then there is an earthquake. And part of the land shifts by 1000 feet. And now the “look at street signs” method and the GPS method seriously disagree as to where I live. And they will continue to disagree until something is changed (probably the database used by the GPS software will have to be updated).

Those disagree because they depend on incompatible standards. Either standard could work — as long as the community agrees to stick with that standard. And my point — there is no reliable data without standards. Getting data requires standards. It might be a scientific standard or a community standard or even an informal private personal standard. But some sort of standard is needed. You cannot have data without first inventing a standard for that data.

32. BruceS: It was called the range of the function back in my day.

Yes, that’s what I thought you meant.

Mathematicians are (in my opinion) going way overboard in the extent to which they formalize everything. This makes mathematics less accessible to non-mathematicians. I see that as a mistake.

33. Neil Rickert: There’s an underlying assumption, that information exists in the world as a natural kind and we can just pick it up

Thanks again Neil.

I debated trying to contrast your ideas with PP since I see your ideas as taking different detailed approaches but sharing some general ideas. But I’ll wait to see if you are interested in engaging with PP first. I do see your ideas as related to the constructionist viewpoint taken by some PP proponents. Here are some quotes from the Barrett book on emotions.

“To achieve this magnificent feat, your brain employs concepts to make the sensory signals meaningful, creating an explanation for where they came from, what they refer to in the world, and how to act on them. Your perceptions are so vivid and immediate that they compel you to believe that you experience the world as it is, when you actually experience a world of your own construction. Much of what you experience as the outside world.”
– page 55

“[Our values and practices] are accurate only in relation to a shared social reality that our collective concepts created in the first place. People aren’t a bunch of billiard balls knocking one another around. We are a bunch of brains regulating each other’s body budgets, building concepts and social reality together, and thereby helping to construct each other’s minds and determine each other’s outcomes.”
— page 289

34. Kantian Naturalist: Some of you might be interested in Homeostasis as a fundamental principle for a coherent theory of brains by J. Scott Turner.

I enjoyed that paper. Thanks for linking to it.

I agree that contrasting cybernetics with homeostasis is misleading. In fact, the mathematics of PP is a form of cybernetics since its implementation of Bayes is a form of control theory. The key, of course, is that PP relies on prediction and feedback via error to implement aspects of homeostasis through internal regulation and external action.
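
The prediction-and-error-feedback loop described above can be sketched as a toy model: an internal estimate is corrected by a fixed fraction of the prediction error on each sensory sample, the same scheme as a simple proportional controller or a constant-gain Kalman filter. This is purely illustrative; real PP models are hierarchical and weight errors by estimated precision.

```python
# Toy prediction-error minimization: the internal estimate is nudged toward
# each observation by a fraction (the gain) of the prediction error.

def update(estimate, observation, gain=0.3):
    error = observation - estimate      # prediction error
    return estimate + gain * error      # correct estimate toward the input

estimate = 0.0
for observation in [10.0] * 20:         # a constant "sensory" signal
    estimate = update(estimate, observation)

print(round(estimate, 2))  # converges toward 10.0; prints 9.99
```

The same loop, read one way, is perception (the estimate tracks the world); read the other way, with action reducing the error instead, it is homeostatic regulation. That is the sense in which PP is a form of control theory.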

Turner admits in the paper that an algorithmic approach can handle three of the four characteristics he is looking for (representation, tracking, intentionality). He says that creativity is the fourth that also must be handled, and it involves “unmooring” representations from sensory inputs, with schizophrenia given as an extreme example of this. As I am sure you are aware, PP proponents do note its ability to unmoor representations from inputs, and they have tried to use PP to explain schizophrenia that way.

I also skimmed the overview article for the theme issue that the Turner article appears in:
https://royalsocietypublishing.org/doi/10.1098/rstb.2018.0383

It has some interesting ideas on how to generalize the concept of brains via a functional, multiply-realizable definition. You and I exchanged some posts in another thread on attempts to generalize PP to eg agents in cultures (and not just neurons in animals); I wonder if PP could be used to address aspects of that functional definition as well as Turner’s ecological approach.

35. Kantian Naturalist: I think it’s quite right to insist on homeostasis (and also allostasis!) as crucial for biological cognition, and I think that one good worry to have about predictive processing is that it builds homeostasis into the account for free without worrying about how it got there

What I think you are wondering about is why evolution would favor a neural architecture and processing that can be modeled in certain aspects via PP.

Possibly, the answer would lie in comparing the energy efficiency of various approaches to implementing homeostatic control. Cybernetics might offer some candidate control algorithms for consideration. But I don’t know of any work on that “why” question.

Here is a paper specifically on PP and homeostasis that (I think) Dan Williams cited in his blog. It may interest you. I have only skimmed it.
https://www.researchgate.net/publication/320677836_Allostasis_interoception_and_the_free_energy_principle_Feeling_our_way_forward

36. BruceS: Here are some quotes from the Barrett book on emotions.

The second quote seems reasonable.

I do not agree with the first quote. It talks of employing “concepts to make the sensory signals meaningful”. It’s nonsense. Most of our information about the world is constructed by us. It is meaningful by the way we went about constructing it.

Sure, if a mosquito bites, I might be using my concepts to make that meaningful. But that’s not central to meaning. Our interactive behavior in exploring the world is the main source of meaning.
