Facts as human artifacts

BruceS suggested that I start a thread on my ideas about human cognition.  I’m not sure how this will work out, but let’s try.  And, I’ll note that I have an earlier thread that is vaguely related.

The title of this thread is one of my non-traditional ideas about cognition.  And if I am correct, as I believe I am, then our relation to the world is very different from what is usually assumed.

The traditional view is that we pick up facts, and most of cognition has to do with reasoning about these facts.  If I am correct, then there are no facts to pick up.  So the core of cognition has to be engaged in solving the problem of having useful facts about the world.

Chicago Coordinates

I’ll start with a simple example.  I typed “Chicago Coordinates” into Google, and the top of the page returned showed:

41.8819° N, 87.6278° W

That’s an example of what we would take to be a fact.  Yet, without the activity of humans, it could not exist as a fact.  In order for that to be a fact, we had to first invent a geographic coordinate system (roughly, the latitude/longitude system).  And that coordinate system in turn depends on some human conventions.  For example, the meridian through Greenwich was established as the origin for the longitudes.

That fact also depends on the naming convention, which designates “Chicago” as the name of a particular town.  And, it depends on a convention specifying a particular location within Chicago (probably the old post office, though I’m not sure of that).
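The role of these conventions can be made concrete with a toy sketch. The only outside assumption here is that the Paris meridian (once a rival origin for longitude) lies roughly 2.3372° east of Greenwich; under that convention the very same spot gets different numbers.

```python
# Illustrative sketch: the same physical location, expressed under two
# longitude conventions. The Paris meridian sits roughly 2.3372 degrees
# east of Greenwich (an assumed, approximate figure).

PARIS_OFFSET_DEG = 2.3372  # approximate offset of the Paris meridian

def to_paris_longitude(greenwich_lon_east):
    """Re-express an east-positive Greenwich longitude relative to Paris."""
    return greenwich_lon_east - PARIS_OFFSET_DEG

# Chicago: 41.8819 N, 87.6278 W (west longitude = negative east longitude)
chicago_greenwich = -87.6278
chicago_paris = to_paris_longitude(chicago_greenwich)
print(chicago_greenwich, chicago_paris)  # different numbers, same place
```

Neither number is more correct than the other; each is a fact only relative to a chosen convention.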

I won’t go through a lot of examples.  I think the one is sufficient to illustrate the point.  Everything that we call a fact depends, in some way, on human conventions.  So facts are artifacts, in the sense that we must first develop the conventions necessary for us to have the possibility of their being facts.

Acquiring information

A number of years ago, I made a usenet post in comp.ai.philosophy.  I couldn’t find it in a Google search.  The idea of the thought experiment was that someone (an airplane pilot) dropped me off in the middle of the Nullarbor plain (in southern Australia), with a lunch pack, pencil and note paper.  I was to record as much information as I could about that location, before I was picked up in the evening.

The thing about the Nullarbor plain is that it is desert.  But it is not sandy desert like the Sahara.  There are many plants — desert scrub — that grow in the occasional rain, then dry out and look dead most of the time.  So perhaps I could record information about the plants, such as their density.  But how could I do that?  Everything looked the same in every direction.

So I used the soda can from the lunch pack as a marker.  And I used a couple of other items as markers.  That enabled me to fix a particular region where I could start counting, in order to be able to write down some information.  The markers broke up the sameness, and allowed me to have a sense of direction.  In effect, those markers established conventions that I would use in counting the number of plants.

As I recall, others in the usenet discussion did not like that post.  They saw my use of the soda can as something akin to making an arbitrary choice (which it was).  There appears to be some unwritten rule of philosophy, that anything depending on arbitrary choices must be wrong.  (Oops, there goes that meridian of longitude, based on the arbitrary choice of Greenwich).

Knowledge

The traditional account, from epistemology, is that knowledge is justified true belief.  Roughly speaking, your head is full of propositions that you are said to believe, and you are good at applying logic to those propositions.

You can be a highly knowledgeable solipsist that way.  And, as a mathematician, I suppose that term “solipsist” fits some of what I know.

My sense is that acquiring knowledge is all about anchors.  We must find ways of anchoring our propositions to reality.  That’s roughly what the system of geographic coordinates does.  That’s what the soda can did in my thought experiment.  And that, anchoring ourselves to reality, is what I see our perceptual systems to be doing.

AI and autonomy

AI researchers often talk of autonomous agents.  And that’s the core of my skepticism about AI.  What makes us autonomous, is that each of us can autonomously anchor our thoughts to reality.  The typical AI system uses propositions that are anchored to reality only by the auspices of the programmer.  So the AI system has no real autonomy.  And, when boiled down, that is really what Searle’s “Chinese Room” criticism of AI is about — Searle describes it as an intentionality problem.

I’ll stop at this point, to see if any discussion develops.

240 thoughts on “Facts as human artifacts”

  1. @ Gregory

    Your comment is in guano. You are welcome to repost leaving out the personal stuff.

  2. Interesting! This approach reminds me very much of a lot of recent philosophy I’ve been reading and writing about lately.

    The crux of the view, as I interpret it through my own lenses, is that we should adopt a very modest view of “facts” as “true claims”. And claims are the content of a certain kind of activity — namely, claiming. If there were no beings engaging in the social activity of claiming, then there would be no claims, hence no facts.

    The hard question is, of course, about the claimables — what the claims are about, when they are true.

    So that’s one area of inquiry. Another, closely related area of inquiry concerns our basic cognitive relation to the claimables. I think it’s much more helpful to think of our basic cognitive relation to claimable particulars in terms of successful mapping than in terms of accurate describing, with the key proviso that organisms do not use maps but rather instantiate maps.

    (Sellars, and following him Churchland, think of the neural networks in the organism’s brain as the maps of the environment. I have some background in environmental philosophy and classical pragmatism, so it seems more ‘intuitive’ to me to think of the totality of the organism’s habits as its map of its environment, where the neural networks play a central role in the causal implementation of the maps.)

    Lately I’ve been toying with a “two-tiered” model in which there are two homomorphisms by which “thought” is related to “the world”. Firstly, there’s a homomorphism between bodily habits and the ambient environment; secondly, there’s a homomorphism between discursive practices and bodily habits. So a discursive act — such as uttering, “look, a rabbit!” — is connected to the world only by virtue of how uttering it directs attention, gets us to look where we weren’t looking, invites us to distinguish that particular figure from a perceptual background — pointing may be involved.

    The fundamentally social nature of language finds its telos in how it helps us better coordinate our perceptual and practical engagement with the world. If there weren’t a background of perceptual and practical engagements which are already rich in cognitive and affective structure, discursive practices would be idle — like a purely abstract deductive system, with no anchor to the world.

  3. Kantian Naturalist: I think it’s much more helpful to think of our basic cognitive relation to claimable particulars in terms of successful mapping than in terms of accurate describing, with the key proviso that organisms do not use maps but rather instantiate maps.

    See cat vision

  4. I don’t agree there is logic in the universe. There are only accurate conclusions in relationship with other accurate conclusions. Logic is a minor and special and easily wrong result of the great truth of conclusions. Truth is accurate conclusions. These established by god and known by us only if he tells us. Figuring it out can only be, possibly, a minor reflection on some truth point.
    So the world decides what’s true by foundational FACTS. However, if the facts are not agreed upon, then conclusions are not.
    AI is entirely based on human conclusions or facts. AI is not intelligent enough to question these facts. It has no idea that error is possible in its reasoning application.
    AI is not intelligence. It’s nonexistent as a thinking being.
    There is no possible way to have AI.
    Nothing can think intelligently if it could never correct its presumptions.
    God talking to a computer would have no impact unless a programmer changed the computer’s program.

  5. Nothing can think intelligently if it could never correct its presumptions.

    So it can successfully mimic the behavior of most people?

  6. Kantian Naturalist: I think it’s much more helpful to think of our basic cognitive relation to claimable particulars in terms of successful mapping than in terms of accurate describing, with the key proviso that organisms do not use maps but rather instantiate maps.

    Yes, that sounds as if it is in the right direction, though I might disagree with the specifics.

    In particular, I think of the explorers who would draw up maps — perhaps rough maps — of the territory they explored, highlighting landmarks that could be used to orient oneself. This seems to me to be a more realistic picture of how we acquire knowledge, than is the inductionism that is typically given credit.

    I have some background in environmental philosophy and classical pragmatism, so it seems more ‘intuitive’ to me to think of the totality of the organism’s habits as its map of its environment, where the neural networks play a central role in the causal implementation of the maps.

    Yes, I prefer that version.

    Thanks for that thoughtful reply.

  7. Robert Byers: I don’t agree there is logic in the universe.

    I’m not sure what you mean by that. If you are saying that logic is a human invention, then I would agree, though with the proviso that other animals probably do something equivalent.

    Truth is accurate conclusions.

    That seems like a trivial truism, so it says nothing much at all.

    These established by god and known by us only if he tells us.

    That is the theistic view of “truth”. Outside of theistic views, the received view of “truth” seems to be somewhat similar, that “truth” somehow has an external basis. However, “truth” is a mess. We use it inconsistently, and philosophical theories of truth don’t seem to work very well.

  8. I won’t go through a lot of examples. I think the one is sufficient to illustrate the point. Everything that we call a fact depends, in some way, on human conventions. So facts are artifacts, in the sense that we must first develop the conventions necessary for us to have the possibility of their being facts.

    … As I recall, others in the usenet discussion did not like that post. They saw my use of the soda can as something akin to making an arbitrary choice (which it was). There appears to be some unwritten rule of philosophy, that anything depending on arbitrary choices must be wrong. (Oops, there goes that meridian of longitude, based on the arbitrary choice of Greenwich).

    … My sense is that acquiring knowledge is all about anchors. We must find ways of anchoring our propositions to reality. That’s roughly what the system of geographic coordinates does. That’s what the soda can did in my thought experiment. And that, anchoring ourselves to reality, is what I see our perceptual systems to be doing.

    I read this and I couldn’t figure out what to say except “of course it’s obvious, who’s gonna argue with this” so I didn’t reply.

    Now I see that Gregory has come and gone. If he has a point, mixed in with his sarcastic personalized attacks, I can’t see it, but it appears evident there’s at least one person who’s gonna argue with this. *shrug*

    I suppose it’s possible that you and I are not in agreement, that what I take to be obvious-beyond-any-reasonable-argument is not actually what you meant. You said you didn’t need a lot of examples but I’d like to propose one of my own, and perhaps test if we are indeed in agreement.

    Looking at my hand about to insert my door key into my door, I realize that “keys” aren’t “facts”. That is, there is physically a key in my hand, it’s hard, it’s machined to a certain identifiable shape; everyone else who shares my physical reality can also agree that the key physically exists. And we can all agree that the name for this object is “key”. But that, I think, is where the conjunction of physical solidity and name-agreement misleads us all into assuming a key is a plain fact. Yes, yes, it’s a “metal instrument by which the bolt of a lock is turned” – but what is that? A key depends for its very existence (as a key, rather than, say, an ugly bit of jewelry or a baby toy or scrap) on the concept of a lock. We came up with the conventions of interior space, of partitions, of doors (not just doorways!) and then the concept of and the instantiation into solid reality of locks and their keys. “I have a home behind closed doors” – what a concept!

    And the convention works pretty well; I can just about always reach into my pocket for my keys, get the right key oriented into the right lock. It works so well that we never have to think about it. We not only take it for granted that keys are just a fact of modern life, we take it for granted that “key-ness” is a fact of that piece of metal.

    If I understand what you’re saying, then you would perhaps phrase it as “we don’t have a perception of key-ness, we construct key-ness out of our perceptions”, and we do it without (usually) realizing that’s what we do.

  9. “Fact” isn’t a word I find terribly useful. As I see it, there are data and there are models, and our data are simply models at a lower level of analysis.

    Conversely a model at one level becomes data at the next level up.

  10. hotshoe: I read this and I couldn’t figure out what to say except “of course it’s obvious, who’s gonna argue with this” so I didn’t reply.

    I think it is, or should be, reasonably obvious to most scientists. But some philosophers seem to find it troubling.

    Now I see that Gregory has come and gone.

    Yes, Gregory’s reaction was amazing. (For those wondering what happened to Gregory’s reply, Alan moved it to Guano — thank you Alan, much appreciated).

    In any case, I think you got my point pretty well. Thanks for the reply.

  11. The typical AI system uses propositions that are anchored to reality only by the auspices of the programmer. So the AI system has no real autonomy.

    That’s what is so bizarre about the Plantinga “argument” about truth and knowing it, etc.: nothing we’ve ever made has ever come close to having any kind of comprehension of truth, and there’s no reason to think that we ever will make any such thing. Why would we? It wouldn’t serve any purpose of ours.

    Not that we’re very well made for understanding “truth” either, being far more disposed toward a phenomenological view of the world, complete with biases toward seeing “design” where it isn’t. Science took a long time to develop to get around our tendency not to see things “objectively.” But we simply evolved to deal with our world, and, whatever flaws evolution tends to produce, our faculties actually evolved to allow us to cope with the world for ourselves, rather than to merely spit out results useful to someone or something else; hence our models of the world are likely to have a meaningful congruence with that world.

    Of course I know what’s going on with Plantinga and those who think he’s on to something, which is the presupposition that we’d be made by God in order to simply know a sort of Platonic Truth. But there’s no excuse to suppose that, and if it at least makes some sense in Plato’s model of knowing, it certainly doesn’t fit at all well with the analogies that ID attempts to use to show how we “must have been designed.” We’ve never made anything that is even on the path to “knowing truth,” and it’s not clear why we ever would. And even if we’re never going to “know truth” in the way that IDists think we do without even trying, we at least have every reason (from evolution, especially) to suppose that our models exist for us to understand the world, and not as some sort of program existing for the use of someone or something else.

    Glen Davidson

  12. So I used the soda can from the lunch pack as a marker. And I used a couple of other items as markers. That enabled me to fix a particular region where I could start counting, in order to be able to write down some information. The markers broke up the sameness, and allowed me to have a sense of direction. In effect, those markers established conventions that I would use in counting the number of plants.

    There is an interesting parallel in physics called symmetry breaking, which allows the manifestation of phenomena that can be detected; i.e., that exert “forces” that can move particles, thereby leaving evidence of their existence.

    In mathematics, there is also an interesting way to develop the subject of vector analysis that starts with completely empty and featureless space. A line segment breaks the symmetry by establishing a reference; and a line segment with an arrow head on one end defines a direction.

    Another such arrow, not parallel to the first, defines a plane and a basis set for that plane. A third such arrow, not parallel to either of the other two, forms a basis set for 3 dimensions.

    One can then define the projection of one vector onto another; and then also define a “cross product” that builds a vector perpendicular to two given vectors, with a magnitude equal to the product of the magnitudes of the two vectors and the sine of the angle between them.

    So with a few symmetry-breaking vectors that form a basis set, one can develop step-by-step all of vector analysis.
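    That step-by-step construction can be sketched in a few lines of code (my illustration, not part of the comment; the function names are arbitrary):

```python
# Sketch: once a few symmetry-breaking arrows are chosen as a basis,
# projection and the cross product can be defined, and vector analysis
# can be developed from them.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def projection(a, b):
    """Component of a along b: (a.b / b.b) * b."""
    scale = dot(a, b) / dot(b, b)
    return [scale * x for x in b]

def cross(a, b):
    """Vector perpendicular to a and b, with magnitude |a||b|sin(theta)."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Two non-parallel arrows define a plane; their cross product breaks out
# of that plane, completing a basis for 3 dimensions.
e1, e2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
e3 = cross(e1, e2)
print(e3, norm(e3))
```

    Nothing here is given by featureless space itself; the basis vectors, like the soda can, are arbitrary reference choices from which everything else is built.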

    Human conventions are much like the idea of symmetry breaking in that they cull out reference phenomena and introduce definitions that allow meaningful communication. Otherwise no one would know what anyone else is making noises and gesturing about.

  13. Kantian Naturalist:
    I think it’s much more helpful to think of our basic cognitive relation to claimable particulars in terms of successful mapping than in terms of accurate describing, with the key proviso that organisms do not use maps but rather instantiate maps.
    [..]
    Lately I’ve been toying with a “two-tiered” model in which there are two homomorphisms by which “thought” is related to “the world”. Firstly, there’s a homomorphism between bodily habits and the ambient environment; secondly, there’s a homomorphism between discursive practices and bodily habits. So a discursive act — such as uttering, “look, a rabbit!” — is connected to the world only by virtue of how uttering it directs attention, gets us to look where we weren’t looking, invites us to distinguish that particular figure from a perceptual background — pointing may be involved.

    The fundamentally social nature of language finds its telos in how it helps us better coordinate our perceptual and practical engagement with the world. If there weren’t a background of perceptual and practical engagements which are already rich in cognitive and affective structure, discursive practices would be idle — like a purely abstract deductive system, with no anchor to the world.

    Thanks for these stimulating thoughts, KN.

    First comment: how do you differentiate “use” from “instantiate”? I think neural maps are dynamically evolving brain (or maybe organism) states. Changes can be short term, e.g. rate of firing in response to perceptions, or longer term, e.g. learning by changing the interconnections between neurons in some more permanent way. If that is right, using a map seems hard to disentangle from instantiating it.

    Can I also ask how you view the relationship between language, body habits (which I am taking to mean behavior), and the maps?

    Is language just a more complex type of body habit? Or is it something qualitatively different from the body habit/behavior? If so, in what way?

    Let me try to be more specific by referring to the standard bee dance example. A bee dances for other bees to communicate the location of pollen she has found (which is encoded first in the dancer and then in the audience as neural maps).

    Is that dancing a language?

  14. Neil Rickert: Yes, that sounds as if it is in the right direction, though I might disagree with the specifics.

    In particular, I think of the explorers who would draw up maps — perhaps rough maps — of the territory they explored, highlighting landmarks that could be used to orient oneself. This seems to me to be a more realistic picture of how we acquire knowledge, than is the inductionism that is typically given credit.

    Very interesting post, Neil.

    I agree with KN’s thoughts as well.

    Given that agreement, can you help me understand why there could never be an artificial, autonomous agent? Why could we not mimic the type of information processing which happens when an organism processes neural maps and interacts with the world? I do agree that ongoing interaction is important.

    I believe this is the essence of the robot reply to the Chinese room argument. In other words, an artificial being that interacted with the world and dynamically updated the right type of maps would understand meaning.

    Now I have seen that reply criticized by saying such an artificial being would not have the conscious experience people have when they access and contemplate a specific belief. But it seems to me that if you accept that, then we are back at the zombie argument.

  15. BruceS: Given that agreement, can you help me understand why there could never be an artificial, autonomous agent?

    I don’t claim that. Rather, I see a problem for AI systems (i.e. based on computation). You agree that interaction is important. The interaction is not in the computation.

    Many people think of the brain as a computer. I don’t. I see the brain involved in redesigning how we interact with the world. An artificial autonomous agent would need to have its sensory abilities in the core, rather than in the periphery, and would need to be redesigning how those sensory abilities are used.

    Now I have seen that reply criticized by saying such an artificial being would not have the conscious experience people have when they access and contemplate a specific belief.

    It’s difficult to comment, because I never could work out what “belief” is supposed to mean. When we put data in a computer database, those might be our beliefs, but I don’t see them as the computer’s beliefs.

    What is needed for an artificial intelligence is for it to come up with its own meanings, and then work out how to express those with language.

    As I see it, a newborn infant already has meaning and experience. It is just that the meaning and experience are detached from the world. Learning involves the child finding ways of broadening its meaning and experience to encompass the external world.

    The AI community wants to start with syntactic expression and make meaning an add-on. We start real life with meaning, and the syntax is an add-on to allow us to communicate our meanings.

    In terms of consciousness and “the hard problem”, we start life with the subjective, and we get to the objective (more-or-less) by representing it as subjective experience. The “hard problem” is backward. It wants to start with the objective, and then get to the subjective. I doubt that it can be done. Take FOPC (first order predicate calculus) as an example. It is a completely objective language. But it is incapable of making reference to anything outside the language. Making reference depends on the subjective. Any reductionist account of the subjective would reduce to solipsism because it would eliminate any possibility of reference.

    At least, that is how I see it.

  16. It wants to start with the objective, and then get to the subjective. I doubt that it can be done.

    It can’t be done as such, and that is the problem of much of the nonsense about consciousness.

    However, information is basically the same in the “subjective” and in the “objective,” for the obvious reason, that the “objective” is largely an abstraction of the “subjective.” The two can be related via information, so long as one doesn’t expect the “subjective” to reduce down to–or appear the same as–the abstractions in the “objective” sense.

    Glen Davidson

  17. sez neil rickert (with emphasis added):

    The traditional view is that we pick up facts, and most of cognition has to do with reasoning about these facts. If I am correct, then there are no facts to pick up. So the core of cognition has to be engaged in solving the problem of having useful facts about the world.

    No facts? I don’t buy that proposition, nor do I find the associated rationale for that position to be plausible, let alone convincing.

    My objection can be summarized in six words: The map is not the territory. A map is typically a piece of paper with a whole lot of lines marked on it in ink, and there’s a whole lot of “human conventions” which go into interpreting those ink lines; by Rickert’s argument, the fact that ‘human conventions’ are necessary in order to comprehend that that set of ink lines is a river, means that there isn’t actually any river there.

    Yyyyyyeah. Right.

    The river exists independently of whatever maps may display it. And the fact that one needs ‘human conventions’ to comprehend the map-marks which indicate the river does not mean that the river itself is a ‘human convention’.

    Similarly, the city of Chicago has a location which is independent of whatever human-invented coordinate system you may happen to prefer for describing that location. And the fact that one needs ‘human conventions’ to comprehend the latitude/longitude numbers which specify the location of Chicago does not mean that Chicago’s location is a ‘human convention’.

    The map is not the territory. The words/numbers/symbols one uses to describe a thing, are not the thing which is being described. In both cases, the former is all about ‘human conventions’, while the latter is what we refer to as “objective reality”, and is not dependent on, nor particularly related to, ‘human conventions’.

  18. cubist: The map is not the territory

    Lizzie above suggests avoiding “fact” and employing model and data instead. Hence a map is a model which will be as good as the data we have about the territory we wish to survey.

  19. cubist,

    I think you are very slightly missing the point of Neil’s post — which is that, on his account, facts are themselves on the map-side of the map/territory distinction, not on the territory side.

    Notice that he didn’t commit himself to any claims about whether there’s any territory at all, or our modes of cognitive access to the territory, or how the map is produced — all he claimed is that facts are on the map side of the map/territory distinction.

    And that’s actually quite important, because if facts are on the map side and not the territory side, then describing and explaining the world isn’t a matter of matching up statements with facts, like peeling off stickers from the right-hand page into the corresponding empty spaces on the left-hand page in a child’s sticker-book.

    It means that the business of describing and explaining the world isn’t about discovering the facts that are there anyway and writing them down, but drawing a useable map of the territory, in which facts are part of the notational system in which the map is drawn.

  20. Kantian Naturalist: It means that the business of describing and explaining the world isn’t about discovering the facts that are there anyway and writing them down, but drawing a useable map of the territory, in which facts are part of the notational system in which the map is drawn.

    Whew!

    No wonder I’m tired all the time. All that surveying and drawing maps and suchlike 🙂

    edit – everything after “whew”

  21. Alan Fox: It’s not just me, then.

    Well, I edited my comment which changes the sense – but yes, I agree, “Whew”.

    I don’t do philosophy, or really, any thinking about “thinking” at all – I swore it off back when I was 15. The adults around me at that time …. dear god …. horrible examples …

    It’s only recently that watching KN in action I get the feeling there might be some worth to this “thinking about thinking” business after all. But I’m still in a state where I admire xis replies as I admire a tightrope walker, appreciating the feat but wondering if I do quite get the point of the whole effort.

  22. cubist: The river exists independently of whatever maps may display it. And the fact that one needs ‘human conventions’ to comprehend the map-marks which indicate the river does not mean that the river itself is a ‘human convention’.

    KN has answered this pretty well.

    I’m aware that there are at least two different views of what constitutes a fact. In ordinary use, a fact is a true statement. So call that a P-fact (or propositional fact). Some people say that facts are metaphysical things, so call that an M-fact. My post was about P-facts. I’m skeptical of the notion of M-fact, though you seem to like something along those lines.

    The trouble with M-facts, is that they are useless. We have no access to them, as far as I can tell. It’s the P-facts that are useful.

  23. Neil Rickert: The trouble with M-facts, is that they are useless. We have no access to them, as far as I can tell. It’s the P-facts that are useful.

    Just to clarify, how do facts (P and M) map to data?

    ETA presumably not at all for M-facts

  24. Neil Rickert: KN has answered this pretty well.

    I’m aware that there are at least two different views of what constitutes a fact. In ordinary use, a fact is a true statement. So call that a P-fact (or propositional fact). Some people say that facts are metaphysical things, so call that an M-fact. My post was about P-facts. I’m skeptical of the notion of M-fact, though you seem to like something along those lines.

    The trouble with M-facts, is that they are useless. We have no access to them, as far as I can tell. It’s the P-facts that are useful.

    Nice. Now I know that it’s M-facts I have a problem with.

  25. Alan Fox: Just to clarify, how do facts (P and M) map to data?

    ETA presumably not at all for M-facts

    If M-facts stood for Model-facts I could maybe get behind them 🙂

  26. hotshoe: It’s only recently that watching KN in action I get the feeling there might be some worth to this “thinking about thinking” business after all. But I’m still in a state where I admire xis replies as I admire a tightrope walker, appreciating the feat but wondering if I do quite get the point of the whole effort.

    Yeah, I have the same feeling.

  27. Alan Fox: It’s communication. Language can fail at that.

    I’m not sure I understand the difference you are referring to. I don’t think you mean that bees cannot fail to communicate but language can fail: presumably, there could be a dancing bee with six left feet, so to speak.

    As others have commented, I’m usually mentally winded by KN’s posts, but in a good way. The same applies for many of the other posters here. That is why it’s a great forum for improving one’s mental fitness, as long as you put the effort into your workouts.

    In this case, I am trying to puzzle through what I read as his mappings thought->language->body habits and thought->body habits->ambient environment along with the added proviso that language was always used in a social context.

    As part of trying to puzzle that out, I wondered if it was specific to people.

  28. Neil Rickert:
    As I see it, a newborn infant already has meaning and experience. It is just that the meaning and experience are detached from the world. Learning involves the child finding ways of broadening its meaning and experience to encompass the external world.
    […]
    The AI community wants to start with syntactic expression and make meaning an add-on. We start real life with meaning, and the syntax is an add-on to allow us to communicate our meanings.

    I think that the fetus must develop the ability to interact with the world at some point while it is still in the womb. The brain/nervous system would have developed to the point where it is involved in redesigning how the baby interacts with the world (to paraphrase your description).

    Subjectivity would begin during that period before birth.

    The development path for fetuses to do that would have come from past interactions with the world outside the womb. These interactions are captured by evolution.

    To build artificial autonomous agents with subjectivity, researchers will need to replicate those “redesigning” mechanisms and that development path somehow. That might be “hard coding” them, but more likely it would involve starting with more primitive capabilities like those of the fetus and exposing the nascent artificial agent to interactions with the world in a controlled environment like the womb.

  29. BruceS:

    Alan Fox: It’s communication. Language can fail at that.

    I’m not sure I understand the difference you are referring to. I don’t think you mean that bees cannot fail to communicate but language can fail: presumably, there could be a dancing bee with six left feet, so to speak.

    I might have been shell-shocked from discussing generations in another thread where language at times did not seem to result in communication. Communication is the transfer of information from one organism to another (or others); language is a subset of communication involving the production and reception of sound.

    Hymenoptera and communication (the whole eusociality thing) is fascinating. Bee dancing goes on in darkness so the transfer of information involves touch. Is smell involved too? It wouldn’t surprise me. Bees in the wrong hive are identified by smell and ejected or killed. Ants (also hymenopterans) have a vast array of pheromones (35 or so at the last count, I think, in some species) that are used to communicate. One might wonder whether such pheromonal communication was semiotic, but that way lies madness. (Check out threads here involving Upright Biped.)

    As others have commented, I’m usually mentally winded by KN’s posts, but in a good way. The same applies for many of the other posters here. That is why it’s a great forum for improving one’s mental fitness, as long as you put the effort into your workouts.

    Agreed, but don’t let on to KN. It’ll only go to his head.

    In this case, I am trying to puzzle through what I read as his mappings thought->language->body habits and thought->body habits->ambient environment along with the added proviso that language was always used in a social context.

    No question language, human evolution and sociality in humans are inextricably linked. You might be interested in The Mating Mind which strongly (perhaps too strongly) promotes the idea of sexual selection as being an important factor in the evolution of language.

    As part of trying to puzzle that out, I wondered if it was specific to people.

    Pretty much, notwithstanding cetaceans and primates who have sophisticated verbal communication. There might be an argument for sexual selection in “singing” in the humpback whale as there is sexual dimorphism there. But humans surpass that with poetry, singing, theatre etc.

  30. Kantian Naturalist:
    Still thinking about the language of bees over here.

    My humble opinion is that to be language, a communication system must be able to encode novel information that can be understood by anyone in the community. I would say bee dance is language because anyone can decode it.

    DNA is not a language because novel sequences have no predictable meaning and cannot be understood or decoded.

  31. Hi Neil,

    I’m aware that there are at least two different views of what constitutes a fact. In ordinary use, a fact is a true statement.

    Not always. A fact can also be a state of affairs, as in the phrase “a statement of fact”. If “fact” always referred to a true statement, then “a statement of fact” would be redundant.

    So call that a P-fact (or propositional fact). Some people say that facts are metaphysical things, so call that an M-fact. My post was about P-facts. I’m skeptical of the notion of M-fact, though you seem to like something along those lines.

    If an M-fact is a state of affairs, then P-facts are utterly dependent on M-facts. A P-fact can only be a P-fact if it reflects an M-fact.

    The trouble with M-facts is that they are useless. We have no access to them, as far as I can tell. It’s the P-facts that are useful.

    It’s true that we have access only to P-facts (generalized to include representations as well as propositions) and not to M-facts. M-facts are still useful, however, in that P-facts are constrained by M-facts. Otherwise, facts wouldn’t be about reality at all.

  32. keiths: A fact can also be a state of affairs, as in the phrase “a statement of fact”.

    Talking about “states of affairs” is mostly a way of sounding as if you are saying something profound, while not actually saying anything at all.

  33. Neil Rickert: I’m not sure what you mean by that. If you are saying that logic is a human invention, then I would agree, though with the proviso that other animals probably do something equivalent.

    That seems like a trivial truism, so says nothing much at all.

    That is the theistic view of “truth”. Outside of theistic views, the received view of “truth” seems to be somewhat similar, that “truth” somehow has an external basis. However, “truth” is a mess. We use it inconsistently, and philosophical theories of truth don’t seem to work very well.

    Yes, logic is a human invention, but that is because it is a special case within a bigger equation: accurate conclusions in relationship with other accurate conclusions!
    Logic is therefore fallible, because we never actually figure out the accurate conclusions in order to compare them to others, or to attempts to discover them.

    For example: if miracles are an accurate conclusion, then it is logical to invoke them given some need and evidence. Yet if miracles are not an accurate conclusion, not true, then it is illogical to invoke them for some need and claimed evidence. And so on.
    Logic fails all parties, because only the accurate conclusion of whether miracles are true or not can bring true logic. Yet all parties would be logical from their own presumptions.

  34. Neil,

    Talking about “states of affairs” is mostly a way of sounding as if you are saying something profound, while not actually saying anything at all.

    Not at all. A “state of affairs” is just how things are — a situation.

    A boulder balanced at the top of a hill is a particular state of affairs. The same boulder resting at the bottom of the hill is a different state of affairs. It is an M-fact, and it remains an M-fact even if there is no corresponding P-fact — that is, even if no one knows or asserts that the boulder is there.

    P-facts depend on M-facts and are constrained by them.

  35. Is this all not a bit beside the point and semantic?

    Hoping this isn’t a classic illustration of the Dunning-Kruger effect (you would tell me guys, wouldn’t you?), let me try a simple metaphor. Our perception is limited. We live in fog. We can penetrate the fog but it limits how far we can see. We can improvise to extend our perception by tools like the Hubble telescope, but there is still the limit of how far light will travel in the available time since its (apparent) beginning. We don’t know if the universe extends beyond that limit. Similarly, at a much closer level, our perception is limited. The fog is patchy at the human scale and then becomes dense at the quantum level. The fog at either end of the scale of our universe (and beyond three dimensions) may always be an ultimate barrier against our ability to perceive it or imagine it. Everywhere else (reality, perhaps?) is available for scrutiny now or in the future via scientific enquiry and shared experience, especially as communication and data sharing continue to expand at a seemingly exponential rate.

    ETA does it help or hinder to consider unknowable facts, U-facts if you will, or would that be an oxymoron? (Square circles can exist in Banach space) 🙂 . The existence of God is a U-fact, perhaps.

    ETA2 Should there be S-facts, too (S = supernatural), or are they a subset of U-facts? I’d argue for I-facts, where I stands for imaginary.

    But all this is largely irrelevant to understanding human cognition. And whether we talk about M-facts and P-facts, maps and territory or data and models, is this not just a preamble to how and whether we can understand how human brains work?

    ETA3 “reality perhaps?”

  36. We must find ways of anchoring our propositions to reality.

    *Note to self. Read OPs and comments carefully before jumping in.*

    @KN

    While looking for models related to the Big Bang and singularities (another thread), I came across an article (PDF) by David Chalmers which might be somewhat related to the matters in play in this thread. I know Chalmers’ name has popped up on occasion but I can’t recall whether you regard him favourably.

  37. Robert Byers: logic is therefore fallible because we never actually figure out the accurate conclusions in order to compare them to others or attempts to discover them.

    I’m not quite sure what you are saying there.

    I sometimes say that when people disagree in what is said to be a logical argument, the disagreement is usually about the premises rather than about the logic. I think you are making a similar point.

  38. keiths: Not at all. A “state of affairs” is just how things are — a situation.

    I often see what seems to me to be a circular argument or a circular definition. The person giving the circular definition uses alternate wording such as “state of affairs” or “situation” in an apparent attempt to hide the circularity. However, circular definitions fail to define.

    A boulder balanced at the top of a hill is a particular state of affairs.

    “Boulder”, “balanced”, “top of hill” — you are appealing to conventional meanings. Those meanings are human artifacts.

  39. Neil Rickert:
    BruceS,
    I pretty much agree with all of that.

    I don’t understand why you had trouble with your ideas in philosophical circles. They remind me of philosophical discussions of scientific realism versus anti-realism (roughly: are P-facts truths about reality, i.e. M-facts, or simply tools for successful prediction?), of Kant’s distinction between phenomena and things-in-themselves (again, roughly, P-facts versus M-facts), and of pragmatic approaches to judging the “truth” of theories, and of the facts described by those theories, by how well they work for human goals.

    So it seems to me that your ideas would fit easily into philosophical discussions.

    I also understand that you worked on the learning mechanisms for acquiring useful P-facts and the models/maps that use them (which is how I understand your phrase “anchoring ourselves to reality”), and also on how these mechanisms could have evolved. That’s the stuff I referred to as “evolutionary psychology” in another thread. It too would make a great post, IMHO.

  40. Alan Fox: But all this is largely irrelevant to understanding human cognition. And whether we talk about M-facts and P-facts, maps and territory or data and models, is this not just a preamble to how and whether we can understand how human brains work?

    To be clear, my interest has been in cognition, or how cognitive systems work. How brains work is a rather different question, though no doubt they are used in the implementation of cognition. My interest is in broad principles, not in implementation details.

    Talk of M-facts and P-facts comes up because they are part of a 2000-year tradition of attempting to talk about cognition. But that tradition is mostly one of taking a “God’s eye view” approach, though that is often denied. My study has been from an “organism’s eye view” approach, which I see as the way to get at how cognitive systems could have evolved. However, my approach is largely disregarded, apparently because it is not a “God’s eye view” approach, or because it comes to very different conclusions from what arises out of a “God’s eye view” approach. So I have to be able to respond to the objections from the “God’s eye view” people.

    To a first approximation, “objective” is a reference to a putative “God’s eye view”, while “subjective” is a reference to an organism’s eye view. The Chalmers “hard problem” is to give a “God’s eye view” (or objective) account of the subjective. Many (but far from all) people who have attempted to study the hard problem have concluded that the subjective is an illusion.

    Starting the other way, from an organism’s perspective, what I see as a kind of hard problem (though not as hard) is the problem of starting with an “organism’s eye view” and attempting to get at some approximation of an objective account of reality. It is a different problem, but a far more solvable one than what Chalmers proposed. And, as I see it, that is the problem that a cognitive system has to solve.

  41. Kantian Naturalist:
    Still thinking about the language of bees over here.

    It seemed to me it might be considered a primitive language in order to provide an evolutionary precedent for human language.

    There is a similar situation with consciousness, of course (if it evolved, what aspects of it do other animals have).

    If it is a language, then perhaps we could illustrate P-facts using this scenario:

    A botanist states the position of a flower containing interesting pollen. That is a human P-fact. A bee dances the position of the same pollen. That is a bee P-fact.

    Are they making the same claim about the underlying pollen M-fact?

  42. Neil Rickert: Starting the other way, from an organism’s perspective, what I see as a kind of hard problem (though not as hard), is the problem of starting with a an “organism’s eye view” and attempting to get at some approximation of an objective account of reality.

    I think that may be where my problem lies, being able to go anywhere except via my own limited perception. A “God’s eye view” is a U-fact to me.

  43. BruceS,

    I’m reluctant to include bee communication in the set of things called “language”. I’m going to spend a little time checking what has already been said about such distinctions before reinventing the wheel, though. I do hear the siren call of semiotics, even of biosemiotics. 🙂

    ETA

    I was going to post an OP a while ago, and I came across this on-line journal, which has published a paper (full text available as a download; warning: big PDF) entitled “Biosemiotic Entropy: Disorder, Disease, and Mortality”. Never got round to writing the OP. Might be of interest.

  44. Alan Fox:
    Pretty much, notwithstanding cetaceans and primates who have sophisticated verbal communication. There might be an argument for sexual selection in “singing” in the humpback whale as there is sexual dimorphism there. But humans surpass that with poetry, singing, theatre etc.

    No doubt human language is much more sophisticated than any animal communication, much like human consciousness compared to animal consciousness.

    I guess it depends how one defines language. My intuitive idea would have involved syntax, semantics, and the ability to communicate novel situations. I had thought that bee dancing incorporated all of that at some level.

    You obviously know more about bee dancing than me. Plus a quick check of Wikipedia shows my characterization of language is incomplete.

    Displacement, Bee Communication, Language

    But I still think there might be something to the scenario in the explanation of the evolution of human language (building on mechanisms that other species use). Also, as in my other post, as a way of illustrating bee P-facts versus human P-facts.
