How do we jump into a closed set?
March 28, 2006 6:52 AM   Subscribe

If humans communicate via symbols and can only understand something (i.e., give it some semiotic 'identity') by way of comparison to already-familiar things, how do we begin at all?

If the identity we assign to something—insofar as we acquire whatever 'stuff' we need to make it a topic of conversation—has its roots in things we already understand, we're breaking into a closed set. WTF?

I get what Pinker said about innate capacity for language, but a CPU's innate capacity to act on certain instructions to produce certain results doesn't generate any input for it to process...

I'm posting this in Science & Nature because I have more input from Philosophy and Religion than I can handle already. But if you've got some stellar rebuttal of Aristotelian logic, or some especially clever intellectual poo-flinging that shuts Augustine down, don't let me stop you.
posted by Yeomans to Science & Nature (18 answers total) 8 users marked this as a favorite
 
Answer 1: We have an inborn stockpile of already-familiar things. See, for instance, the way young animals can respond to particular visual stimuli.

Answer 2: The "humans ... can only understand something ... by way of comparison to already-familiar things" hypothesis is kind of a leap. Are you sure you buy that?
posted by Wolfdog at 7:01 AM on March 28, 2006


Best answer: We start with concrete things. I gesture at the stream, you gesture at the stream. We lie in it, feel the water moving, taste it, try to smell it, get a noseful. I make a sound that comes out "Stream." You make a sound that sounds like "Stream." We've agreed that's a stream.

We spend so much time together naming things, I start to feel a strange, unnamed feeling, you start to feel a strange, unnamed feeling, and I turn to you, and with metaphor, I gesture to myself, washing my hands down my chest where I feel the feeling, and say, as I look meaningfully into your eyes, "Stream." And you get it, because you know that the stream is cool and flowing and it washes over your whole body when you lie in it.

Later on, we figure out that meringue is a cloud on our tongues, and hate is lightning and thunder and a broken pot, and that death is a sleep that never stops sleeping. CPUs don't do metaphor, but higher primates do. That's how we break a closed set.
posted by headspace at 7:06 AM on March 28, 2006 [14 favorites]


It's incorrect to start building up analogies between the way a computer works and the way a brain works.

Computers are really just hella-fast adding machines. They don't "think" like animals think, and especially not like humans think. Humans, on the other hand, really suck at arithmetic, but they can draw inferences and reach conclusions in a way that computers may never be able to simulate.

If the identity we assign to something—insofar as we acquire whatever 'stuff' we need to make it a topic of conversation—has its roots in things we already understand, we're breaking into a closed set. WTF?

Are you talking about correlations of mental states between individuals?
posted by bshort at 7:08 AM on March 28, 2006


Response by poster: bshort, we suck at arithmetic, but we're still logic processors at heart. Even when the reasoning is totally fallacious, humans still persist in trying to appeal to rational syllogisms when explaining, well, anything.

I'm talking about why socialization works at all—why a group can integrate a new member, instead of having to accommodate totally different basic assumptions which he/she will draw on to generate the "stuff" that language is made of (i.e., making sense of things, and thus, how one person or another adds new 'lumps' of knowledge to their pile).
posted by Yeomans at 7:20 AM on March 28, 2006


But we also acquire that "stuff" from conversation itself, and since the act of language formation is inherently introspective (intentionality and all), talking about conversation ends up being surprisingly easy.

That's what I was talking about when I said humans were really good at drawing inferences and reaching conclusions.
posted by bshort at 7:25 AM on March 28, 2006


it might help if you restrict what you consider to mathematics. that will help you avoid an awful lot of poorly defined waffle.

if you look at axiomatic mathematics, you have a clear example of how surprisingly complex things can be built on top of apparently trivial axioms using standard processes.
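That buildup from trivial axioms can be sketched in code. The following is a hypothetical toy of my own, not anything from the thread: Peano-style numerals where the only primitives are a zero and a successor operation, and addition falls out of them by recursion.

```python
# Toy Peano-style arithmetic: the only primitives are ZERO and succ();
# everything else is defined on top of them.

ZERO = ()

def succ(n):
    """Successor: wrap n in one more layer of tuple."""
    return (n,)

def add(a, b):
    """Addition, defined recursively from succ alone:
    a + 0 = a, and a + succ(b) = succ(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

def to_int(n):
    """Read a numeral back out by counting its layers."""
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

two = succ(succ(ZERO))
three = succ(succ(succ(ZERO)))
print(to_int(add(two, three)))  # 5
```

Nothing in `add` "knows" arithmetic; the complexity is manufactured by a standard process (recursion) over an apparently trivial starting point, which is the point about axiomatic systems.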

within that context, what are you asking?
posted by andrew cooke at 7:27 AM on March 28, 2006


Best answer: A large part of language involves metaphors grounded in the human body, which forms a common perspective.
posted by MonkeySaltedNuts at 7:27 AM on March 28, 2006


bshort, we suck at arithmetic, but we're still logic processors at heart. Even when the reasoning is totally fallacious, humans still persist in trying to appeal to rational syllogisms when explaining, well, anything.

I strongly disagree with your conclusion.

If we were logic processors at heart, you'd be hard pressed to find anybody who did not think in a basically logical manner at all times. In fact, there are plenty of people who don't think logically, and there are times when nobody thinks logically: under certain chemical influences, for instance, or after brain damage. Babies also don't think logically. You'd think that if we were logic processors at heart, the default state we'd revert to when all cultural and contextual influences were stripped away would be logic, but it's rather the opposite.

There's also no evidence that, even if the people in whatever population you have in mind are in fact basically logical, the set of all people in all time periods and all cultures is logical. You'd need to show that.

Also, if we were basically logic processors, rather than having logic grafted onto some more fundamental system, you'd expect logical fallacies to cause us trouble at a basic neurological level: computers don't handle cognitive dissonance or doublethink well, but people do.

I think a more reasonable proposal is that we have discovered logic and adopted it as a deeply rooted method of understanding the world, but that our fundamental way of thinking is something less methodical. We see ourselves as basically logical beings only because we regard logic as the best way of thinking, and because the reigning technological metaphor of our time is the computer, so we tend to see everything in terms of such systems.
posted by Hildago at 7:50 AM on March 28, 2006


Is now a good time to bring up something like Loglan?
posted by meehawl at 7:55 AM on March 28, 2006


Response by poster: Hildago, I didn't mean to suggest that humans reason with precise, accurate logic. But the framework—we can't break out of it, as far as I can tell. Even if you justify or explain something with totally false information, you still make your appeal (whether thinking to yourself or addressing others) via the same path.

"That [brown] quadruped is a dog. It's purple. So, dogs (a 'named' concept that others have a grasp of) can be purple-colored (something, again, that has currency to a group, but for the observer in this example doesn't agree with what the other members define as purple)."

And what I'm saying about this is that we already understand 'purple-ness', or it couldn't be pointed out to someone who calls brown-colored dogs purple-colored that the identifier, "purple", should be used to refer to some other color, and "brown" used where "purple" had been.

But if we already understand everything we'll ever encounter—that is, we're only naming the things we see, hear, taste, etc.—how do we arrive at that 'closed set' (i.e., the substance we use when we (inevitably, I'm suggesting) base our portrayal of something we're talking to another person about in terms of likenesses and differences to mutually familiar things)?
posted by Yeomans at 8:18 AM on March 28, 2006


Response by poster: So here's an idea (thank you MonkeySaltedNuts and headspace): it's meta-thought that makes shared (and shareable) understanding work. We can think about thinking about thinking about something. Abstract thought and all that.

Neato. :-)

(Or I'm wrong. Am I wrong?)
posted by Yeomans at 8:23 AM on March 28, 2006


Best answer: For what it's worth, I recently spoke with Dan Everett, a really cool linguist/anthropologist (he of Pirahã fame), about Quine's gavagai problem, which is not unrelated to your query.

The idea is that an anthropologist visiting an uncontacted people sees a rabbit dash past. One of the people points at it and says, "Gavagai!" Maybe gavagai is their word for rabbit, but maybe it means "food," or "fast," or "cute!" Unless you already speak the language (or share another language in common), how can you possibly learn it?

Anyway, the point of all this is that Dan Everett told me that in his experience it isn't a big deal. He's learned a couple of languages this way, from the ground up, and apparently most people point at a rabbit and say "rabbit!"
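What Everett describes has a simple statistical shadow: a learner who only ever sees word-plus-scene pairings can still converge on "rabbit," because the rabbit is the one thing reliably present whenever the word is used. Here is a toy cross-situational sketch of my own (the words and scenes are invented; this is an illustration, not Everett's or Quine's method):

```python
from collections import Counter, defaultdict

def learn(observations):
    """Toy cross-situational learner: no translations are ever given,
    only (word, visible-scene) pairs. Keep co-occurrence counts and
    guess the referent most often present when each word is used."""
    counts = defaultdict(Counter)
    for word, scene in observations:
        for referent in scene:
            counts[word][referent] += 1
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

# Three scenes; "gavagai" is uttered in each, but only the rabbit
# appears in all of them.
observations = [
    ("gavagai", {"rabbit", "grass", "sky"}),
    ("gavagai", {"rabbit", "river"}),
    ("gavagai", {"rabbit", "grass"}),
]
print(learn(observations))  # {'gavagai': 'rabbit'}
```

One scene is hopelessly ambiguous, just as Quine says; across several scenes the ambiguity washes out, which may be part of why "from the ground up" works in practice.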

I guess all I mean is that I used to think headspace was wrong about that, but I've since changed my mind. You should also check out Dedre Gentner, especially her paper Why We're So Smart [PDF].
posted by Squid Voltaire at 8:30 AM on March 28, 2006


computers don't handle cognitive dissonance or doublethink well, ...

Right, like if you tell them "black is white! good is bad!" then they blow up.

Computers don't have any more trouble storing contradictory or inconsistent facts than people do.
posted by Wolfdog at 8:33 AM on March 28, 2006


Best answer: A lot of work goes on in this area under the heading "symbol grounding". See the links to Stevan Harnad's stuff at the end of the article. This should be helpful to you.
posted by teleskiving at 8:35 AM on March 28, 2006


Anyway, the point of all this is that Dan Everett told me that in his experience it isn't a big deal. He's learned a couple of languages this way, from the ground up, and apparently most people point at a rabbit and say "rabbit!"

We all do this when we're babies, and retain that capacity until we're about 6 years old, at which point we've pretty much figured out how concepts and properties tend to be sliced up in the language we learned. (I once read something interesting about kids who are just learning to speak experimenting with different kinds of groupings of meaning - for example, using the same adjective to denote a group of objects associated with a certain context, rather than describing a visible trait that doesn't vary between contexts. Kids have to experiment quite a bit to figure out whether 'gavagai' is a color, an emotion, a function, etc.; it's just that they start to catch on really fast, and are excellent at remembering patterns and applying them to new words, so we notice a few funny mistakes but don't realize it's an entire stage of the learning process happening behind the scenes. I think that inside jokes, and to some extent slang, occasionally provide examples of adults retaining this ability, though we're not usually aware that we're using it.)

It's only after we get used to parsing meanings in a smaller set of habitual ways that get reinforced by the way our mother tongue works - so much that it starts to feel natural, if not essential to the way not only words but also thoughts work - that it becomes possible to get confused and ask Quinean questions. (A lot of what Quine says is "the emperor's wearing no clothes!" stuff, in my opinion.)

It is possible to learn a foreign language from the bottom up, but much easier for adults to do it from the top down, with the aid of familiar templates, dictionaries, etc. We're so used to the idea that these learning tools are necessary that it seems impossible to do anything without them.

I like to imagine what it was like for polyglots in the old days, before the time of "Teach Yourself ___ with 2 CDs", or for anthropologists doing field work on speakers of incredibly small or isolated dialects - for example, Sandor Csoma de Körös walked from Eastern Europe to Tibet, picking up some 14 languages on the way, and wrote the first ever Tibetan-English dictionary (and was probably the first Westerner to learn to speak Tibetan at all), all with absolutely minimal learning materials. He just learned that shit, probably by talking to people and piggybacking on his knowledge of neighboring/shared languages - and pointing at a lot of things. On the other hand, the existence of pidgin languages is anecdotal evidence that a lot of adults are either unable, or just insufficiently motivated, to learn to speak a foreign tongue fluently without top-down instruction. (Speakers of pidgins basically wait for the next generation to come along and make it into a real creole, a la Chomsky.)
posted by xanthippe at 9:13 AM on March 28, 2006


I think one of your mistakes in thinking (since you asked) occurs very early on. To wit: that one can only understand something by giving it semiotic meaning. I would argue that that is one narrow slice of understanding something. I did not realize that I loved my wife by seeing a symbol for love on my wife. I do not paint by assembling the symbols for colors and shapes. I don't eat nan and dal because they are symbols for "Indian food." I don't listen to Godspeed You Black Emperor as the symbol for contemporary orchestral music. There's quite a bit that goes on in one's brain that is not language or even symbol, the Continentalists notwithstanding.

Let me suggest, as a former philosophy BA about to become a professor of art, that there's a vested interest on the part of academia to convince you of its generally reductivist and symbol-centric view. That is how academia necessarily functions: by reducing experiences to exchangeable symbols. That is not necessarily what one's actual experience of the world is, however.
posted by Slothrop at 10:38 AM on March 28, 2006


Right, like if you tell them "black is white! good is bad!" then they blow up.
Computers don't have any more trouble storing contradictory or inconsistent facts than people do.


You misunderstand me. I'm not saying computers blow up if they get a set of contradictory propositions. What I'm saying is that they cannot proceed without discarding one proposition, or "belief," either permanently or for the duration of an operation. In AI this is generally part of a truth maintenance system. That's about the extent of my knowledge of the subject, but I can assert with some certainty that computers do not handle cognitive dissonance the way we do, at least not yet, and that was my point. Your condescending dismissiveness is appreciated, though.
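To make the contrast concrete, here is a hypothetical toy in the spirit of (but far simpler than) a truth maintenance system; the class and names are my invention, not a real TMS. The store cannot hold a proposition and its negation at once, so it must retract one before proceeding, which is exactly the step a dissonance-tolerant human can skip.

```python
class BeliefStore:
    """Toy belief store that refuses contradictions by retracting."""

    def __init__(self):
        self.beliefs = set()     # (proposition, truth-value) pairs
        self.retracted = []      # beliefs given up to stay consistent

    def tell(self, prop, truth):
        conflicting = (prop, not truth)
        if conflicting in self.beliefs:
            # A person can sit with the dissonance; this store cannot:
            # the older, conflicting belief is dropped before proceeding.
            self.beliefs.discard(conflicting)
            self.retracted.append(conflicting)
        self.beliefs.add((prop, truth))

store = BeliefStore()
store.tell("black is white", True)
store.tell("black is white", False)
print(store.beliefs)    # {('black is white', False)}
print(store.retracted)  # [('black is white', True)]
```

The store never "blows up"; it just cannot keep both. Whether that retraction step is what distinguishes machine from human belief-handling is, of course, the point under dispute in the thread.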
posted by Hildago at 4:13 PM on March 28, 2006


Best answer: "and apparently most people point at a rabbit and say "rabbit!""

Well, yeah, especially if you are trying to help some strange anthropologist understand your language.

That whole conundrum is a major issue though when you are trying to figure out a dead language on tablets or the eventual SETI data we will get...
posted by furiousxgeorge at 2:43 AM on March 29, 2006


This thread is closed to new comments.