How fast can you say the same thing?
July 20, 2011 10:43 AM

Is there any data out there relating to the relative speeds that different spoken languages can express human thought?

I'm curious as to whether different languages take longer or shorter times on average to express the communicator's thoughts. If I were to take a lot of random sentences in German, for instance, and translate them into two other languages, would, say, French take longer on average to express the sense of the original sentences than Italian? (Languages chosen just for example.)

I'm assuming that the speed of thought is a physiological constant across speakers of different languages, so do some spoken languages express thoughts more efficiently, while some take longer, giving the speaker more time to think about the concept being expressed?

Looking for data/studies rather than conjecture please.

(This is another question coming on the back of the BBC's Planet of the Apemen which mentioned that Neanderthals would have been likely to speak more slowly than modern humans due to physiological constraints of their throat/voice box structure)
posted by merocet to Science & Nature (18 answers total) 15 users marked this as a favorite
 
I've always been interested in how American Sign Language compares to spoken languages on this scale.
posted by Jairus at 10:46 AM on July 20, 2011


Check out the Sapir-Whorf hypothesis: the hypothesis that language determines thought and that linguistic categories limit and determine cognitive categories.
posted by SollosQ at 10:52 AM on July 20, 2011


The short answer is, "No."

As SollosQ said, look into some of the studies and observations in linguistic relativity, and you'll note a variety of different studies examining how languages express specific domains of thought and how they're perceived by native speakers. E.g., you'll find a range of studies with a range of results showing that certain languages can affect the granularity with which native speakers perceive or at least express specific things such as color or orientation or numbers.

But any study attempting to argue that specific languages enable users to better or more quickly express 'thought' would be nearly impossible, as first you'd have to quantify what constitutes the entire domain of human thought.
posted by ernielundquist at 11:08 AM on July 20, 2011


According to Malcolm Gladwell's book Outliers, Chinese numbers are faster to say than English.
posted by fings at 11:21 AM on July 20, 2011


Mod note: please don't insta-derail this with a "whether we think in words or not" sidebar, thank you.
posted by jessamyn (staff) at 12:19 PM on July 20, 2011


Best answer: This question can't really be answered as asked. Mainly, there are many different ways - in any language - to express the same 'thought'. Some of those ways take longer to say (Mean Utterance Length) than others, but that is a product of many factors. And taking longer to say something (for whatever reason) isn't necessarily 'allowing the speaker more time to think'...language doesn't work this way.

Think about these different ways to express the same 'thought':

"Excuse me ... if it wouldn't at all, uh, trouble you ... could I perchance have some caaaaaake?"
"Excuse me, I'd like some double chocolate cake with coconut frosting please."
"Cake, please?"
"CAKE!"

Now imagine these translated. Next imagine that your speaker, hearer and settings are all mixed up such that you have various combinations of formality, social distance, age, noise, time constraints, mood, politeness strategies, or physical impediments to speaking (internal or external).

What we can see is that there are various reasons you might want to use the long-form way to express a thought, and again many of those have embedded pragmatic connotations or additional 'thoughts' in them too. But comparing the literal 'length of time' it takes to speak an utterance as an indicator of a language's complexity (or lack thereof) or a brain's processing is a completely false analogy at multiple levels. It also disembodies language from the speaker.

A claim about a language's complexity or efficiency - especially as compared to another - is really a statement about what those speakers' brains can process and produce (and vice versa). Which is to say, there aren't general populations of people out there who are speaking 'simpler' languages because their brains can't create or use or express anything too complex, presumably (fyi, language contact situations where pidgins and creoles arise are a different phenomenon entirely). This is important because your question unintentionally carries the implication that there is perhaps an across-the-board and direct correlation between complexity/efficiency of thought (and therefore brain processing, and therefore the brain of the speaker) and speaker rate/length/complexity of utterance (the code/language used).

All that said, there ARE languages (and dialects) that, on average, have faster speaker rates than others. But this is more a feature of the language like any other feature, rather than, again, an indicator of some advanced or more efficient thought processing.

Also, it's very hard to connect processing (thought) and production (actual speech) in this manner.

I know that I've thrown a lot of stuff out here, and probably confused the issue while also not answering your question. But I wanted to put those ideas out there to show the various ways that this topic could be explored further and which premises are more fruitful to start with.
posted by iamkimiam at 12:21 PM on July 20, 2011 [2 favorites]


Jairus, BSL allows a certain parallelism in gestures, but I know that ASL is very different.
posted by scruss at 12:46 PM on July 20, 2011


Look at it this way: words are composed of phonemes, and phonemes take roughly the same amount of time to enunciate. So if you want to reduce the time it takes to speak a word without reducing the total number of words, you must minimize the average number of phonemes words contain.

The number of distinguishable words of any given length will be a subset of the possible sequences of that many phonemes. The number of possible sequences of a given length goes up very sharply with the number of different objects you can choose from for each position in the string.

Therefore, the more phonemes a language has, the faster you would tend to be able to say an average word for any fixed size of a vocabulary.
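To put rough numbers on this (a toy sketch that assumes a fixed, uniform inventory and ignores phonotactics - real languages only permit some phoneme sequences - so the figures here are illustrative, not measured):

```python
import math

def min_word_length(vocab_size, num_phonemes):
    """Shortest word length L such that num_phonemes**L >= vocab_size,
    i.e., enough distinct phoneme strings to give every word its own form."""
    return math.ceil(math.log(vocab_size) / math.log(num_phonemes))

# Toy comparison: a 50,000-word vocabulary under different inventory sizes.
for p in (15, 45, 100):
    print(p, "phonemes ->", min_word_length(50_000, p), "phonemes per word minimum")
```

With a 50,000-word vocabulary, 15 phonemes force words of at least 4 phonemes, while 45 or 100 phonemes get you down to 3; and in the limiting case where the inventory is as large as the vocabulary, every word is a single phoneme.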
posted by jamjam at 1:06 PM on July 20, 2011


jamjam: "Therefore, the more phonemes a language has, the faster you would tend to be able to say an average word for any fixed size of a vocabulary."

Not at all. What is the frequency and the quality (tense/lax, long/short, etc.) of each phoneme? Is phoneme length contrastive? What about elision and other shortening and/or lengthening processes? Variation? At the phonological level, these things add up and crucially contribute to overall utterance length, especially when it comes to syllable timing (e.g., English) or mora structure (Japanese). Morphological complexity (agreement across constituents in a sentence, adding one or several phonemes/segments to a/some/each word(s)) interacts with phonology as well.

You can't look at the size of the phonemic inventory alone, or even the distribution, or even the quality of the segments; it's all part of a much bigger system that has crucial bearing on how and how much those phonemes are used. And what gets left out.
posted by iamkimiam at 2:02 PM on July 20, 2011 [1 favorite]


Best answer: Therefore, the more phonemes a language has, the faster you would tend to be able to say an average word for any fixed size of a vocabulary.

You're missing something huge: redundancy as a form of error checking. There's a reason why you can stand next to a revving motorcycle and talk (shout) to someone else and still be understood. It's because the language isn't compressed to the bare minimum.

This goes back to Shannon, who showed that error correction requires a reduction in the efficiency of bandwidth utilization. Natural languages have to make a tradeoff between the two, just like any other communications system, and in general natural languages have a great deal of redundancy, in the service of error correction.

Noun gender is an example of that. (It's one that English discarded.)

English has a bunch of suffixes which indicate parts of speech, so that -ly means an adverb and -ness a noun, for example. Those things don't really directly convey information, but they reduce confusion and ambiguity.

There was a Heinlein story featuring a language in which all redundancy had been eliminated, one which could express meaning precisely and very, very rapidly. But Heinlein wrote that before Shannon's work, and his hypothetical language would have been subject to immense problems in any noisy environment.

Wri*t*n E*g*ish is *lso h**hly r*du*ant, an* t*at's w*y y*u ca* u*de*sta*d t*is se*te*ce. (maybe with a bit of work.)
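The tradeoff can be sketched with the simplest error-correcting code, triple repetition (a toy Python model, obviously not how natural language actually builds in redundancy): tripling the bandwidth cost buys the ability to correct any single flipped symbol.

```python
def encode(bits):
    """Triple-repetition code: send every bit three times (3x bandwidth cost)."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority vote over each group of three: corrects one flipped bit per group."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                      # noise: the channel flips one bit in transit
assert decode(sent) == message    # the message still gets through intact
```

A maximally compressed code has no such slack: flip one symbol and you get a different valid message, with no way to notice.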
posted by Chocolate Pickle at 2:16 PM on July 20, 2011 [1 favorite]


Think of increasing the number of phonemes as increasing the bandwidth of a communication channel, Chocolate Pickle; it would shorten the time it takes to transmit a given message without necessarily affecting the redundancy of that message.
posted by jamjam at 2:51 PM on July 20, 2011


Best answer: All that said, there ARE languages (and dialects) that, on average, have faster speaker rates than others. But this is more a feature of the language like any other feature, rather than, again, an indicator of some advanced or more efficient thought processing.

I think you may be looking at the OP's question backwards. From the question:

I'm assuming that the speed of thought is a physiological constant across speakers of different languages, so do some spoken languages express thoughts more efficiently, while some take longer, giving the speaker more time to think about the concept being expressed?

So I think the OP's point is that language features that promote rapid speaking, or that have a denser amount of information per second (on average, as measured in some sort of study), may result in people expressing their thoughts faster relative to how fast they think (which for the purposes of this question is assumed to be constant). It's difficult to control for cultural differences across languages, but theoretically you could compare, say, recordings of debates in many different languages and see whether some languages are prone to expressing information more rapidly than others, which would in theory make it more difficult to think fast enough to debate properly in those languages.
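As a toy illustration of what such a study would measure (the numbers below are invented, not data from any real language): a fast syllable rate and a high information load per syllable can trade off, so two very different-sounding languages can end up with similar information rates.

```python
# Hypothetical figures, for illustration only - not measured values.
languages = {
    "Language A": {"syllables_per_sec": 7.8, "bits_per_syllable": 5.0},
    "Language B": {"syllables_per_sec": 5.2, "bits_per_syllable": 7.5},
}

for name, f in languages.items():
    # Information rate = how fast you talk x how much each syllable carries.
    rate = f["syllables_per_sec"] * f["bits_per_syllable"]
    print(f"{name}: {rate:.1f} bits/sec")
```

In this made-up example the rapid-sounding Language A and the slower-sounding Language B transmit information at the same rate, which is exactly the kind of result a cross-language comparison would need to test for.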
posted by burnmp3s at 3:26 PM on July 20, 2011


it would shorten the time it takes to transmit a given message without necessarily affecting the redundancy of that message.

It could be used that way, but it could also be used to increase redundancy without increasing effective bandwidth.
posted by Chocolate Pickle at 3:42 PM on July 20, 2011


Maybe this is an incorrect metaphor, but isn't this like debating the difference between CISC and RISC processors? One version has a lot of components, allowing specificity in fewer 'steps', while the other is simpler but requires 'interpretation'. Ultimately, which one is faster ends up being dependent on actual speed, task flexibility, etc.

Jargon is a great example of how language becomes more efficient. When I'm talking about building details I can say it faster using construction jargon, specifically because it includes single words that represent complex concepts you would not otherwise have a word for. However, it isn't reasonable to say a language contains all its sets of jargon, because there is no individual who actually speaks that.

Between jargon, cultural familiarity and very fast enunciation, radio announcers can fit some pretty broad and detailed legal disclaimers into a six-second spot on the radio...
posted by meinvt at 3:59 PM on July 20, 2011


To make my argument clearer, iamkimiam, though no truer, suppose English had two million words (a figure I've seen), but also two million phonemes instead of the forty-five odd the Wikipedia article on phonology talks about.

Then every word in English could be expressed with a single phoneme, and we could say entire sentences as fast as we now say single words.

It would be a WYSIWYG world.
posted by jamjam at 6:26 PM on July 20, 2011


Best answer: Let me put this another way...if thought process/rate is assumed to be constant, the language that has the most information density per second is going to be the one with the fastest average speaker rate.

But even that's false, and it's arbitrarily assuming a lot of things. And completely ignoring pragmatic and social information that is carried in the signal. One feature of which is speaker rate. So, if I'm speaking at the rate that is the norm for my speech environment and 'packing in' a lot of social and grammatical information (intonation, one lexical choice over another, specific grammatical markings, socially-marked pronunciations of particular phonemes that index certain attitudes or stances, etc.), maximizing the message with lots of these social and grammatical markers, who is to say that the thoughts I'm trying to convey aren't all in there?

...How do we code the rising intonation at the end of an utterance? If I'm asking a question vs. speaking in a SoCal dialect vs. impersonating a California teenager, would I give each of these the same amount of 'weight' as far as meaning/thought being conveyed? They all take the same amount of time, but clearly have different 'meaning loads' ... dependent on audience, too, of course.

Which brings us to this other problem of hearer processing...a message that is too 'overloaded' as such does not serve its point and actually FAILS to convey the thoughts of the speaker. We as speakers more or less attune for this by using all the resources we have to maximize the thought conveyance loosely within the bounds of the code. So sometimes we are actually manipulating the speaker rate feature to pack in some extra thoughts there.

The reason why I've approached this whole question ass backwards is because it might be more apt to start from the assumption that the thought conveyance (information in the signal) is the constant, and all the other features of the particular code are manipulated to express that in varied and creative ways. Such is the richness of all language.
posted by iamkimiam at 7:33 PM on July 20, 2011


But you can't *have* two million phonemes, for anatomical reasons. Since all languages are spoken by humans with human mouths and listened to by humans with human ears there's going to be some upper bound on how big the phoneme space is and some lower bound on how far apart two sounds have to be in order to call them different phonemes.

(IANAL. IAAM.)
posted by madcaptenor at 9:46 PM on July 20, 2011


Response by poster: Well I started off fascinated by a particular aspect of my question (as spotted by burnmp3s) and then was completely blown away by the richness and thoughtfulness of the answers. Thanks to everyone who answered.
posted by merocet at 8:28 AM on July 21, 2011

