Speech may have a universal transmission rate: 39 bits per second
Interesting excerpt:
De Boer agrees that our brains are the bottleneck. But, he says, instead of being limited by how quickly we can process information by listening, we're likely limited by how quickly we can gather our thoughts. That's because, he says, the average person can listen to audio recordings sped up to about 120%—and still have no problems with comprehension. "It really seems that the bottleneck is in putting the ideas together."
That's the part I don't get. How do you determine the bits of information per syllable/word in different languages?
If I pick a random word such as 'sandwich' and encode it in ASCII, it takes 8 bytes, i.e. 64 bits. According to the scientists, a two-syllable word in English only holds about 14 bits of actual information. Does anyone understand what they did there, or have access to the underlying study?
You've stumbled upon the dark arts of information theory.
Sure, conveying "sandwich" in ASCII or UTF-8 takes 64 bits, but that's an encoding that is inefficient by default.
For starters, ASCII reserves a lot of unprintable control characters that we never use to write words. Even though we never use those characters, they still cost us bits: every code set aside for them makes the codes for the characters we do use longer than they strictly need to be.
Second, writing and speaking are two different things. If you think about it, asking a question isn't actually a separate "?" character; in speech, a question is just a modification of tone and word order applied to a sentence. As literate people we might think of a sentence as something written, but the truth is that speech has no such thing as a question mark. The same is true of all punctuation marks. A spoken English sentence therefore also encodes information about its tone, including tones we don't really know how to write down, and all of that is information.
This is the linguistic equivalent of Kolmogorov complexity, which asks for the absolute minimum amount of data required to represent something; in effect, that means devising the most efficient possible encoding scheme.
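Kolmogorov complexity itself isn't computable, but a general-purpose compressor gives a rough, practical upper bound on how many bits a text actually needs. A quick sketch (the sample text is just a made-up example):

```python
import zlib

# Repetitive text: lots of redundancy for the compressor to squeeze out.
text = ("the quick brown fox jumps over the lazy dog " * 20).encode("ascii")

raw_bits = len(text) * 8
compressed_bits = len(zlib.compress(text, 9)) * 8

print(raw_bits, compressed_bits)  # compressed size is far below the raw ASCII size
```

The gap between the two numbers is exactly the "wasted" encoding the parent comment is talking about: the raw ASCII size measures the encoding, not the information.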
The Japanese would like to have a word with you.
Thanks a lot for these insights, much appreciated!
But if modification of tone encodes additional information, wouldn't we need to count that as additional bits?
So if 'You need a taxi.' and 'You need a taxi?' are two different things, I don't think we can just skip punctuation when measuring the bits of information in a sentence.
I linked the paper in the OP. Check page 7 - it shows the formulae they're using.
I'll illustrate the simpler one. Let's say your language allows five syllables, with the following probabilities:
If you apply the first formula, here's what you get:
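(The probability table didn't survive in this thread, so here's a sketch with hypothetical syllables and made-up probabilities; the first formula is just the Shannon entropy.)

```python
import math

# Hypothetical five-syllable language; these numbers are invented for illustration.
probs = {"ka": 0.4, "to": 0.25, "mi": 0.15, "ru": 0.12, "se": 0.08}

# First formula: Shannon entropy, H = -sum(p * log2 p), in bits per syllable
entropy = -sum(p * math.log2(p) for p in probs.values())
print(round(entropy, 3))
```

For comparison: if all five syllables were equally likely, the entropy would max out at log2(5) ≈ 2.32 bits per syllable; skewing the probabilities pulls it below that.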
Of course, natural languages allow way more than just five syllables, so the actual number will be way higher than that. Also, since some syllables are more likely to appear after other syllables, you need the second formula - for example if your first syllable is "sand" the second one might be "wich" or "ing", but odds are it won't be "dog" (a sanddog? Messiest of the puppies. Still a good boy.)
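To show why the second formula matters, here's a sketch comparing plain entropy with conditional entropy on a tiny hypothetical syllable stream (the real paper estimates these from large corpora):

```python
import math
from collections import Counter

# Hypothetical syllable stream, invented for illustration.
sylls = "sand wich sand ing sand wich dog house dog house sand wich".split()

unigrams = Counter(sylls)
bigrams = Counter(zip(sylls, sylls[1:]))
n_uni = len(sylls)
n_bi = len(sylls) - 1

# First formula: entropy over single syllables, ignoring context
h_uni = -sum((c / n_uni) * math.log2(c / n_uni) for c in unigrams.values())

# Second formula: conditional entropy H(next | previous)
#   = -sum over pairs (x, y) of p(x, y) * log2 p(y | x)
first_totals = Counter()
for (x, _), c in bigrams.items():
    first_totals[x] += c

h_cond = -sum(
    (c / n_bi) * math.log2(c / first_totals[x])
    for (x, y), c in bigrams.items()
)

print(round(h_uni, 3), round(h_cond, 3))
```

Conditioning on the previous syllable drops the entropy a lot, because "sand" is usually followed by "wich". That's why the second formula gives a lower, more realistic bits-per-syllable figure than the first.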
ASCII is extremely redundant - it uses 8 bits per letter, but if you're handling up to 32 graphemes then 5 bits is enough. And some letters won't even add information to the word, for example if I show you the word "dghus" you can correctly guess it's "doghouse", even if the ⟨o⟩'s and the ⟨e⟩ are missing.
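To put numbers on that redundancy, here's the arithmetic for "sandwich" (the word from the original question):

```python
import math

word = "sandwich"
ascii_bits = len(word) * 8                   # 8 letters * 8 bits = 64 bits in ASCII
bits_per_letter = math.ceil(math.log2(26))   # 26 letters fit in 5 bits (2**5 = 32)
fixed5_bits = len(word) * bits_per_letter    # 40 bits with a fixed 5-bit letter code

print(ascii_bits, fixed5_bits)
```

Even a 5-bit code only gets "sandwich" down to 40 bits; reaching the paper's ~14 bits requires exploiting letter and syllable statistics, not just a smaller code table.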