Is it possible to quantify the number of words in a language?

People like counting.
People like comparing things.
People like to marvel at the supposed uniqueness of English.
Those three trends are exaggerated even further on the internet.

So it’s no wonder that there are websites like Global Language Monitor that tout the number of words in English as some precise figure (1,013,913 and growing at 15 a day), as compared to woeful 2nd-place finisher Mandarin.

The problem is that this is complete and utter nonsense, crap, b.s., ridonkulosity, bushwa, bubbe-meises.*

* – Keep this in mind as we move on.

The problem with trying to quantify the number of words in a language is that there is no precise way of defining the two most important things in that sentence – words and language.

What is a word?

What, exactly, counts as a word? We have a general sense (dog is a word, bnick is not), but the challenge of really figuring out what counts as a word is highlighted by some of the examples in the sentence above, beginning with nonsense.

1) Morphology
Does nonsense count as a word? Or is it the same as sense?
What about dog and dogs?
Or dog and hot dog?
How many words is flame, flames, inflame, inflammable, flammable?
Or grandfather, great grandfather, great-great-grandfather and so on?

English, like almost every other language, has morphology, a system of building words from meaningful word parts. Loosely, morphology can be broken down into inflectional morphology (run -> runs), derivational morphology (run -> runner) and compounding (with varying degrees of coherence, e.g., cab driver, toothpick), with lots of gray area in between.

There is no way of deciding which of these word forms count as a word in a way that is not completely arbitrary. Lest you think this is a minor factor, these decisions could easily change your answer by close to an order of magnitude, as you can see from the flame or grandfather examples. Almost every word is subject to morphology, and there is no principled way of deciding when the result should be counted as another word.
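Those arbitrary choices aren’t hypothetical; they change the arithmetic immediately. Here’s a toy sketch (the sample text and the lemma tables are invented for illustration, not drawn from any real dictionary) showing how the “word count” of the very same passage shifts depending on whether you collapse inflection and derivation:

```python
# Toy illustration: the "number of words" in the same text depends
# entirely on what you decide counts as one word.
text = "the dog runs and the dogs ran as a runner ran past the hot dog stand"
tokens = text.lower().split()

# Choice 1: every distinct surface form is a separate word.
surface_forms = set(tokens)

# Choice 2: collapse inflection (dogs -> dog, runs/ran -> run).
inflection = {"dogs": "dog", "runs": "run", "ran": "run"}
lemmas = {inflection.get(t, t) for t in tokens}

# Choice 3: also collapse derivation (runner -> run).
derivation = dict(inflection, runner="run")
roots = {derivation.get(t, t) for t in tokens}

print(len(surface_forms), len(lemmas), len(roots))  # 12 10 9
```

Each counting policy is defensible, and each yields a different total for one short sentence; scale that up to a whole language and the totals diverge wildly.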

2) Synonyms, homonyms and heteronyms, oh my!
Crap is a verb. Crap is a noun. Crap means a lie and crap means feces. I guess you can count that all as one word, but what about same spelling and a more radically different meaning, e.g., bank (river) and bank ($)? Or how about same spelling, different meaning and different pronunciation, e.g., desert (sand) and desert (leave)? Or if spelling is your guide, what about different spelling of the same meaning, e.g., advisor v. adviser?

Indeed, almost every permutation of same v. different meaning, spelling and pronunciation can be found among (amongst, wink wink) words:


Some of the interesting many-to-many relationships between meaning, spelling and pronunciation

As with morphology, there is no non-arbitrary way of deciding what counts as a separate word here.

3) Acronyms
Moving on to the next word in our little rant: b.s. Are you counting abbreviations and acronyms in your list, and if so, how? B.S. is pretty conventionalized, but certainly not as much as laser, though more so than POTUS (which depends on whether you work in politics), not to mention the status of EKG, an acronym you certainly hear more often than the words it stands for.

As above, whatever dividing line you select will be completely arbitrary. The number here probably isn’t too high, maybe on the order of tens of thousands, but it serves to highlight another parallel problem, that of:

4) Neologisms
Did you like the word ridonkulosity? I just made it up. Or at least, I thought I just made it up, but it shows up in Google with some 4,000 hits. That was after thinking I had sort of created the novel word ridiculosity (spell check says it isn’t one, but Merriam-Webster says it is).

The fact is that there is no definitive way of deciding whether a new word should count as, well, a word. New entries in the OED or Merriam-Webster are decided by a person, or group of people, according to some general guidelines relating to frequency of use, place of use and so on. These are not guidelines handed down from on high, as much as we revere the Oxford English Dictionary, but are, again, arbitrary. They even vary from dictionary to dictionary, resulting in something like a two-fold difference in the sizes of different dictionaries.

5) Archaisms
Next up, bushwa, a word I didn’t even know until I read this article Keeping It Real on Dictionary Row, where Geoff Nunberg debunks the charlatans at Global Language Monitor, albeit briefly. That’s because the word has been going out of style since about 1950. That’s a relatively recent decline as compared to other words, like emmet or pismire, both words for ant, which went out of use hundreds of years ago.

So not only do we have no concrete way of deciding when to add a word; we similarly have no way of deciding when to remove a word from our list. Given that languages are in a constant state of flux, this creates a moving target wherein the exit criteria should be linked to the entrance criteria, which are themselves arbitrary. So, again, more arbitrariness.

6) Borrowings
Finally, bubbe-meises, my favorite in the list, which is a word in the English dictionary. It is clearly a borrowing, in this case from Yiddish, roughly meaning old wives’ tale but with a bit more of a sense of dismissal. Words are borrowed into English not in a single leap but gradually, at different rates for each word, depending on pronunciation, frequency, semantics and so on.

In counting the words of English, you will have to somehow define yet another cut-off point here when figuring out what to count and what not to count.

7) Specialty Words
And last, but not least (indeed perhaps most, in terms of how it affects your final number), we have the millions upon millions of words associated with different scientific specializations. Not to say that Critical Theory hasn’t come up with its own unique vocabulary, but no one quite compares to chemists and entomologists in outdoing everyone else at word creation.

There are 350,000 species of beetles on this planet, each of which can be given its own name. And that’s just beetles. There are up to 1 billion different species of bacteria. If many of the species in the class Mammalia each get their own word, so too with at least some of the prokaryote kingdom, no?

A similar problem exists with chemicals and all the permutations and combinations that lead to a near-infinite number of possibilities, wherein the only real limits are those of chemistry and not language. How, pray tell, would that work in your word count?

Oxygen, certainly yes. What about Dihydrogen monoxide? Or its synonyms, Dihydrogen Oxide, Hydrogen Hydroxide, Hydronium Hydroxide and Hydric acid? Get to know these chemicals (Facts About Dihydrogen Monoxide), but good luck in figuring out how to count their names.
Clearly there are some tough (and by tough, I mean completely arbitrary) choices to be made in terms of counting words. But what about language?

What is a language?

I speak English. You (probably) speak English. We certainly don’t speak the same precise language in terms of word knowledge. Which one do we use? There are so many different levels at which a language can be defined that it’s impossible to declare a definition of what the limits of any given language are.

First, for a language like English, you have national differences. The language of America will have different words than that spoken in Canada, Australia and the UK, not to mention what people speak in India and Nigeria.

And even within a single country, you have regional dialects that have different lexicons:


The different ways of saying roundabout in America

And on down to the specific person, or idiolect: each of us has our own way of speaking English, with different lists of words in our heads.

If you want to move away from the individual person and try to define the English language that is spoken in the world, it’s not clear what that really means. Is that the sum total of all words across all self-reported English speakers? That’d be a mess.

You may try to go for some principled definition, e.g., the words in all books published in English, but that, too, is problematic for whom it excludes and for the pride of place it gives to literacy, the literary and editors.

Thus, as with the definition of word, you’re stuck with an arbitrary definition of what a language is.

Summary

Without a clear definition of word and without a clear definition of language, you kind of sort of have no practical way of counting anything of anything. And we’re not talking about a level of exactitude within some reasonable margin of error. We’re talking potentially orders of magnitude of difference depending on how you decide.

So, yes, by all means, count the number of words of English and say it’s 1,019,430, so long as you’re comfortable saying that’s +/- 1,000,000 words.



So why all the pissy vitriol on my part?

For starters, one website in particular (mentioned above) has raised ignorance on this issue to new levels of awfulness in the seeming hopes of generating profit. They say, on the issue of word counting:

Though GLM’s analysis was the subject of much controversy at the time, the recent Google/Harvard Study of the Current Number of Words in the English Language is 1,022,000. At the time the  New York Times article on the historic threshold famously quoted several dissenting linguists as claiming  that “even Google could not  come up with” such a methodology.  Unbeknownst to them Google was doing  precisely that.


As if saying the word Google magically makes everything better. The website must not have even bothered reading the NYT article, as it doesn’t try to address a single issue mentioned there. Indeed, the group behind this word-count-of-English project is nothing more than a PR company – they seemingly take pride in elevating the amount of bullshit in our daily lives. It’s hard enough debunking the myths of well-meaning scientists, let alone people purposefully obfuscating the truth to make a buck.

I do appreciate the efforts of Computer Scientists in their endeavors to quantify anything that may need to be quantified. Indeed, there are certain branches of linguistics where exact answers aren’t obtainable and we must be okay with approximations and probability distributions over possible answers. I get it and understand the concept behind uncertainty and quantifying uncertainty.

At a certain point, though, there needs to be some recognition that the answer you’re providing is not meaningful in the sense that human beings would consider meaningful. So by all means, use these methods if you need an estimate on the amount of memory you’ll need in some program that indexes “all the words of English” but don’t pretend that you have calculated the “number of words in English” in any human sense of those words.

Don’t get me wrong: I’m all for finding the seed of truth in things that are otherwise considered garbage science. It’s actually sort of a little hobby of mine to revisit debunked ideas and mine them for interesting truths.

But this nonsense, crap, bushwa, b.s., bubbe-meises, I simply can’t stand.


Why wasn’t English replaced by French during the Norman Conquest?

War is an annihilator: the winner sweeps out everything, including people. So how did English survive the Norman Conquest? There are actually two very general and hugely complex questions involved here, not one:

(1) What were the specific sociolinguistic conditions in medieval Britain that allowed the maintenance of English?
(2) Why do languages become replaced by other languages in the first place?

I’m going to give a brief survey of the answers to number two first, since that will make the situation in Britain clearer.

Why do some languages replace other languages?

Languages, like economic systems, are self-organizing systems in the sense that properties of the system arise independently of any directing authority. At no point in the history of English has any government had to tell English speakers: “mark plural nouns with an -s suffix”. That just happened on its own as the outcome of the breakdown of the much more complicated inflectional system of Old English. Likewise, people’s choices to use particular languages, and indeed varieties of the same language, arise from countless decisions on their part based on the preexisting conditions around them, as individuals. Speakers in some sense use language as a kind of communicative currency, the value or utility of which varies depending on many independent and interdependent factors, including at least the following seven, in roughly decreasing order of importance:

(1) numerical demographics:  how many people are already using a particular kind of speech?  English, Spanish, Hindi and Mandarin are all languages that have huge numbers of speakers (200m+), and so there will always be some kind of ‘market’ for people speaking their languages.

(2) cost-benefit analysis:  is the language useful to create the kind of life one wants or needs to lead? Some languages are spoken by people with access to particular technologies or particular kinds of advantageous economic systems.  German today has far fewer speakers than either Hindi or Bengali, but there are more people in Europe, America and East Asia actively trying to learn German than Hindi or Bengali.  For most people, the cost of learning Hindi is too high despite its higher overall speaker-base.

(3) presence or absence of competing languages:  is there already another language that can achieve the same goals? Until the early modern period, when governments and societies began to invest in their own languages, all scholastic infrastructure was couched in Latin, and so that language had no serious competitors in Europe as a language of science and technology.

(4) geospatial distribution of languages: how widely is a language used? At around 800m speakers, Mandarin has almost double English’s 450m, but almost all those speakers are found exclusively in China, while the infrastructure and speakers of English are now found almost everywhere.

(5) domains of use: languages are used for specific purposes, and sometimes speakers of one language use another for a specific purpose, thereby in effect losing that domain for their own language. Again, English today is used in Europe by Dutch people speaking to Italians, Germans or Swedes, or even to other Dutch people. One famous German linguist of my acquaintance once told me that he feels uncomfortable talking about linguistics in German, his native language, because he simply never does it — the fact that he has little experience doing so means that to start now would involve considerable effort on his part.

(6) prestige or symbolic value of a language:  like peacocks with their tails, people use language as a symbolic way to make nonlinguistic statements about their worth or value as an individual, and as a community.  It comes in two varieties:

  • internal prestige: how a particular language community values (or not) its own language both as a medium of communication and as a source of identity;
  • external prestige: how people outside the language community value (or not) that language community’s language.

(7) [Marginally, and relevant only for adults:]  how closely related is one language to another? Someone who learns Spanish, French and Italian is much less impressive than someone who learns Mohawk, Hebrew and Hausa, because the latter languages have almost no lexicon in common, while the former have most of their lexicon in common, not to mention grammatical differences or similarities.

This set of criteria is an oversimplification; I have not discussed (nor have space to discuss) how the criteria relate to each other or what others might be relevant. Anyway, these are the big ones.

What was the sociolinguistic environment of early medieval Britain?

When the Normans defeated the Saxons at Hastings in 1066, they arrived in a country that had already effectively witnessed a wholesale linguistic replacement:  that of the Celtic-speaking Britons some five or six centuries before. English had also recently sustained rather intense conflict with Scandinavians of various types and some but not all of these invaders had come to settle permanently in the Danelaw.  Let’s tally up what the new Norman aristocracy faced:

(1) The Normans came in the tens of thousands, not hundreds of thousands, when they secured power for themselves. The indigenous English by comparison probably numbered at least two and a half if not three million people. Even if you double or triple the highest estimate of William’s soldiers at Hastings, around 12k, and assume that additional bureaucrats followed in their wake, the number of Norman French speakers in England was still totally dwarfed by the indigenous population.

French mounted soldiers at Hastings from the Bayeux Tapestry: Not Numerous Enough

(2) It is undeniable that pre-Conquest English aristocrats now had very good reasons to become fluent in French to maintain and add to their power base (though many lost their lands anyway).  The vast majority of the peasantry (95% of the population) however never had any such need:  the people they interacted with were not the nobles, but the people who oversaw the nobles’ lands, who continued to be primarily English speaking. Sociolinguistically speaking, England was not entirely unlike the exploitative colonies of 19th-20th century Africa that Salikoko Mufwene has spent decades describing in which an interface class of overseers ran the government for the elites. In this case, many upper elites didn’t even spend much time in England, choosing rather to pursue petty feuds in France.

(3) In most of Britain outside Wales and Scotland (which were at that point not under the English crown anyway), there were essentially no alternatives to speaking English:  every community, from top to bottom, was English-speaking.  This is even true in the Danelaw, where the Scandinavians had largely assimilated to the local population giving us words like skirt, sky, and the pronoun they.

(4) Since England was almost uniformly English-speaking, there were no pockets of other spoken languages that could possibly have competed with English until the Normans arrived. (Cornish is an exception that proves the rule.) Although English had dialects and diversity within it, from the perspective of the man on the street making a choice about English or French, it was a monolith.

(5) English before the Conquest was one of the few vernacular languages in Europe that actually had a fairly vibrant literary culture:  Anglo-Saxon kings were mostly literate, and some of them like King Alfred the Great actually made translations of Latin classics into Old English.  This was not true of most languages on the continent. After the conquest, English more or less completely lost its status as a chancellery language to French, but French was instead still competing with Latin as an alternative language in the domain of high literature.

Statue of King Alfred the Great at Winchester.  Alfred translated Boethius’ Consolation of Philosophy into Old English.

(6) In the 11th century, French did not have the immense internal or external prestige that it was to acquire in later centuries.  Indeed, French did not even become the official language of France until 1539, when King Francis I made it the official language of the court. (Just think about that fact for a second.) As such, it is not surprising that the English, especially among the illiterate peasantry, would probably have seen the use of Norman French as the language of occupation rather than of a self-evidently superior conqueror.  French probably had high external prestige in comparison to English’s not inconsiderable (but by no means high) internal prestige.

(7)  Lastly, although French is an Indo-European language like English, it was nowhere near as similar lexically or grammatically as the Norse spoken in the Danelaw — and that Norse soon disappeared.  This criterion is in any event relevant only for adults trying to learn French, since children can acquire any human language with ease.

Summary:  in most respects, the fate of French was sealed at its arrival:  it simply did not have enough traits, whether demographically, economically or otherwise, in its favor to do more than influence the lexicon of English. It is doubtful that French had a significant effect on English grammar, since Old English was well on its way to losing case-inflections already.

 

As native speakers, how many rules do we not know but still follow?

Pretty much all of them.


One of the first things you realize, when you study linguistics, is that language—every language—is filled with an amazing amount of complexity and regularity to the point of defying description. And I mean that literally. There is not one single natural language that has been completely formalized at all levels of description in any way.

Think about that for a second.

Even English grammar, the ins and outs of which have been studied by thousands of people for centuries, has not been completely described. You can’t go anywhere and pick up a book or a computer program that contains all the rules of English. Thus, there is no documented list of the rules an English speaker is supposed to know, and so most native speakers don’t really “know” most of the rules of English.

So what are English teachers teaching you in school and what are William Strunk Jr. and E.B. White (Strunk & White, The Elements of Style) getting all in a huff about?

Elwyn White’s gonna get all up in ur grammar

The rules people talk about—in blogs, in English classes, in ESL classes and so on—are:

1) Rules that are in the process of changing, e.g., How do I stop being annoyed by people using literally as an intensifier?

2) Rules that carry inordinate weight as social signals (e.g., gonna, or stranded prepositions: Where you at? or even Where are you from? instead of From where are you?)

3) Rules that are particularly confusing to newcomers (Adjective order for instance)

4) Rules that are cool and/or funny.

The fact of the matter is that almost everything we know about our native languages is what’s called implicit knowledge. Stuff we don’t know that we know, or stuff that we can’t really describe, but we can do anyway. Like maybe riding a bike, or walking.

So what are some examples?

Let’s start with one of the most basic examples I can think of:

Phonetics: How do you pronounce the letter p?
Easy, right? Well, there are actually a number of different ways p is articulated in English.

Compare, for example, spot and pot. They sound the same to an English speaker, but put your hand an inch from your mouth when you say the two words and you’ll notice a much bigger puff of air for the p in pot.

Indeed, in other languages, they’re two completely different sounds.
Aaaannnnddd if you cut the s off the word spot you’re left with something that actually sounds like bot, not pot.

Native English speakers never make a mistake here, but don’t even know they’re doing this complicated articulatory gymnastics, saying p differently in different contexts; “it’s just p,” we think.

The same holds for pretty much every phoneme (speech sound), a phenomenon called allophony: the phonemes t and k also adhere to this aspiration rule; t is also involved in a flapping rule (the t in duty is d-like, which speakers may actually know because: doody!!!!!!!); l is different in the onset vs. the coda of a syllable (look vs. cool), and so on and on and on.
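The spot/pot pattern is regular enough that a few lines of code can mimic it. This is a deliberately oversimplified sketch that treats words as plain letter strings and only checks for a preceding s; real aspiration also depends on stress and syllable structure, which it ignores:

```python
# Minimal sketch of the English aspiration rule: voiceless stops
# /p t k/ are aspirated, except immediately after /s/ (spot, stop, skit).
# Simplification: ignores stress and syllable position entirely.
def aspirated(word, i):
    """Is the voiceless stop at index i of `word` aspirated?"""
    if word[i] not in "ptk":
        return False
    return i == 0 or word[i - 1] != "s"

print(aspirated("pot", 0))   # True: aspirated [p]
print(aspirated("spot", 1))  # False: plain [p], closer to "bot"
```

Native speakers apply this rule flawlessly without ever being able to state it, which is the point: the rule is real, just implicit.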

Indeed, we can go through all the levels of linguistic description: phonetics, phonology, morphophonology, morphosyntax, syntax, semantics, pragmatics and pick out some of the most basic rules and pinpoint discrepancies in explicit knowledge.

Phonology: What makes something sound like an English word?

There are dozens, if not hundreds, of rules governing where sounds can go in a word, i.e., phonology, that most speakers are not aware of. If I ask you whether zbashk or sneeld sounds more like a real word, every native English speaker would answer the same way, and Russian speakers would answer differently, but without much insight into why. (I say this not because native English speakers are ignorant, but because even linguists haven’t figured out the precise details of how people make word-likeness judgments.)

Morphophonology: How does pronunciation change when you add affixes to a word?

The stress in parent is on syllable 1; add the suffix -hood and it’s still on syllable 1 (parenthood), but add -al and it shifts: parental. Why the difference?

Another one: you may know why some words take un- and others in-, as in unable but incapable (hint: it’s primarily word origin). You might even have noticed that in- assimilates to the following sound (e.g., illegible, impossible, irregular). But why not umbelievable? Or ullimited?

The answer has to do with whether there are serial levels, or strata, of processing in morphophonology—a debate still raging today—with in- being in an earlier stratum (before consonant assimilation) and un- being in a later stratum.
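The surface pattern of in- assimilation, at least, is simple enough to sketch in code. The rules below are a toy approximation of the description above; they ignore word origin and plenty of exceptions, and exist only to show that in- changes while un- never does:

```python
# Toy sketch of nasal place assimilation in the prefix in-,
# which applies before the consonant rules; un- attaches in a later
# stratum, so it surfaces unchanged (unbelievable, never *umbelievable).
def add_in_prefix(base):
    """Attach in- with place assimilation of the nasal."""
    first = base[0]
    if first in "bpm":   # labial: in- -> im- (impossible)
        return "im" + base
    if first == "l":     # lateral: in- -> il- (illegible)
        return "il" + base
    if first == "r":     # rhotic: in- -> ir- (irregular)
        return "ir" + base
    return "in" + base

def add_un_prefix(base):
    """un- attaches after assimilation, so it never changes."""
    return "un" + base

print(add_in_prefix("possible"))    # impossible
print(add_in_prefix("legible"))     # illegible
print(add_un_prefix("believable"))  # unbelievable
```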

Morphosyntax: When do you use accusative case pronouns?


To provide an example of something that we think we know, but we actually don’t: When do we use the accusative form of pronouns in English (me, him, her, them)? When it’s the object of the sentence, Object Pronouns Grammar Rules, right? Well, not quite. Consider the following:

Q: Who wants cake?
A: Me

Me and John went to the store

She thinks I am smart
She considers me to be smart.
She considers me smart.

The rules of case assignment just got real…. complicated. So real that linguists still aren’t quite sure how it works.

Syntax: What is English word order?

How about something as basic as can be: word order?
English is subject-verb-object, right?
Well, that rule I don’t like so much. (interjection, object, subject, verb, adverbial phrase). You get that?

Semantics: How do you interpret words like some and every?

Semantics, I know the least about, but consider these two sentences:

There is someone who loves everyone.
Everyone is loved by someone.

The second sentence can mean what the first sentence means, but it can also mean that everyone has some person who loves them, possibly a different somebody for each.

(Lame, I know, but like I said, I don’t really know semantics (: )

Pragmatics: Who gets talked about next in a discourse?

Conversation is complicated. If you actually listen to recordings of your conversations, it’s a wonder that anyone understood anything. One of the really hard parts is reference resolution: when you say he or she or her or his, who the heck are you talking about?

Well, one “rule” that I explored with a colleague is that certain verbs implicate certain arguments as the topic of conversation. So, in John annoyed Tom because he … you presume he refers to John whereas John admired Tom because he … means you’re more likely to then talk about Tom.
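That verb-driven bias can be caricatured in a few lines. The two-entry bias table below is purely illustrative; the real effect is measured with experimental norms across hundreds of verbs, not a hand-written dictionary:

```python
# Toy sketch of "implicit causality": some verbs bias a following
# ambiguous pronoun toward the subject, others toward the object.
# The bias table is illustrative only, not experimental data.
BIAS = {"annoyed": "subject", "admired": "object"}

def likely_referent(subject, verb, obj):
    """Guess who 'he' refers to in 'SUBJECT VERB OBJECT because he ...'."""
    return subject if BIAS.get(verb) == "subject" else obj

print(likely_referent("John", "annoyed", "Tom"))  # John
print(likely_referent("John", "admired", "Tom"))  # Tom
```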

Without these little rules of conversation, we’d be lost when talking to each other. But they are rules that you aren’t generally taught and they are rules we generally aren’t aware of.

And they are rules that we haven’t even pinned down particularly well. Reference resolution is one of the hard problems for natural language processing.

 

Wait, Dave, are you talking about John or Tom?

Indeed, if we knew the rules of English, it wouldn’t be so hard to program a computer to follow them. But we don’t, so we can’t.

 



So, in this small selection of rules, I actually tried to pick the most mundane things I could: How do you pronounce p? What is English word order? When do you use accusative case? How do you figure out who pronouns refer to? How do you add suffixes to a word?

Not the funniest or trickiest, but the ones that show that even the most fundamental aspects of grammar, the rules that allow us to communicate in even the most basic ways, fly below the radar of our awareness.


(This post is an appropriation of Mark Ettliger’s original answer. I claim no part of it, and I reposted it for the spread of beneficial knowledge) 

The origin of language: what’s underneath the tree of languages.

The question of the origin of language is by far the most interesting and probably the most mysterious in linguistic research. The answer to this question is complicated by a number of facts, some of which are obvious and some of which are not. There are many attempts out there chasing the so-called Proto-Language, but they all remain shots in the dark. The present explanation stays closely aimed at this question and provides a research-backed account of the issue by stitching together linguistic, biological, and evolutionary facts into the bigger picture of language origin. Before we get to when language evolved, we first need to know what evolved.

THE FOSSIL RECORD

Let’s start with the obvious fact: language leaves no fossil record.  Ancient peoples (including other hominids) may have had spoken language, but if a language dies out completely, as theirs certainly have, we have no idea how it functioned and therefore whether it was like modern languages. All we have to go on is physical remains of actual bodies.  From the perspective of the fossil record, here is what we know happened:

A. Other hominid species had similar but not identical vocal tracts — e.g. Neanderthals had hyoid bones like we do, though this is only a necessary and not a sufficient factor in human vocalization.


B. Earlier hominid species had brain cases of both increasing size and increasing complexity. Unfortunately, we can derive very little information about how language functioned from this fact alone — almost nothing at all, in fact. It is even disputable that an enlargement in brain mass and/or the development of particular regions of the brain has direct implications for particular functions of the brain, which is in all cases of course missing.

A Neanderthal hyoid bone (replica)

WHAT IS LANGUAGE?

Evolution essentially never operates by great leaps but instead usually operates by small changes that accumulate over time. I think this is the key fact that we must consider: how could something as seemingly complex and interconnected as language evolve on its own? I’d argue that the question seems intractable because most people who’ve made proposals before have not fully articulated what, exactly, evolved.

I think we can do this by breaking down the human language faculty into its component characteristics. In the 1950s and 60s, Charles Hockett articulated what he called the Design Features of Language, a set of criteria that distinguish different kinds of animal communication systems from each other. This was important because it anchored the study of the origins of human speech firmly in a biological context, in which we could compare the properties of human language with those of other species. The design features relevant for humans include:

  1. The use of a vocal-auditory tract — humans emit noises from their bodies to communicate, and do not emit chemical trails or flashes of light as other species do.
  2. Broadcast transmission and directional reception: human speech spreads through the air in all directions, and is received by anyone within the field of broadcast.
  3. Transitoriness — human speech fades rapidly and can only be received roughly at the time of transmission.
  4. Interchangeability — a human has the ability to both send and receive the same signal.  This is not true of some kinds of insect communication, for example.
  5. Total feedback — a human has the ability to hear oneself.
  6. Specialization — human language sounds are specialized for the use of language and do not in general have any other functional use.
  7. Semanticity — human speech signals can be matched with specific, predictable meanings.
  8. Arbitrariness — the relationship between the semantic content of the speech signal and the acoustic form of that signal is essentially arbitrary (onomatopoeia are the exceptions that prove the rule).
  9. Discreteness — human speech signals can be broken down into discrete units that do not bear any meaning in and of themselves (namely, phonemes).
  10. Displacement — human speech can be used outside of the contexts for which the signal was originally designed (all kinds of natural language negation might fall under this phenomenon).
  11. Productivity – language can be used to create new and unique meanings that have never been uttered before.
  12. Traditional transmission — the actual words of human languages are not innate, but are rather transmitted from one generation to the next via culture.
  13. Duality of patterning — meaningless units of sound are combined to create meaningful words.


Some of these criterial features of language are related to each other; others are independent.  What is important to recognize, though, is how many of these criteria are found in the communication systems of other species.  For example, the highly complex system of alarm calls used by vervet monkeys to warn the troop of predators involves many of these features, including the vocal-auditory tract, transitoriness, broadcast transmission, semanticity, and even arbitrariness, since the call for an eagle has no particular iconic similarity to an eagle, any more than the call for a leopard does to a leopard (the uniqueness of the arbitrariness of human languages has been exaggerated by some).
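One way to make such comparisons concrete is to tabulate which of Hockett’s features each system exhibits and take set differences.  Here is a minimal sketch in Python; the vervet assignments follow the paragraph above, and everything else is simplified for illustration, not a claim about real primatological data:

```python
# Compare communication systems against Hockett's design features
# using simple set operations.  The human set lists the thirteen
# features above; the vervet set follows the discussion of vervet
# alarm calls, simplified for illustration.

HUMAN_FEATURES = {
    "vocal-auditory", "broadcast", "transitoriness", "interchangeability",
    "total feedback", "specialization", "semanticity", "arbitrariness",
    "discreteness", "displacement", "productivity",
    "traditional transmission", "duality of patterning",
}

VERVET_FEATURES = {
    "vocal-auditory", "broadcast", "transitoriness",
    "semanticity", "arbitrariness",
}

# Features shared by both systems, and those unique to humans:
shared = HUMAN_FEATURES & VERVET_FEATURES
human_only = HUMAN_FEATURES - VERVET_FEATURES

print(f"shared features: {len(shared)}")      # 5 of the 13
print(f"human-only features: {len(human_only)}")
```

The point of the exercise is that the vervet set is a proper subset of the human set: what separates us is not any single magic ingredient but the accumulation of the remaining features.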


TIMING

I think Hockett’s list of criterial features gives us a much better starting point than trying to intuit language-y things from the fossil record.  More importantly, it allows us to compartmentalize the development of particular aspects of language in particular parts of human evolution, and does not require us to believe all facets of it exploded into being all at once.  To get the relative timing of these events, though, requires us to look at these Hockettian criteria through cladistics, the study of branching relationships in species, languages, etc.  In cladistics, the principle of cladistic parsimony suggests that if two organisms Y and Z descend from an ancestor X, and both descendants have the same evolutionary feature, then we must assume that that feature was also present in their ancestor.  Using this cladistic principle we can trace the evolution of specific features back quite a long way:

  • The use of the Vocal-Auditory tract for communication has probably been with us primates for tens of millions of years, since the origin of primates in fact, if not before.  The same goes for the other early features (2)-(5).  That takes us back at least 65 million years, to this fellow, Notharctus:

 

  • Specialization, Semanticity and Arbitrariness are features that are actually found in nonprimate communication systems, such as those of birds.  This means that these features have arisen independently in different genera of animals, and so are probably easier to acquire and therefore (?) earlier than other, more sophisticated criteria.  I will hazard a guess that we had these by the time our line, close to the apes, broke away from Old World Monkeys about 25 million years ago.
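The parsimony principle behind these inferences can itself be sketched in a few lines of code.  This is a toy illustration in Python, using made-up feature sets rather than real comparative data: a feature shared by all sampled descendant lineages is attributed to their common ancestor.

```python
# A toy sketch of cladistic parsimony: a feature present in every
# sampled descendant lineage is attributed to their common ancestor.
# The feature sets below are illustrative placeholders, not real data.

def infer_ancestral_features(descendant_features):
    """Return the features that parsimony attributes to the common
    ancestor of the given descendant lineages."""
    feature_sets = list(descendant_features.values())
    return set.intersection(*feature_sets)

# Hypothetical Hockettian features for two lineages that split
# roughly 25 million years ago:
lineages = {
    "Old World monkeys": {"vocal-auditory", "transitoriness", "semanticity"},
    "apes and humans": {"vocal-auditory", "transitoriness", "semanticity",
                        "discreteness"},
}

ancestor = infer_ancestral_features(lineages)
print(sorted(ancestor))
# -> ['semanticity', 'transitoriness', 'vocal-auditory']
```

Note that this is the weakest possible form of the inference: shared features are pushed back to the ancestor, while features found in only one lineage (like discreteness here) are assumed to have arisen later along that branch.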


Where things really get tricky is identifying when the last five criteria arose, those which truly set us apart from other animal communication systems:

  • Discreteness basically boils down to the ability of the brain to coordinate the vocal tract to articulate segments of sound, and to treat those segments consistently as distinct from other, possibly similar segments.  Human children start learning to distinguish different speech sounds by ear essentially from birth, and begin to control and articulate different parts of the vocal tract in the first year of life (the babbling stage).  Probably the late Australopithecines had some extremely rudimentary ability to articulate different kinds of speech sounds consistently on target, as it were, but we cannot know for sure whether this is true, or when such phones became categorical phonemes in the modern sense (the distinction between phones and phonemes is probably late in the evolution of speech).  I will assign this a (somewhat arbitrary) age of 2-3 million years ago.
  • Once our ancestors had discreteness, they were primed to evolve Productivity (because they could suddenly create many more words than before) and the Duality of Patterning (since discreteness largely implies that sounds are distinct from meaning).  I really don’t know when this might have evolved, but we would want to look for evidence that early hominids’ interactions with their environment were becoming more complex, both in tool use and in the manner in which they processed food.  Homo erectus arises on the scene roughly 1.8 million years ago (either in east Africa or, less likely, perhaps in what is now the Middle East, as evidenced by the finds in Dmanisi, Georgia).  We know that H. erectus had more advanced tool designs than the earlier Australopithecines, that they discovered how to manipulate fire, and that they learned how to cook food.  To my mind, this increasingly sophisticated manipulation of their environment could have necessitated the kind of proto-language we have been talking about here.  I will guesstimate an age based on the earliest evidence of fire being used for cooking, splitting the difference between the oldest and youngest suggested ages: 900k to 1 million years ago.
  • The most sophisticated aspects of human language on Hockett’s list are traditional transmission and displacement, the ability to use language outside of the context for which it was envisioned.  These facets of language use would be necessary for the development of idioms and of any lexicon larger than a few hundred words.  We know that displacement is one of the features that human babies learn somewhat late, around age 3-4 if I recall correctly, so it was probably very late to evolve.  Displacement would also have been necessary for any form of negation, to talk about what is not happening, as well as for any tense system that distinguishes past, present and future.  Combined, these two features would allow something like the kind of discourse possible in even the most technologically primitive societies today.  Because we have (very) tentative evidence of art among Neanderthals (though nowhere near as much as from the earliest Homo sapiens), and other evidence that Neanderthals could plan for the future, I will take a shot in the dark and say that something like the earliest form of modern language appeared at roughly the time H. neanderthalensis and H. sapiens speciated, around 450-500k years ago.


So, there we have it: evolutionarily modern human language might have arisen about half a million years ago.

I want to stress that much of this timeline is highly speculative.  My specialization is linguistics, not human evolution (although I do read quite a bit about human evolution in my spare time).  I would like to invite any specialists in human evolution to cross-check their understanding of the fossil evidence with what I have articulated here.

So, did language evolve only once?

If we view language evolution as a complicated multistage process, the question becomes moot, since ‘language’ is not one thing but many.  At each stage of human language evolution, some of our ancestors developed a more advanced form of communication, and others did not.  Half a million years ago, when there were probably at least four or five different hominid species still extant (H. sapiens, H. neanderthalensis, H. denisova, H. floresiensis, and H. erectus), each of these hominids probably had some form of communication system more advanced than any nonhominid communication system.  Where we draw the line between proto-language and full-on language is more a matter of degree than a categorical fact.

Source:

Thomas Wier

The hardest aspect of learning English as a second language.

We’ve all had our issues with learning English, or any language for that matter, because some aspects of a language are just hard to learn or deal with. The maddening things about learning English are numerous, but without a doubt, the inconsistency between spelling and pronunciation stands out as the most salient. Every foreigner will know instinctively what I’m talking about. It’s the thing that makes you want to rip your hair out.

Let me tell a story about this.

So, you’re learning English when you come across the word ear. It’s pronounced ee-ur. Got it! That’s easy. Ee-ur. I can say that. Then you see dear and you think dee-ur. Same thing for fear: fee-ur!

I’m loving this!

Then you encounter this: bear. Now, you’ve never heard this said aloud. You’re reading about it. You casually mention to someone that you’re afraid of grizzly bears:


But of course, you don’t say grizzly bear: you say grizzly BEER!


Now, your interlocutor looks at you like you’re speaking Ancient Martian. You don’t understand why. After all, what you’ve said is perfectly clear. You wonder how someone in America can grow up not knowing what a grizzly bear is.

You’ve never heard of a grizzly BEER?

You don’t understand why he’s looking so puzzled. Then he says, with a mixture of amusement and bemusement: “You’re not trying to say grizzly BAIR, are you?”

It rhymes with AIR???

You wonder if he’s pulling your leg.

Are you sure it’s not grizzly BEER? You say EE-UR, and DEE-UR, and FEE-UR, so why not grizzly BEE-UR?

But every time you mention grizzly bear, you send him into a fit of hysterical laughter. You go home dejected.

You think about it that night.

English is stupid. I still think it should be grizzly BEER.

 

Are there any Tritransitive verbs?

Short Answer: Yes, but not in English. 

But first, what do we mean by valency, anyway?

To answer this question in more detail, it’s important to distinguish different kinds of grammatical valency.  As in chemistry, grammatical valency is a measure of a kind of asymmetry between a ‘nucleus’ and potentially several satellite words that are structurally dependent on that nucleus.  Contrary to popular understanding, almost any part of speech can serve as a nucleus around which other words will (optionally or obligatorily) surface: verbs, nouns, prepositions, adjectives, etc.  The most common part of speech to show these effects is of course the verb, and verbs also show the largest variation in the kinds of other parts of speech that can act as satellites.  Typically, when we talk about transitivity, then, we are talking about how many noun arguments a verb takes:

  • Intransitive verb (V + N):  John ages slowly.
  • Monotransitive verb (V + N + N):  John kicked the ball.
  • Ditransitive verb (V + N + N + N):  John gave Mary a book.
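This classification can be stated mechanically as a count of noun-phrase arguments, subject included.  A toy sketch in Python; the verb frames are illustrative, not entries from any real lexical database:

```python
# Classify verbs by syntactic transitivity: the number of noun-phrase
# arguments (subject included) that the verb licenses.  The frames
# below are toy examples, not entries from a real lexicon.

TRANSITIVITY_NAMES = {
    1: "intransitive",    # John ages slowly.
    2: "monotransitive",  # John kicked the ball.
    3: "ditransitive",    # John gave Mary a book.
    4: "tritransitive",   # attested only rarely, and not in English
}

def classify(np_argument_count):
    """Map an NP-argument count to a traditional transitivity label."""
    return TRANSITIVITY_NAMES.get(np_argument_count,
                                  f"{np_argument_count}-valent")

verb_frames = {"age": 1, "kick": 2, "give": 3}
for verb, count in verb_frames.items():
    print(f"{verb}: {classify(count)}")
```

Of course, the rest of this answer is about why such a neat mapping breaks down in practice: labile and ambitransitive verbs would need a range of counts rather than a single number.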

However, identifying discrete categories of verbs based on transitivity is not always easy, because (in English, anyway) there exist a number of verbs which seem to fall between the cracks, such as labile verbs and ambitransitive verbs.  Labile verbs are verbs which may optionally allow a certain number of arguments:

  • John ate pizza — in fact, he ate all day long

Here, in both clauses the semantics are unchanged, but in the first the semantic argument — the thing actually being eaten — is made explicit, while in the latter it is not.  Like labile verbs, ambitransitive verbs also optionally allow varying numbers of arguments, but unlike labile verbs the way in which the semantics is encoded as subject or object is different: 

  • The cup broke — actually, John broke the cup.

Here, what is the subject of one clause is the object of the other.  What such examples illustrate is that the semantics of a verb may remain constant — there is a breaker and a thing being broken in both cases — but the syntax of the verb may change.  Syntactic encoding, in other words, is autonomous from semantic meaning.  

So, what about tritransitive verbs?

So, here is where the debate about tritransitive verbs comes in.  If we’re using the number of syntactic noun phrases, as opposed to semantic arguments or other kinds of phrases, as our criterion, then English has no tritransitive verbs.  All the other answerers for this question have provided examples that are either not obligatory (and therefore syntactic adjuncts) or which are not syntactic noun phrases, or both.  For example:

  • (1) John traded Jane an apple *(for an orange).
  • (2) I bet you two dollars (that it will rain).

Sentence (1) is actually just a ditransitive verb plus a prepositional phrase: that phrase, though semantically obligatory and syntactically required to be headed by ‘for’, is not a noun phrase.  Sentence (2) likewise is ditransitive with an optional clausal adjunct: if you remove the subordinate clause, the sentence is still grammatical, and the clause can even be replaced with an entirely different and equally optional one: I bet you two bucks (because there’s no way the Cubs will win the World Series this year).  This distinction between syntactic transitivity and semantic transitivity is even clearer with certain verbs whose semantically obligatory argument belongs to an open class.  For example:

  • John put the book *(on the shelf / in his suitcase / beside the coffee table / down / away ).

In this sentence, the location of the book is obligatory — *”John put the book” by itself is ungrammatical — but the exact kind of location is not specified by ‘put’.  As long as there is some location specified there, whether that be a prepositional phrase or a locative adverb, the verb is satisfied.  

So, do strict tritransitive verbs with only noun phrases exist in any language? 

Yes, they do, though they are somewhat rare.   Most examples that on first appearance look like tritransitive verbs with only NPs actually have one NP that is optional, e.g. French ‘give’:

  • (3)  Je te    le    (lui) donne.

              I    you  it    him  give.1Sg
              ‘I give it to you for him.’

In this example, the NP lui ‘for him’ can be optionally added or dropped as needed, and probably more often than not is not present.  We might call this a tritransitive labile verb.  Like English ‘eat’, it may or may not have that added argument.  Note that this is not an example of differing semantic transitivity: the English phrase would have an identical meaning and number of arguments, but nonetheless *I gave you it him is ungrammatical in English.

A map showing the locations of Abkhaz, Georgian and Svan

There are some stronger cases of tritransitive verbs, and I will give three examples from languages of the Caucasus. In the Abkhaz language, for example, if the base form of the verb is already ditransitive, then an additional argument can be added with either the causative or benefactive valence suffixes (Chirikba 2003: 39):

  • (4)  wǝ-lǝ-z-já-sǝ-r-c˚až˚-wa-jt’

             2sg-her-BENF-him-RELA-1Sg-CAUS-speak-PRES:DYN-FIN
             ‘I shall make you speak with him about it for her.’

Unlike French or English, Abkhaz is a so-called pro-drop language: because the verb is marked for agreement with every argument, no NPs are obligatory for any verb.  However, this example still counts as a tritransitive verb because (a) full NPs could be supplied if needed, and more importantly (b) the verb morphology is not optional, if one wants to indicate that many arguments.

In another language of the Caucasus, Georgian, verbs can likewise indicate four noun phrase arguments at once, despite the fact that the relationship between inflectional morphology and syntactic arguments is not as straightforward as in Abkhaz:

  • (5) მე თქვენ ივანეს        წიგნს           მიგაცემინებთ

            me tkven  Ivane-s       c’ign-s          mi-g-a-c-em-in-eb-t
            1Sg 2pl    John-DAT  book-DAT   PVB-2-PRV-give-TH-CAUS-TH-PL
            ‘I make y’all give John the book.’

One last example is Svan, a Kartvelian language of the Caucasus rather distantly related to Georgian.  In this language, there are double causatives that carry with them the implication of assistance in an act (Boeder 2003: 43):

  • (6a)  Regular causative

               მǝშკიდ         ხაშკა̈ა̈დუნე                           ჭყინტს      ჩა̈ა̈ჟს
               mǝšk’id         x-a-šk’ääd-un-e                      č’q’int’-s      čääž-s
               smith.NOM   3-PRV-forge-CAUS-PRES  boy-DAT    horse-DAT
               ‘The smith makes the boy shoe the horse.’
      (6b)  ‘Assistive’ causative
               ჭყინტ           ხაშკა̈ა̈დუნა̈უნე                                მǝშკიდს    ჩა̈ა̈ჟს
               č’q’int’            x-a-šk’ääd-un-äwn-e                         mǝšk’id-s   čääž-s
               boy.NOM      3-PRV-smith-CAUS-CAUS-PRES smith-DAT horse-DAT
               ‘The boy helps the smith shoe the horse.’

In this example, it is unclear exactly how many arguments are involved; it is clear only that there are at least three.  Because the example in (6b) includes two causative suffixes, each apparently adding something to the sentence, it is not clear whether there is, in this case, a fourth implied argument, with a meaning something like ‘the boy helps (the situation) that the man shoe the horse’.

In any event, the bottom line is that tritransitive verbs with only noun phrase arguments exist, but are rare.

 

This answer was brought to you by  Thomas Wier, PhD in linguistics and assistant professor. 

How did the word “ass” become a word for buttocks?

How does a word denoting an innocent animal come to be used for something as hideously ridiculous as a backside? The story is actually more interesting than you might think. The following explanation comes to you from Oscar Tay, a language teacher and online course developer.

As many would suspect, yes, it comes from the word “arse”, which came from the Old English ærs, “buttocks”. This then stretches back to the ancestor of English (and German, Swedish, and about two dozen others), Proto-Germanic, which had the word *arsoz for that particular part of the body.

It’s even older than that: it comes from the ancestor of not only English, but also Latin, Greek, Russian, Welsh, and Sanskrit, spoken some 8,000 years ago in the steppes north of the Caucasus, east of modern-day Ukraine: Proto-Indo-European. Their word for “tail” or “rump” was something like *ors-. This became orros in Greek and arrash in Ancient Hittite, with the same meaning.

Somewhere along the line, “arse” turned into “ass” in some dialects of English. The two main theories about why this happened are:

  • the “r” just fell off: There are several examples of r-loss before /s/ in English. A rather relevant example is “curse” shifting to “cuss”.
  • people were trying to be polite: This seems funny in retrospect, but it made sense at the time: the word “ass”, with its original meaning of “donkey”, sounded enough like “arse” to be used as a minced oath.

“Ass”, meaning “donkey”, has an intriguing etymology of its own: it comes from Old English assa, which probably comes from the Latin asinus. Unlike “arse”, asinus doesn’t have an Indo-European root.

This was because the original Indo-European people didn’t have donkeys. They (the donkeys) came to Europe from the Middle East sometime in the 2000s BC, well after the Proto-Indo-European period (6000–4000 BC). Donkeys themselves can be traced to the ancient Egyptians, but are ultimately descendants of a wild ungulate still found in Egypt and Somalia.

The word comes from a language spoken in Anatolia or the Middle East. We’re not sure what that language was, but a relative can probably be found in the Sumerian ansu, “donkey”.

The first use of “ass” for “arse” is from the 1800s, but it may date back even further – to Shakespeare’s time, in fact: Shakespeare’s use of wordplay is well-known, and one such example may be from a scene in A Midsummer Night’s Dream, where a character is turned into a donkey and says:

I must to the barber’s, mounsieur; for me thinks I am marvellous hairy about the face; and I am such a tender ass, if my hair do but tickle me, I must scratch.

The character’s name? Nick Bottom.

(For those curious, arce-hoole dates back to the 1400s.)

If you made it this far, please share 🙂