A comet passes the moon. How does it find its way without knowing the law of gravity? It just rolls with the geometry of the universe.
The human brain did not evolve rules that shape language grammatically; instead, grammar rules evolved to fit the brain. This reversal of the ordinary expectation is proposed in a forthcoming article in Behavioral and Brain Sciences titled “Language as Shaped by the Brain,” by two psychologists, Morten H. Christiansen and Nick Chater. (Various drafts are circulating on the web. Here is one.) This theory changes a central question for this blog. The old question asked what changes to the brain occurred to permit syntactic constructions. Was that change some kind of adaptation or a spandrel? Did it just permit recursion, or did it do more? The new approach puts all those questions aside and raises new ones.
The Christiansen and Chater paper addresses one of the deep mysteries of human development: the speed with which children start talking. Nobody doubts that children pick up words from those they hear spoken around them, but the grammar comes with suspicious ease. Children are able to create sentences they have never heard. The standard explanation for the past 50 years has been that children are born with an understanding of grammar, but the alternate proposal is that language has evolved to be easy to learn. The idea is that if you start with a language that has no syntax, one will evolve. One speaker might say red saw I hat and another red hat saw I, but after a few generations the form that is easiest to say and understand will be the standard, and new speakers will find that the language is more easily learned. The relationships expressed come from a pre-existing general intelligence rather than from a specialized syntactic module.
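The selection dynamic is easy enough to sketch. Here is a minimal toy simulation of my own, not from the Christiansen/Chater paper: learners sample utterances from the previous generation and adopt a variant in proportion to how easy it is to learn, and the easiest form comes to dominate. Every name and number in it (the variants, the learnability scores) is my invention for illustration.

```python
import random

# Toy iterated-learning run. Each word-order variant carries an
# invented "learnability" score; higher means easier to pick up.
LEARNABILITY = {
    "red saw I hat": 0.2,
    "red hat saw I": 0.4,
    "I saw red hat": 0.9,
}

def next_generation(speakers, sample_size=10):
    """Each new learner hears a sample of utterances and adopts one
    variant, weighted by how easily that variant is learned."""
    new_speakers = []
    for _ in range(len(speakers)):
        heard = random.sample(speakers, sample_size)
        weights = [LEARNABILITY[variant] for variant in heard]
        new_speakers.append(random.choices(heard, weights=weights)[0])
    return new_speakers

# Start with a population speaking a random mix of forms.
speakers = [random.choice(list(LEARNABILITY)) for _ in range(200)]
for _ in range(30):
    speakers = next_generation(speakers)

# After a few generations the easiest-to-learn form is the standard.
print(max(set(speakers), key=speakers.count))
```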
The change marks a break with the nativist philosophical tradition revived by Chomsky, but it does not return to the "blank slate" philosophy that nativism replaced. At least, it does not embrace the naïve form held by the old blank slaters. In that tradition, people were said to be born with a mind as blank as a new blackboard, and anything could be written on it. The Christiansen and Chater account denies that there are any innate ideas already written in the mind, but some things can be marked down much more easily than others. The brain has a geometry, i.e., various traits and limitations that make it easy or hard to write something on it. In the case of modern languages, which have a long history of passage from generation to generation, any language an infant must learn is so well adapted to the brain's geometry that it is picked up without any serious difficulty.
The dimensions of this mental geometry include:
Perception: The limitations of attention and memory "may ... force a code that can be interpreted incrementally rather than [by using one of] the many practical codes in engineering in which information is stored in large blocks." On this blog (and not in the Christiansen/Chater paper) I treat language as a pilot of attention. That task, of course, forces incremental communications [attend to X... attend to Y...].
Motor: "The basic phonentic inventory is transparently related to deployment of the vocal apparatus." In other words, the sounds we use are obviously dependent on the sounds we can make. Our use of a stream of air to form words forces a sequential construction of messages. Thus, perception and motor ability work together to shape a language that will express itself serially, either by using a fixed word order or by a fixed order of suffixes or prefixes.
Thought processing: Planning and motor control "involve the extraction and further processing of discrete elements occurring in complex temporal sequences." Language is not just one word after another. For example (I'm supplying the example; Christiansen and Chater are not so strong on examples), in tool-making a person may perform one step in anticipation of a final product. Similarly, a speaker may utter a phrase in anticipation of the whole sentence.
Pragmatic processes: Pragmatics concerns the difference between the literal interpretation of words and the way they are understood. Take a sentence like John arrived and began to sing. That can only be interpreted as meaning John began to sing. We can, however, imagine a language that includes a rule stating that whenever a subject is omitted the first person singular is to be understood. In that case the full sentence would be John arrived and I began to sing. But no language has this imaginary rule, and nobody has to be taught that the omitted subject in the second half of the sentence is John. Even very small children understand the sentence correctly without having to be explicitly taught this rule. The old theory was that the rule for understanding the meaning of this anaphoric sentence must be built into the child from birth, and indeed the rule does have to be specifically programmed into a computer if you want it to use language correctly. The Christiansen/Chater paper says that the understanding comes from the pragmatic working of the brain and does not require a special linguistic module. Many of the rules that seem particularly arbitrary to students of formal linguistics are of this type and, say Christiansen and Chater, may reflect pragmatic constraints (dimensions).
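To see how unnatural the rule looks once it is spelled out for a machine, here is a minimal sketch of my own (not from the paper) of what a computer must be told explicitly. The toy clause format and the function name are invented for illustration.

```python
# Toy representation: each clause is a dict; a missing subject is None.
def resolve_omitted_subjects(clauses):
    """Explicit rule: a clause with no subject inherits the subject of
    the preceding clause; it is never a default first person singular."""
    antecedent = None
    resolved = []
    for clause in clauses:
        subject = clause["subject"]
        if subject is None:
            subject = antecedent  # the part no child has to be taught
        else:
            antecedent = subject
        resolved.append({"subject": subject, "verb": clause["verb"]})
    return resolved

# "John arrived and began to sing"
sentence = [
    {"subject": "John", "verb": "arrived"},
    {"subject": None, "verb": "began to sing"},
]
print(resolve_omitted_subjects(sentence))
# -> the second clause's subject resolves to "John", not "I"
```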
Generative grammar does not include these dimensions, so some other kind of grammar must take its place. Christiansen and Chater support "the 'lexical turn' in linguistics, focusing on specific lexical items [i.e., words] with their associated syntactic and semantic information." (This argument was anticipated at the Barcelona meeting; see: Words Are More Human than Syntax.) "Specifically, we adopt a Construction Grammar view of language, proposing that individual constructions consisting of words or combinations thereof are among the basic units of selection." Construction Grammar is an approach that focuses on words and stereotypical units like spick-and-span rather than syntactic categories like noun phrases. It studies language on a much less abstract level than generative approaches.
The idea of a "basic unit selection" is essential to the Christiansen/Chater paper and plays no role at all in generative accounts of language origins. The idea that language evolved is taken literally by Christiansen and Chater.
- Words and set phrases play the role of genes, the units that are selected.
- Languages are the equivalent of species.
- An idiolect (the speech of an individual) is the product of its constructions, just as an individual organism is the product of its genes.
- A language is a set of mutually intelligible idiolects, just as a species is a set of mutually interbreeding individuals.
- In biological evolution, species evolve according to their environmental fitness. In linguistic evolution, languages evolve according to their adaptation to the mental geometry of speakers.
Although the idea is similar to Terrence Deacon's notion of the co-evolution of brain and language, it is not the same, because it sees evolution in one direction only. Language adapts to the brain; brain and language do not adapt to each other. Deacon proposes a mechanism known as the Baldwin effect, by which learned behavior can enter the genetic code. But Christiansen and Chater have run simulations indicating that the Baldwin effect does not work if language evolves more rapidly than the species does.
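Their simulations are more elaborate than anything I can reproduce here, but the logic is easy to sketch. The toy model below is my illustration, not the paper's model, and every parameter value is invented. Each agent carries a heritable bias, agents whose bias matches the current shape of the language do better, and the language drifts each generation. When the drift outruns selection, the genes never catch up and nothing gets assimilated.

```python
import random

# Invented parameters, for illustration only.
POP, GENERATIONS = 200, 300
LANGUAGE_DRIFT = 0.5   # how fast the language changes per generation
MUTATION = 0.05        # how fast the gene pool can change

language = 0.0
genes = [random.gauss(0.0, 1.0) for _ in range(POP)]

for _ in range(GENERATIONS):
    language += random.gauss(0.0, LANGUAGE_DRIFT)  # language moves quickly
    # Agents whose innate bias matches the current language do better...
    fitness = [1.0 / (1.0 + abs(g - language)) for g in genes]
    # ...but selection plus slow mutation tracks the target sluggishly.
    genes = [
        random.choices(genes, weights=fitness)[0] + random.gauss(0.0, MUTATION)
        for _ in range(POP)
    ]

mean_bias = sum(genes) / POP
print(f"language is at {language:.2f}; mean innate bias is at {mean_bias:.2f}")
# The gap typically stays large: no Baldwinian assimilation of a moving
# target. Shrink LANGUAGE_DRIFT toward zero and the genes do catch up.
```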
The idea is also different from Richard Dawkins' idea of memes or the "genes" of culture. Christiansen and Chater say of meme-theorists, "[Their] explanations of fashions (e.g., wearing baseball caps backwards), catchphrases, memorable tunes, engineering methods, cultural conventions and institutions (e.g., marriage, revenge killings), scientific and artistic ideas, religious views, and so on, seem patently to be products of sighted watchmakers; i.e., they are products, in part at least, of many generations of intelligent designers, imitators, and critics." In the Christiansen/Chater theory of how the brain shapes language, the process is unconscious and undirected. The structure emerges without conscious intervention.
The paper's limitation is that it lists dimensions but does not show how they work. It is not an achievement of the first rank that puts all questions behind it, but then we cannot really expect or demand the kind of Einsteinian master stroke that says: here are the dimensions, here is the geometry, and here is the formula for drawing a line across the map. What we have instead is a tentative list of dimensions, a vague sense of how the geometry works, and no formula at all for predicting the results. What we need now are a number of Galileos who can work out the rules of movement through this mental space.
More next week.
So now we have two evolutions: a genetic evolution of Homo brains that resulted in very early language, followed by a linguistic evolution of languages. There may have been further genetic evolution of the brain areas used for speech, but this would have been much less significant than the linguistic evolution once languages were being transmitted. We need two evolution narratives, but each will be easier to envision than a single narrative that is either genetic or non-genetic. I like it.
Posted by: JanetK | July 06, 2008 at 01:55 PM
1. Before language could evolve it had to first exist, and this metaphor (as they specifically call it) has no explanatory power for the original creation. Nor can it explain why creating a language out of nothing (Nicaraguan Sign Language) requires no longer a time frame than acquiring an existing, already highly evolved language. OTOH, if what 'evolved' is communication, then pre-linguistic and even non-human abilities are included in an uninterrupted continuum, and there is no prediction of different learning speeds.
2. Pragmatics dictates that we sign words like 'look' or 'give' with movement from the speaker towards the object. To see why, try signing 'I see you' while moving your hand toward yourself. Since the speaker [first person singular] is the one performing the action, a special sign for “I” is pretty redundant too, thus the rule about null subjects.
Languages that are not limited by the vocal apparatus naturally lexicalize these into non-sequential constructions—“Did you give it to her?” is all one word in ASL—but processing constraints set limits. When we shift to using the vocal tract we have to resort to fixed orders, suffixes, and prefixes, due to constraints on production. All this is consistent with the accounts given here, and contra generative models.
Posted by: watercat | July 06, 2008 at 11:46 PM
@watercat
I must confess that I'm not getting what you mean by your last statement.
If you say: "Languages that are not limited by the vocal apparatus naturally lexicalize these into non-sequential constructions"
What about the SOV word order of both Al-Sayyid Bedouin Sign Language and Nicaraguan Sign Language?
And if you say that "Did you give it to her?" is all one word in ASL, isn't the same true for agglutinating languages such as the various Eskimo languages?
Homesign also has an SOV, or only OV, word order, for reasons you succinctly pointed out.
The SOV order needn't be (and probably isn't) an innate linguistic feature, even though Susan Goldin-Meadow's recent research published in PNAS has shown that speakers of languages with differing word orders all use it when describing events nonverbally. But I don't see how all this is "contra generative models," because generativists do not deny that there are other important cognitive principles that influence and shape the form of language.
According to Chomsky, UG denotes the genetic structure that enables a human child, but not a frog, to learn language. It could well turn out that there are no language-specific parts of UG; in that case generative models would not be wrong, but only incomplete. Thus both approaches would have to work toward each other to determine the mental structures, as well as the structure of the input, that enable language learning. If both sides are open-minded (O idealistic me), I don't think that there should be a problem.
I think it would be fruitful to contrast this article with, say, Jackendoff's 2007 or 2005 article in "The Linguistic Review", to determine where generativist and construction approaches can find common ground, and where they really differ. Jackendoff, for example, argues that many people overlook "far more robust arguments for Universal Grammar (perhaps because the one who gave them wasn't Chomsky)."
After all, the aim is to explain all of language and find "a more nuanced account of the balance between general and special" cognitive features that enable language, and one major value of the generativist enterprise is that it gives one view of what needs to be explained in the first place.
Posted by: Michael | July 07, 2008 at 12:40 PM
Yes, in Aleut the question above would be one word, by joining morphemes together in sequential order. The difference is that visual languages produce morphemes at the same time, so the ASL sentence is just one syllable! The facial expression for a question, the hand shape for the direct object, and the movement of the verb all occur simultaneously. One study cites a syllable/phrase with 14 separate non-syncretic morphemes.
This sort of thing can't be done in all cases though. Articulatory constraints prevent some signs from being modulated, and these must be produced in order, giving rise to the conventional word orders.
My understanding of linguistic universals is that they must be unique to language, and if none are then you would no longer have a generative model. I could be wrong.
Posted by: watercat | July 08, 2008 at 03:18 PM
Thanks for clearing that up. You're certainly right about the linguistic universals, but I think there are still some proposals for specifically linguistic universals or language-specific computational operations out there. It is certainly right, though, that GG is often wrong in its disregard of general cognitive principles and performance factors.
As Jackendoff says:
"everyone will have to give a little in order for the pieces to fit together."
As far as I see, one language-specific/universal aspect of both ASL and spoken language would be duality of patterning / unbounded recursive merge, which creates larger meaningful wholes by merging simpler units together and adjoining them. You could then opt for a hierarchy of Projection Rules (a term from the Minimalist Program which, I must confess, I don't properly understand). By this I simply mean that the language system may have come to exhibit (innate or learned doesn't matter here) a certain hierarchy of when to apply its operations. If, following numeration (i.e. the retrieval of "words" from the mental lexicon), the duality of patterning/unbounded merge phase is prior to the application of word order, the differences between ASL and spoken languages could be explained in these terms. Due to its different performance structures, ASL would work more with non-syncretic morphemes built by the unbounded merge/patterning operation and would then apply the word-ordering operation, whereas spoken language, due to its more stringent "linearity", as Hockett and de Saussure would say, would rely more on word ordering than ASL. The language system would still have a language-specific structure and there would be language universals. This was just meant as one example of how these facts could be incorporated into a generative model, and isn't meant as a serious argument. If the weight of the evidence points in the other direction, and there are in fact no (innate) language-specific operations or architectural constraints, generativist models would be wrong. But from my perspective this is still an open question.
Posted by: Michael | July 09, 2008 at 04:01 AM
I don't see any reason to assume, automatically, that language is produced or understood by a sequence of operations as opposed to simultaneous operations. While we are metaphorically 'looking up the words', we are also dealing with what we expect to hear in the context of the conversation. At the same time we are analyzing the grammar, starting with the sound envelopes of the phrasing, etc. At some point it all fits together and we have the meaning.
We should forget sequential operations in the brain unless we have some evidence for them in some particular context. Sequential operations would just be altogether too kludgey to even be a 'kluge'. Think simultaneous.
---------------------------------
BLOGGER: The sequential task I was talking about was putting one word after another.
Posted by: JanetK | July 10, 2008 at 05:24 AM
@ JanetK.
Good point.
Just for the sake of the argument, a hierarchy doesn't need to be sequential, and we could rephrase my thought experiment in terms of the parallel satisfaction of interface conditions, as in Jackendoff's parallel architecture and, I think, Optimality Theory, both nonmainstream generative approaches.
I think Optimality Theory explicitly states that there is a ranking, hierarchy, and valency of operational procedures and interface constraints. So, in sign language, the ranking of the 'simultaneous linguistic expression interface' would simply be higher than in spoken language.
I don't know anything about this topic, but a Google search for evidence of sequential operations across a wide array of cognitive tasks hints that there are both simultaneous and sequential operations in cognition.
Posted by: Michael | July 10, 2008 at 05:51 AM