
Selected Books by Edmund Blair Bolles

  • Galileo's Commandment: 2500 Years of Great Science Writing
  • The Ice Finders: How a Poet, a Professor, and a Politician Discovered the Ice Age
  • Einstein Defiant: Genius vs Genius in the Quantum Revolution




So now we have two evolutions – a genetic evolution of Homo brains that resulted in very early language, followed by a linguistic evolution of languages themselves. There may have been further genetic evolution of the brain areas used for speech, but this would have been much less significant than the linguistic evolution once languages were being transmitted. We need two evolution narratives, but each will be easier to envision than a single narrative that is either genetic or non-genetic. I like it.


1. Before language could evolve it had to first exist, and this metaphor (as they specifically call it) has no explanatory power for the original creation. Nor can it explain why creating a language out of nothing (Nicaraguan Sign Language) requires no longer a time frame than acquiring an existing, already highly evolved language. OTOH, if what 'evolved' is communication, then pre-linguistic and even non-human abilities are included in an uninterrupted continuum, and there is no prediction of different learning speeds.

2. Pragmatics dictates that we sign words like 'look' or 'give', with movement from the speaker towards the object. To see why, try saying 'I see you' while moving your hand toward yourself. Since the speaker [first person singular] is the one performing the action, a special sign for “I” is pretty redundant too, thus the rule about null subjects.
Languages that are not limited by the vocal apparatus naturally lexicalize these into non-sequential constructions—“Did you give it to her?” is all one word in ASL—but processing constraints set limits. When we shift to using the vocal tract we have to resort to fixed orders, suffixes, and prefixes, due to constraints on production. All this is consistent with the accounts given here, and contra generative models.



I must confess that I'm not getting what you mean by your last statement.
If you say: "Languages that are not limited by the vocal apparatus naturally lexicalize these into non-sequential constructions"
What about the SOV word order of both Al-Sayyid Bedouin Sign Language and Nicaraguan Sign Language?
And if you say that "Did you give it to her?" is all one word in ASL, isn't the same true for agglutinating languages such as the various Eskimo languages?
Homesign also has an SOV, or at least OV, word order, for the reasons you succinctly pointed out.
The SOV order needn't be (and probably isn't) an innate linguistic feature, given that Susan Goldin-Meadow's recent research published in PNAS has shown that speakers of languages with differing word orders all fall back on SOV order when describing events in gesture. But I don't see how all this is "contra generative models," because generativists do not deny that there are other important cognitive principles that influence and shape the form of language.
According to Chomsky, UG denotes the genetic structure that enables a human child, but not a frog, to learn language. If it turned out that there are no language-specific parts of UG, generative models would not be wrong, only incomplete. Both approaches would then have to work toward each other to determine the mental structures, as well as the structure of the input, that enable language learning. If both sides are open-minded (O idealistic me), I don't think there should be a problem.

I think it would be fruitful to contrast this article with, say, Jackendoff's 2005 or 2007 article in "The Linguistic Review," to determine where generativist and construction approaches can find common ground, and where they really differ. Jackendoff, for example, argues that many people overlook "far more robust arguments for Universal Grammar (perhaps because the one who gave them wasn't Chomsky)."
After all, the aim is to explain all of language and find "a more nuanced account of the balance between general and special" cognitive features that enable it, and one major value of the generativist enterprise is that it gives one view of what needs to be explained in the first place.


Yes, in Aleut the question above would be one word, by joining morphemes together in sequential order. The difference is that visual languages produce morphemes at the same time, so the ASL sentence is just one syllable! The facial expression for a question, the hand shape for the direct object, and the movement of the verb all occur simultaneously. One study cites a syllable/phrase with 14 separate non-syncretic morphemes.
This sort of thing can't be done in all cases though. Articulatory constraints prevent some signs from being modulated, and these must be produced in order, giving rise to the conventional word orders.
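The contrast being drawn here, between sequential and simultaneous morphology, can be pictured with a toy data structure (the morphemes and channels below are invented for illustration, not real Aleut or ASL data): a spoken agglutinating word concatenates its morphemes in a row, while a signed "syllable" is more like a set of values on parallel articulatory channels.

```python
# Toy illustration: the same three morphemes packaged sequentially
# (spoken, agglutinating) vs. simultaneously (signed).
# All morpheme labels here are hypothetical placeholders.

# Sequential: morphemes are concatenated in a fixed order.
spoken_word = "-".join(["give", "her", "Q"])  # one word, three slots in a row

# Simultaneous: each morpheme occupies its own articulatory channel,
# and all channels are produced at the same time, in one "syllable".
signed_syllable = {
    "movement": "give",   # the verb carried by the movement
    "handshape": "her",   # object agreement carried by the handshape
    "face": "Q",          # the question marker carried by the face
}

# Either way the morpheme count is the same; only the packaging differs.
assert len(spoken_word.split("-")) == len(signed_syllable)
```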

My understanding of linguistic universals is that they must be unique to language, and if none are then you would no longer have a generative model. I could be wrong.


Thanks for clearing that up. You're certainly right about the linguistic universals, but I think there are still some proposals for language-specific universals or computational operations out there. It is certainly right, though, that GG often goes wrong in its disregard of general cognitive principles and performance factors.
As Jackendoff says:
"everyone will have to give a little in order for the pieces to fit together."
As far as I see, one language-specific/universal aspect of both ASL and spoken language would be duality of patterning / unbounded recursive merge, which creates larger meaningful wholes by merging simpler units together and adjoining them. You could then opt for a hierarchy of projection rules (a term from the Minimalist Program which, I must confess, I don't properly understand). By this I simply mean that the language system may have come to exhibit (innate or learned doesn't matter here) a certain hierarchy of when to apply its operations.

If, following numeration (i.e. the retrieval of "words" from the mental lexicon), the duality-of-patterning/unbounded-merge phase is prior to the application of word order, the differences between ASL and spoken languages could be explained in these terms. Due to its different performance structures, ASL would work more with non-syncretic morphemes built by the unbounded merge/patterning operation and would then apply the word-ordering operation, whereas spoken language, due to its more stringent "linearity," as Hockett and de Saussure would say, would rely more on word ordering than ASL. The language system would still have a language-specific structure and there would be language universals.

This was just meant as one example of how these facts could be incorporated into a generative model, and isn't meant as a serious argument. If the weight of the evidence points in the other direction, and there are in fact no (innate) language-specific operations or architectural constraints, generativist models would be wrong. But from my perspective this is still an open question.
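The "merging simpler units into larger wholes" idea can be sketched as a two-line recursive constructor (a toy only: Minimalist Merge is usually defined as unordered set formation with labeling, and the words and structure below are illustrative, not a serious analysis):

```python
# Minimal sketch of binary Merge: combine two syntactic objects into a
# larger constituent. Tuples stand in for the unordered sets of real
# Minimalist syntax; the point is only that repeated application of one
# simple operation yields unbounded hierarchical structure.

def merge(a, b):
    """Combine two syntactic objects into a new constituent."""
    return (a, b)

# Numeration: "words" retrieved from a (toy) mental lexicon.
did, you, give, it, to, her = "did", "you", "give", "it", "to", "her"

# Recursive application builds hierarchy, not a flat string:
vp = merge(give, merge(it, merge(to, her)))
clause = merge(did, merge(you, vp))

def depth(x):
    """Depth of embedding: a bare word has depth 0."""
    return 1 + max(map(depth, x)) if isinstance(x, tuple) else 0
```

Here `depth(clause)` comes out as 5, whereas a flat six-word string would have no embedding at all, which is the sense in which merge "creates larger meaningful wholes" rather than mere sequences.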


I don't see any reason to assume, automatically, that language is produced or understood by a sequence of operations as opposed to simultaneous operations. While we are metaphorically 'looking up the words,' we are also dealing with what we expect to hear in the context of the conversation. At the same time we are analyzing the grammar, starting with the sound envelopes of the phrasing, and so on. At some point it all fits together and we have the meaning.
We should forget sequential operations in the brain unless we have some evidence for them in some particular context. Sequential operations would just be altogether too kludgey to even be a 'kluge.' Think simultaneous.
BLOGGER: The sequential task I was talking about was putting one word after another.


@ JanetK.
Good point.
Just for the sake of argument, a hierarchy doesn't need to be sequential, and we could rephrase my thought experiment in terms of the parallel satisfaction of interface conditions, as in Jackendoff's parallel architecture and, I think, Optimality Theory, both nonmainstream generative approaches.
Optimality Theory, I think, explicitly posits a ranking, hierarchy, and valency of operational procedures and interface constraints. So, in sign language, the ranking of the 'simultaneous linguistic expression interface' would simply be higher than in spoken language.
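That Optimality-Theory idea can be made concrete with a small sketch (the constraint names and candidates below are invented for illustration; real OT tableaux are far richer): candidates are compared against a ranked list of violable constraints, and reranking a single constraint, such as one favoring simultaneous expression, changes which output wins.

```python
# Toy Optimality-Theory evaluation: each constraint counts violations,
# and candidates are compared lexicographically by their ranked
# violation profiles (highest-ranked constraint decides first).
# Constraints and candidates are hypothetical, purely illustrative.

def evaluate(candidates, ranked_constraints):
    """Return the candidate with the best (lowest) violation profile."""
    return min(candidates,
               key=lambda c: [con(c) for con in ranked_constraints])

# Two hypothetical realizations of the same message:
SIMULTANEOUS = {"form": "simultaneous", "slots": 1}  # morphemes layered
SEQUENTIAL = {"form": "sequential", "slots": 4}      # morphemes in a row

# Hypothetical violable constraints (0 = satisfied):
def be_compact(c):   # penalize each extra sequential slot
    return c["slots"] - 1

def be_linear(c):    # penalize non-linear (layered) expression
    return 0 if c["form"] == "sequential" else 1

# Sign-language-like ranking: compactness outranks linearity.
assert evaluate([SIMULTANEOUS, SEQUENTIAL],
                [be_compact, be_linear]) is SIMULTANEOUS

# Spoken-language-like ranking: linearity outranks compactness.
assert evaluate([SIMULTANEOUS, SEQUENTIAL],
                [be_linear, be_compact]) is SEQUENTIAL
```

The two assertions show the commenter's point in miniature: the grammar machinery is identical in both cases, and only the constraint ranking differs between the signed and spoken settings.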

I don't know anything about this topic, but a Google search for evidence of sequential operations in a wide array of cognitive tasks hints that there seem to be both simultaneous and sequential operations in cognition.
