The book Attention and Meaning: The Attentional Basis of Meaning is downloadable from the Mind-Consciousness-Language website. It must be read online, but what do you want for free?
The German edition of The Ice Finders.
Some years ago a book of mine appeared and told the story of the discovery of the ice age. The idea of an ice age met a lot of resistance at first because it seemed profoundly unscientific. Glaciers the size of continents were unknown and sounded like the sort of fantasy that dreamers always propose. Back then, scientific geologists believed that the same slow processes visible in 1840 were enough to explain all the geological markings on the earth. Furthermore, glaciers were believed to be unable to flow uphill. Rivers can only flow downhill, and what were glaciers if not frozen rivers? There was plenty of physical evidence of a recent ice age, leftover moraines and large boulders scattered about, and every so often a geologist would look at this evidence and be converted. But that process was slow, and it took decades for geologists as a group to come around.
I recall that story because I can see something similar happening in language studies right now. The idea that language works by piloting attention is so novel that it is resisted by sheer inertia. It too seems unscientific, in the sense that a computer cannot work by using perception instead of symbolic concepts. Yet there is a steady process of individual conversions that has gathered enough steam to support a collection of essays (see Attention and Meaning), and I feel confident that eventually the resistance will collapse. Already the arguments that the classical solutions "must be" on the right track can be answered.
A few weeks ago I wrote on this blog that I was no longer intimidated by the classical "must be" claims like hierarchical structure, displacement/movement, recursion, and minimality. A reader, Gary Briscoe, has asked me to enlarge on my offhand rejection, and I felt he had a point, even though technical complaints have always been secondary to my main objection to generative linguistics, which is that it does not connect language to anything about humanity, or culture, or history. Still, Briscoe is correct that since I mentioned four technical issues raised many years ago on a now-quiet blog called The Lame and the Blind, I should clarify my reference.
It is obvious that language can be analyzed hierarchically: words combine into phrases, which combine into larger phrases and clauses, which can be sorted into subject and predicate, which combine into a full sentence. The same can be said of mental processes, in which sensory data becomes a percept and percepts become multi-sensory images. The question is whether this hierarchical analysis plays a role in the construction and interpretation of sentences. There is room for doubt. For example, the production of speech is so fast that there is barely time for serial processing, let alone hierarchical analysis and interpretation.
The L&B blog compares two sentences. English syntax forbids *Himself likes John but allows I saw the picture of himself that John likes. L&B explains that a differently organized sentence was originally generated in the mind and that the phrase of himself was then moved through the hierarchy to its "overt phonological position."
Let's notice immediately that this sentence is intelligible only because the I-himself difference is unambiguous. If the sentence were He saw the picture of himself that John likes, we would assume the picture was of him, whoever he is. If the picture were of John, the sentence would have to be something else, perhaps He saw the picture of John that John himself likes. So what happens to all that hierarchical movement and analysis? Can it really be that the substitution of one pronoun for another results in a completely different linguistic hierarchy?
Let's also note that the sentence is still awkward. If I were working on a manuscript, I would probably change the sentence to something like: I saw that picture John likes of himself.
It may be that L&B has chosen a poor example, but an alternative parsing of the sentence, based on attention and working memory, is possible. (See my paper Attention-Based Syntax, and an online paper by Stefan Frank, Rens Bod, and Morten Christiansen published by the Royal Society in 2012.)1
L&B states that in some sentences words are interpreted as being in a different position from where they appear in public. For example, English usually puts the subject before the object, but in What did John read? the subject John appears after the object what. But is what really the object of the verb read? Perhaps it is the subject of the verb did.
I am not being completely serious with that flip answer, but I have a serious point. Why assume that the syntactical rules for interrogatives are the same as those for normal sentences? If we say that language works by piloting attention, we notice promptly enough that interrogatives like what and who do not pilot attention. They are attentional blank spots. Interrogatives have two parts: the section that directs attention, as in a normal, informative statement—John read—and the part that pilots attention nowhere—What did. There is no reason for asserting a priori that the interrogative structures are syntactically like the informative ones.
All languages can "combine phrases grammatically … to an infinite degree." Memory puts a limit on intelligibility, but generative grammarians pay no attention to the psychology of the matter. That exclusion strikes me as arbitrary, but L&B's critical sentence says, "Not only is [recursion] universal to human language, but it also seems unique to human language. No other species has been convincingly demonstrated to [have the ability] to detect or produce recursive patterns." So what? Language is useful because it allows people to learn things from one another. Apart from eusocial insects, other animals don't share news, so why expect linguistic features in their howls? I can list a great many aspects of language that are not found in animal signals. Why single out recursion for special notice?
A tennis referee cries, "Out." That's not recursive, but it is informative in a way unknown to other animals. The generative focus on such an abstract, secondary feature of language is symptomatic of the generative insistence on ignoring the function of language.
L&B writes, "It is a surprising and universal fact about language that dependencies between two elements in a sentence cannot be interrupted by a third element of the same type." The blog uses another interrogative example, but its statement would be clearer with a normal, informative sentence (as indeed it provides in a Swahili example). Consider Peter gave his girlfriend Jane a book. In this sentence "Jane" is a dependency of "his girlfriend." What L&B finds "surprising and universal" is that we cannot interrupt these two; for example, we cannot say *Peter gave his girlfriend from London Jane a book. I don't deny that dependencies cannot be interrupted like that; what I deny is that it is surprising. If you say that language works by directing attention, you would expect interjections that disrupt attention to be forbidden.
My correspondent, Mr. Briscoe, has asked me to "refute" these universals, and I doubt that I have done that. Instead, I have offered alternate discussions of the phenomena. It is up to each person to decide whether they prefer arbitrary, a priori assumptions that seem plausible because they depend on computational processes available to a machine. I suspect that an explanation linking language syntax to function will eventually win the competition to explain the observed facts.
1 According to attention-based syntax, the example sentence can be parsed:
[|I| /saw/ ||(the) picture| <of> |(himself)||] [<that> |John| /likes/]
The critical feature of this parse is that there are two complete topics (what I call bounded perceptions), which I have marked between square brackets: [I saw the picture of himself] [that John likes]. Sentences are always easier when they take one topic at a time. Adding to the complication, both topics have the same object, "the picture of himself": I saw |object| and John likes |object|.
Overlapping sentences like this work best when the object of the first topic is the subject of the second one, e.g. [The dog bit the man] [who stole the television].
Complicating it still further, the object contains a noun and a pronoun (what I call static phenomena), and the pronoun is reflexive.
A solution is to separate the noun from the reflexive pronoun and put one part of the object in each topic: [I saw that picture] [John likes of himself]. Working memory keeps the break in attention from becoming a problem.
Note that the solution is not a computation. Instead, it requires editorial skill. Yet it is a skill that can be taught because it is based on an understanding of how sentences work.
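Readers who like to see notation spelled out can picture the parse as plain data. The sketch below is purely my own illustration (nothing in my paper proposes it, and, as just said, nothing here computes the parse); the role labels are simply my reading of the delimiters: | | for static phenomena, / / for verbs, < > for relational words, one list per bracketed topic.

```python
# My own illustrative encoding of the bracket notation above; the
# role labels are my reading of the delimiters, not a formalism
# taken from the blog post or the paper.

original = [
    # [ |I| /saw/ ||(the) picture| <of> |(himself)|| ]
    [("static", "I"), ("verb", "saw"),
     ("static", "(the) picture"), ("rel", "of"), ("static", "(himself)")],
    # [ <that> |John| /likes/ ]
    [("rel", "that"), ("static", "John"), ("verb", "likes")],
]

# The edited sentence keeps one topic at a time and splits the noun
# and the reflexive pronoun between the two topics:
# [I saw that picture] [John likes of himself]
edited = [
    [("static", "I"), ("verb", "saw"), ("static", "that picture")],
    [("static", "John"), ("verb", "likes"), ("rel", "of"), ("static", "himself")],
]

for topic in edited:
    print(" ".join(word for _, word in topic))
```

Nothing here parses anything; it only records the topic boundaries that the editorial rewrite respects.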
One of the interesting features of the book Attention and Meaning is the way different authors have personal stories about how they found their way to interest in attention. Chapter 4 is by Kai-Uwe Carstensen, whose web page either boasts or confesses, "I am one of the few who believe that selective attention plays an important role in cognition, much more important than currently acknowledged." His chapter is titled, "A Cognitivist Attentional Semantics of Locative Prepositions," and states early on, "Attention-related phenomena… seem to have come to the fore only recently…. In this chapter, I will show that this is by no means warranted and that… attention must be regarded as a phenomenon at the heart of the field, and as an essential link in the relation of language and space" [p. 94].
He began his studies in the usual way, a graduate student with a distinguished professor as a teacher. He was interested in how locative prepositions work and began following an orthodox path. Prepositions like in, on, and above are particularly maddening because they seem so straightforward, but when you study them closely their logic seems to evaporate. Why does most of America say wait in line while New Yorkers wait on line? What is the common thread that unites a helicopter hovered over the house, clouds are over the sun, John lives over the hill, the game is over, etc.? Carstensen began taking some notice of attention's role in the early 1990s, and in 1995 Gordon Logan published an important paper arguing that the perception of space requires a shift in attention. Logan's experiments showed that "spatial relations do not 'pop out' (i.e., are not directly consciously available as a whole) but always involve attentional shifts." Sufferers from Balint's syndrome have, as one of their symptoms, an inability to shift attention between objects and are, therefore, unable to perceive the spatial location of the object that does receive attention. It seems we do not see things in space by attending to an object but by looking at one thing and then another. As far as perception is concerned, space is the relationship between points of serial attention.
Carstensen developed his hypothesis: "selective attention makes implicit spatial relations by imposing an order in the visuo-spatial processing of the involved objects." In other words, spatial relations are specified by the order of shifting attention.
There might seem to be a very large number of ways attention can be shifted, but fortunately the gestalt psychologists have managed to reduce the number of things perceived to two categories: the figure and the ground. Carstensen calls the figure the LO (locative object) and the ground the RO (reference object), but that has proved too confusing for me and I'm sticking with gestalt terminology. Gestalt psychology leaves us with only three possible attention shifts: figure to figure, figure to ground, and ground to figure. (A fourth, ground to ground, does not work; one of the grounds becomes a figure.)
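The arithmetic behind "only three" can be spelled out in a few lines. This is a trivial sketch of my own, not anything from Carstensen's chapter:

```python
from itertools import product

roles = ("figure", "ground")

# All ordered attention shifts between the two gestalt roles,
# minus ground-to-ground: a second attended ground would simply
# become a figure.
shifts = [(src, dst)
          for src, dst in product(roles, repeat=2)
          if (src, dst) != ("ground", "ground")]

print(shifts)
# [('figure', 'figure'), ('figure', 'ground'), ('ground', 'figure')]
```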
Of course there are many more than three locative expressions, so changes in the focus of attention cannot be the whole story. Carstensen distinguishes between the linguistic and the conceptual roles of spatial location. The linguistic roles shift attention; the conceptual ones are the many culturally determined conventions for discussing space. As many as a third of all languages, for example, may not use right and left distinctions. These cultural details specify, so to speak, where to look for a figure or its ground.
A series of research questions presents itself. These observations suggest that the universals of language and space may have to do with shifts of attention; all other aspects depend on the culture.
Sometimes I still enjoy listening to my old, analog LP records, even with their snaps, crackles and pops.
If we are going to argue that language is a system for harnessing attention, we ought to be clear which of the two general theories of attention we are talking about: information-oriented or consciousness-oriented.
Information-oriented attention was proposed by Donald Broadbent in the 1950s and is still favored by artificial intelligence investigators who seek to model attention on a computer. It defines attention as a filtering process that buffers some input before it moves on to short-term storage; however, it is very different from the sort of attention considered on this blog.
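The filter idea is easy to caricature in code. Here is a toy sketch of my own, assuming nothing about Broadbent's model beyond the filter-then-buffer outline just given:

```python
from collections import deque

def attend(stream, relevant, buffer_size=4):
    """Filter a stream of inputs and buffer the survivors
    ahead of short-term storage."""
    buffer = deque(maxlen=buffer_size)  # limited-capacity buffer
    for item in stream:
        if relevant(item):              # the selective filter
            buffer.append(item)
    return list(buffer)                 # handed on to short-term storage

inputs = ["noise", "voice: hello", "hum", "voice: goodbye"]
print(attend(inputs, lambda s: s.startswith("voice:")))
# ['voice: hello', 'voice: goodbye']
```

Notice that everything here is data shuffling; nothing in the sketch is aware of anything.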
The Rosetta Stone
Last June I posted a three-part series titled "I'm Tired of Chomsky" in which I summarized Chomsky's theory, put forth an alternate theory, and reached some conclusions (Part 1, Part 2, Part 3, All 3 in 1 PDF). At the end I found that there was some overlap in our ideas about language:
But, as the saying goes, you can't beat something with nothing. Chomsky and his many admirers have produced an elaborate system of analysis that has its limitations—how seriously can anybody take an account of language that does not explain how we are able to communicate knowledge?—but has the great virtue of actually existing. I have felt for some time that somebody ought to produce an account of how language can evoke not just images, but complete ideas that hang together.
Giraffes check me out as I check them out.
Attention is much older than the genus Homo but we have turned it into a liberating power. Attention began as a reflex action. Something unexpected happens—there is a sudden noise, bright color, disgusting scent, hard poke—and an animal focuses on it, becomes aware of it. I was once on a walk in Zambia and far away, maybe a quarter of a mile distant, viewed from one ridge to another, a giraffe came out of a clearing and began walking down a slope toward water. It was the first time I ever saw a giraffe before it saw me. One of the people I was with whispered, "Look," and the giraffe heard the word. It stopped and stared straight at us. Then it turned and retreated back into the woodlands. There you have the classic animal use of attention: focus, awareness, action. This behavior is millions of years old and allows animals to include some adaptability in their actions. Animals with attention have an either/or switch in their system. They can focus, become aware and act in either of a couple of ways. They are not bugs accidentally orbiting a light bulb until either they drop in exhaustion or the light goes off.
I started this blog almost nine years ago, not being sure I had enough for nine days' worth of posting. To get myself started I prepared about 15 posts in advance of launch. As I began my research, almost immediately I came across the notion of "joint attention," two or more individuals paying attention to the same thing and knowing they are sharing their attention. Joint attention turns two individuals into a self-conscious unit, and supports cooperation by enabling members of the unit to think of themselves as an us. Linguistic interactions depend upon joint attention. In an early post I reported where this research had led me: "If you are ever pressed to state the difference between humans and the rest of the world's fauna in 10 words or less, try this one: We alone pay attention to the thoughts of others." I ended the post, "Once the world had creatures who paid joint attention to each other's subjective ideas, people had appeared. It may have been a long time before they could pronounce things clearly, and organize their speech into recursive sentences, but they were already people."
"I am not told. I am the verb, sir, not the object,"
—Alan Bennett, The Madness of King George
One of the regular frustrations of studying for this blog comes from the number of papers I read by people who argue as though, because language and mathematics both manipulate symbols, they can both be described by the same generalizations. They cannot. Take for example the differences that arise from the presence of verbs in language and their absence in mathematics.
If we think of a sentence as working like a solar system, then the sentence's sun is its verb. The sentence gets its dynamism from the verb's gravity, and all parts of the sentence are related to one another through the verb. Mathematics lacks that kind of unifying authority.
Ancient coin showing Fides, goddess of trust. The acceptance of coins depends on placing one's trust in their continued worth.
Why do only people have language? Because only people trust one another with their secrets.
What secrets do you mean? Secrets of the heart? Secrets of the heart and more. The essential premise of natural selection concerns competition. In most cases it is a war of all against all. In that struggle, if you are not one up, you are one down; ties are not good enough. Why not? Imagine two competitors, both equally fit for the task before them. In that case, the victor will be random. In a coin toss, it is always better to have a rigged coin. Thus, everything known only to one individual provides a competitive edge and is best kept secret.
I was pained this week to see a column in the New York Times (here) announcing the pending death of Oliver Sacks. Sacks wrote the news himself, so of course the report was both sharp and humane. Ever since Awakenings and The Man Who Mistook His Wife for a Hat, he has proven that a scientific imagination can bring much to the observation of human nature. It is good to be reminded of that fact, since the stereotypical scientist is a man who is logical, abstract, and utterly bewildered by ordinary human behavior.
Thank heavens for parents who post baby babbling videos on YouTube.
The basic fact of this blog is that now the whole human species uses language, but at one time none of our ancestors did. The basic question of this blog is: what happened to make us a verbal species?
A basic fact of the world holds that although language is natural in adults, it is absent in all newborns. So what happened in between?
Before I get distracted by too much nit-picking, let me get to the summary paragraph: Thomas Scott-Phillips' book, Speaking Our Minds, contributes seriously to the study of language origins. First and foremost, it demands that pragmatics—the study of language in its social context—be included in the effort to understand language origins. What's more, it makes good on its case. Pragmatics has been underplayed and anybody who thinks about language origins should read and study the book. If the book were not so danged expensive, I would even urge you to buy a copy. (By the way, I've mentioned Scott-Phillips before—see Reality Blogging—and I remember him as a promising fellow at the Barcelona Evolang conference of 2008.)
One of the running quarrels on this blog concerns the age of language. I think it began about 1.8 million years ago, while a great many linguists and archaeologists date it to, at most, 0.1 million years ago. The main argument in favor of what I may call a recent origin is archaeological: symbolic artifacts start showing up late in the record of human activity. That's why I was pleased when one of my regular blog readers, Vic Sarjoo, drew my attention to a letter in the latest Nature reporting the discovery of an engraving made on a shell 0.5 million years ago. The shell was likely drilled open, the mussel inside eaten, and then the shell preserved as a tool. It was also decorated with a zig-zag pattern, or maybe the etching had some practical purpose and was not a decoration at all. Either way, it contradicts many assumptions about human evolution. The Homo group has been marking up things for a very long time. I look forward to the time we find some Oldowan tool scratched with the news, Zog made this.
A zig-zag scratched shell does not tell us much about language, but it does remind us that the arguments that language is even younger than the Homo sapiens species are based on dubious evidence.
There's an interesting paper titled The Latent Structure of Dictionaries floating around the Internet. Written by a Canadian-led team, it forces clearer thinking about words.
Dictionaries rest on a well-known paradox. They use words to define words. So I might look up the word justice and read "the quality of being just; fairness." Ok. So I look up fairness and find "free from favoritism, self-interest, or preference in judgment." Oh, boy. I could look up all those words too, but a black hole emerges before me. The task stretches out to infinity.
Thanks to the computer, however, the endless task can be accomplished. There are, after all, a finite number of words in the dictionary. Call the full set of words defined in a dictionary D. Not all of these words are used in defining other words. For example, the dictionary defines the word cockroach but does not (I'm guessing here) use the word in any of its vast text of definitions. We can symbolize these unused words by the letter C and remove all of them from D. That process leaves us with a shorter list, call it D1. (D1 = D − C)
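The subtraction is easy to make concrete. Here is a minimal sketch, assuming a toy dictionary represented as a Python dict that maps each defined word to its definition text (the entries are invented for illustration):

```python
import re

# A toy dictionary: each defined word maps to its definition text.
# (Invented entries, just to make the subtraction concrete.)
toy = {
    "justice":    "the quality of being just; fairness",
    "just":       "according with justice and fairness",
    "fairness":   "freedom from favoritism in judgment",
    "favoritism": "unfair preference in judgment",
    "preference": "a judgment favoring one thing over another",
    "judgment":   "the act of judging",
    "cockroach":  "a scavenging insect",   # defined, never used in a definition
}

def tokens(text):
    """Crude tokenizer: lowercase alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

# Every word that appears somewhere in the text of the definitions.
used = set()
for definition in toy.values():
    used |= tokens(definition)

D = set(toy)    # all defined words
C = D - used    # defined words never used in any definition
D1 = D - C      # the shorter list: D1 = D - C

print(sorted(C))    # ['cockroach']
print(sorted(D1))
```

The same subtraction can then be applied to D1, and since the dictionary is finite, the shrinking has to stop somewhere.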