The latest issue of the Journal of Human Evolution reports finding butchered remains from 1.8 million years ago. The site, in modern Algeria, includes bones that were scarred by tool cuts and tools that show signs of wear and tear.
I'm taking this find as supporting archaeological evidence for collaborative behavior amongst Homo pre-sapiens.
One of the pleasures of this blog has been the occasional encounter with an important book. The top four were probably:
To this distinguished list I want to add Louder than Words: the new science of how the mind makes meaning by Benjamin K. Bergen. It is as important a book as I have found on this blog. Maybe it is even one of the most important I've read in a lifetime of reading about language. A statement like that, of course, tips my hand. This book vindicates a great deal that I have argued over the years on this blog: words work by piloting attention; language is a tool for sharing perceptions; to start speaking our ancestors did not have to get smarter, just more social.
It is not that Bergen says these things. He is not concerned with why we speak or what it accomplishes. How we manage it is his subject. But that's what makes the book so pleasing. It is a report from an experimental scientist full of data that happens to agree (more or less) with my own lines of inquiry.
Bergen is an associate professor of cognitive science at the University of California, San Diego, and a protégé of George Lakoff, a cognitive linguist at UC Berkeley. Lakoff is co-author of a fine book that appeared in 1980, Metaphors We Live By. I read the book more than 30 years ago, but as I recall its thesis is that metaphors are critical to understanding any abstract statement. Metaphors are more than rhetorical devices. Without them we would only be able to speak about purely concrete subjects.
It might seem a little surprising, therefore, that most of Bergen's new book is about how we understand purely concrete sentences like The polar bear hid his nose. The approach turns out to have good sense behind it. It provides a very clear foundation for the assertion that concrete statements are understood by simulating the perceptions and motions they describe. Metaphors build on that perceptual layer. Abstract statements then build on the metaphorical layer. Bergen discusses all three types of statements but his evidence is richest when he reports his many experiments on concrete sentences.
Lakoff and Bergen have a common enemy, the theory of Mentalese, most thoroughly explored by Steven Pinker but assumed by many others as well.
Mentalese defines language as a system of symbols organized by syntactic rules. Its great strength is that this definition exactly matches computer languages. To program a machine to use a language, all you need is a dictionary and a grammar. The famous Chomsky hierarchy of languages seems to take it for granted that natural languages (human languages) fall somewhere among this listing of types of machine languages.
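A toy sketch in Python (my own illustration, not anything from Bergen or Pinker; the lexicon and the single grammar rule are invented) shows how little machinery the dictionary-plus-grammar picture requires:

```python
import re

# Toy illustration of the "mentalese" view: a dictionary of symbols
# plus a syntactic rule. The six-word lexicon and the one crude rule
# are invented for this example; no real parser is this small.
LEXICON = {
    "the": "DET", "polar": "ADJ", "bear": "NOUN",
    "hid": "VERB", "its": "DET", "nose": "NOUN",
}

# One rule: a sentence is NP VERB NP, where NP = DET ADJ* NOUN.
RULE = re.compile(r"^DET( ADJ)* NOUN VERB DET( ADJ)* NOUN$")

def parse(sentence):
    """Tag each word from the dictionary, then test the tags against the rule."""
    tags = [LEXICON[w] for w in sentence.lower().split()]
    return bool(RULE.match(" ".join(tags)))
```

Nothing in this lookup ever touches perception; symbols relate only to other symbols, which is exactly the feature of the mentalese account that Bergen's simulation theory rejects.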
Bergen's thesis rejects mentalese. In place of a dictionary, it places perceptions and motor activity. Take the sentence The polar bear hid its nose. To interpret it the mentalese way, a machine or brain looks up the various words in the dictionary and uses syntactical rules to divine the abstract relationships between the words. Bergen says no: as we read the sentence we activate the very same neurons in the brain that we use when we see a polar bear or a picture of a polar bear. For the nose, we activate our nose-perceiving neurons. For the verb to hide, we activate the motor neurons used to hide a nose (by putting a paw/hand in front of the nose).
There are three reasons (at least) for preferring Bergen's explanation over the mentalese account. First, brain imaging confirms that brains do work the way Bergen says they do. I won't linger over this point as I have made it myself many times on this blog.
Second, Bergen's account goes a long way toward explaining how we understand all the unstated material in a sentence. Polar bears are white and operate against a white background, but their noses are black, so they might want to hide their noses to achieve even greater invisibility. We may never have thought about that point before, but once it is made we can see how it works. There is no way to recover the role of whiteness by the mentalese method of turning to a dictionary, unless the information is specifically included in the dictionary. Even if we grant that the dictionary does include all this color detail, we see the problem. It would be nice to understand how we can get more out of language than the words seem to say rather than merely explain it all away with the guess that we already knew what was implied. Language would be much less useful if it could not bring us news.
Third, Bergen has performed hundreds of experiments to support his point. The book reports on over two hundred experiments by Bergen, his graduate students, and like-minded investigators. The method gives the book a certain textbook tedium, but the effect is to break down all resistance until the reader says okay, okay, I believe you.
The experiments fall into several general categories. There are the ones where reading a word or sentence disrupts your ability to interpret a picture. It sounds crazy, but reading the word juggle slows your ability to interpret a stick drawing of a man checking his watch. Why should that be? One explanation might be that when you read the word juggle you activate motor neurons in a way that contradicts the way you would use your hands if you were to check your watch. That explanation might sound far-fetched if the experiment were not one of many in which such word/visualization interferences occur. There are also the opposite kinds of experiment, where words and images support each other. Then there are the metaphorical effects, like rating a politician as more serious if you are holding something heavy in your hand.
The point of all these experiments is to see how far Bergen can push the idea that meaning is a simulation of perception and motor activity carried out in the brain. He does a good job of it, although, as is usual in the world of hypothesis and experiment, he makes a firmer case against his rival than for his own theory.
Mentalese is on the ropes. It cannot account for these experiments. The best it seems to manage is 'so what?' All this interference, support, and metaphorical effect does not prove that simulation determines meaning. To which the retort is no, but it sure says something is going on that Pinker, Chomsky, et al. did not expect.
Perhaps Bergen's biggest problem is that putting meaning on perception—something this blog has done for years—still doesn't answer the question of the mechanics of meaning. Perception is as big a mystery as language. We see a polar bear and know it is a polar bear; how does that happen? I've never had to worry about that issue on this blog because, however it works, the answer is older than human origins. As waiters say, "It's not my table." But someday the puzzle has to be faced.
As far as this blog's direct concern—language origins—goes, the implications are important. The most commonly published date for language origins is 100 to 50 thousand years ago. That is said despite the fact that humans have all sorts of physical adaptations to speech that cannot possibly be so recent. It also ignores the evidence that genus Homo has been cooperative for millions of years and that until very recent times, the Homo brain has for some reason or other been getting bigger and bigger. So what's the evidence that language is a recent invention?
If you accept the mentalese definition of language as a system of structured symbols, it is reasonable to think that the species did not use language until it started using symbols. When was that? Some evidence points to a few hundred thousand years ago, but not everybody accepts that. It is hard to deny the archaeological evidence that by 50,000 years ago symbols were to be found.
But if the mentalese definition is wrong, then there is no reason to stick with symbols as language's sine qua non. Concrete remarks could be very old indeed, with metaphors and abstractions coming later. So, of course, I welcome this account of Dr. Bergen's many experiments.
I will follow up this review with further posts.
How can you punch up this sentence: I have submitted my application for a job with the EPA?
(a) I have happily submitted my application for a job with the EPA;
(b) I have submitted my request for employment at the EPA;
(c) I have applied for a job with the EPA;
(d) I submitted my application for a job with the EPA.
Most editors would probably favor (c), which reduces a verb phrase to a simple verb. It also simplifies the reader's task. Compare the basic functional structure of I have submitted my application with I have applied for a job. The first directs the reader's attention to a trivial detail, the application, while the second points toward the heart of the matter, the job.
Those interested in improving their writing skills should master this little trick: transform the noun in a verb phrase into the verb: e.g., give a demonstration → demonstrate (or show), be in violation of → violate/break/disobey.
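The trick is mechanical enough to sketch in code (a toy illustration of my own; the phrase table holds only the examples from the text plus one more invented entry):

```python
# Toy sketch of the noun-to-verb trick: map each wordy verb phrase
# to its simple verb. The table is illustrative, not a complete list.
NOMINALIZATIONS = {
    "give a demonstration of": "demonstrate",
    "be in violation of": "violate",
    "submit an application for": "apply for",
}

def punch_up(sentence):
    """Replace each nominalized verb phrase with its simple verb."""
    for phrase, verb in NOMINALIZATIONS.items():
        sentence = sentence.replace(phrase, verb)
    return sentence

# punch_up("I submit an application for a job with the EPA")
# -> "I apply for a job with the EPA"
```

A real editor's version would need to handle tense and inflection, which a bare string replacement cannot; the table only captures the underlying idea of promoting the buried noun to the verb slot.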
I've taken these examples from a new book on verbs, Vex, Hex, Smash, Smooch: let verbs power your writing by Constance Hale. Is there a whole book's worth of reading about verbs? I'm willing to try, for I am a lover of verbs. I've never forgotten a scene in the play The Madness of George III, coming to attention when the king made this royal pronouncement: "I am the king. I tell. I am not told. I am the verb, sir, not the object." When I saw that on stage, I grinned.
The elephant is a young male raised alone in a Korean zoo. He has taught himself a way to stick his trunk in his mouth and produce a variety of recognizable words, recognizable to Korean speakers that is.
The New York Times has a story whose main purpose seems to be to explain away the effect. Move along folks, nothing to see here.
A number of animals (whales and a variety of birds, for example) can learn to produce words, so it is not incredible that another species has been added to the list. Many other animals can learn to respond to words.
This blog has said many times that we have ample evidence that other animals are smart enough to at least get started speaking words and phrases, so we need some other explanation than intelligence to learn why—as a rule—only humans talk.
A second recurring theme on this blog is that the biggest pressure for language is social. We are much more communal than our nearest ape relatives, so it is not surprising that we have a tool that cements each human community together. Nor is it surprising that the animals that start mimicking human speech are also highly social creatures who have been robbed of their society.
Zoos have much to answer for. As a boy who often visited the zoo in Washington, DC, I became quite familiar with a number of bears that had been driven insane by their captivity and would perform constant, repetitive actions. A young elephant raised apart from all other elephants is probably as unhappy and bored as a human child would be if it were raised in solitary confinement, and it would probably try to make contact with whatever guards were available.
The Korean elephant offers somebody a real chance to see how far language mimicry can be taken. The elephant has already done the seemingly impossible of figuring out a method of making recognizable words. It would be quite something if a person came along who (a) loves elephants and (b) is willing to work with this one to learn how deeply the contact can go. There will probably never be a more willing pupil.
The story of evolution, human origins, and language origins just took a new turn onto a different road. Mendelian genetics has now been shown to be incomplete and awaits an Einstein to clarify the situation. According to the old genetics we inherit genes that determine the traits that will be passed on. The synthesis of Mendel and Darwin described the selection of genes/traits. Nature has just published a set of six papers reporting that we inherit more than genes.
It has been apparent for some time now that some genes control other genes, but exploration of the DNA molecule also turned up what has been called "junk DNA": strings of molecular information that do not produce proteins. Most shocking was the observation that most of the DNA molecule was junk; genes were the rare part. Now it turns out that the junk is not junk at all—it regulates, coordinates, and manipulates the genetic minority. We knew there was a little bit of such regulating, but this turns out to be a Pacific Ocean of control. Wow!
Today is one of those days like the one when Balboa stood silent on a peak in Darien. We can look at one another with wild surmise, but there is plenty of exploration to be done. The modern synthesis of Darwin and Mendel will have to be revised because Mendelian genetics leaves out too much. Human origins need to be reconsidered because we need to understand better how changes to the jDNA (as, for the moment, I'm calling the newly important "junk" DNA) change the species. What is the role of jDNA in evolution? Does it speed things up or make the process even slower? I can imagine a lot of very specific questions about what it means for the origin of language, but until we have mapped more of this new ocean they cannot be well formulated. Meanwhile, I'm looking around for news about what's out there.
"What I am sure of is that this is the science for this century," [Ewan] Birney [of the European Bioinformatics Institute] said [to the Washington Post]. "In this century we will be working out how humans are made from this instruction manual."
I see Discover has a report on Bart de Boer's work on vocal tracts in Australopithecus. It would be surprising to learn that "Lucy" sounded like something other than a female ape, but it needs to be checked out. De Boer is the number one authority on vocal tract development.
Sorry to be a couple of days late with this, but I just noticed an obituary for George Miller, whose work I have admired for many decades. I tend to think of him as a founder of psycholinguistics (along with Chomsky), but he did many more things than that. One of his main techniques was to point a promising graduate student or two in a promising direction and let them become the founders of a new subdivision of cognitive psychology.
Steven Pinker has posted an important essay on group selection. You can gather its thesis from the title, “The False Allure of Group Selection.” Since I am on record saying that group selection (really, multilevel selection) was critical to the evolution of language, I read the essay with strong interest. Let me say right off that I was astonished to find that the essay makes no remarks about the evolution of language. Pinker is a famous proponent of language’s evolutionary origins and biological basis, but he says nothing of group selection and language. Instead he criticizes ideas that group selection explains religion, culture, and nations. I am skeptical of those claims too. Pinker is a fine writer and I got several chuckles out of his examination of various shallow appeals to group selection. Was I laughing at my own doom?
What is in Dispute
My argument is not that Pinker is wrong but that he is incomplete. We both agree that individuals compete. I go on to say that there are multiple levels of selection. Populations compete, and selection occurs at this level as well as the individual level.
Pinker rejects my position by saying, “I don't think it makes sense to conceive of groups of organisms (in particular, human societies) as sitting at the top of a fractal hierarchy with genes at the bottom, with natural selection applying to each level in parallel ways.”
This statement makes clear to me what Pinker does not get about where I stand. I don’t put genes “at the bottom” of the “fractal hierarchy.” So let’s sort out the basic points about where I stand:
So Pinker has it wrong when he says multilevel selection puts genes at the bottom of the hierarchy. Genes are at every level of the hierarchy, but they are never alone.
Some Peculiarities of Speech
When I began this blog in 2006 I assumed that selection at the individual level would support the whole story. I changed my mind, first because of my concentration on language. Later, a more communal view of Homo led me to notice a series of biological universals that don’t seem explicable in terms of individual selection, kinship selection, or mutual backscratching (a.k.a. reciprocal altruism).
Language functions differently from other signal systems as shown by the speech triangle: two or more individuals pay joint attention to a shared topic.
A common example of the triangle in action is a teaching session. The teacher speaks about a topic; the student listens and learns. Teachers can be kin, but around the world they are not limited to them. Teachers can be expecting a mutual payback from the student, but around the world there are people who teach because they enjoy passing on knowledge. Teaching is essential to maintaining a group’s intellectual capital, but it provides no visible reproductive advantage to the teacher.
The speech triangle is an equalizer. I’m walking along the street and meet a colleague. I volunteer that a store three blocks away has a great deal on watermelons. Thanks, says the colleague who now has some information that was my secret. Pinker would try to explain this in terms of mutual backscratching, but I’m not so sure. Most other animals are not so deeply into sharing information.
We know that captive chimpanzees can learn to use words and phrases, but in the wild they never tell one another anything. They communicate to control. This kind of discretion is easy to explain in terms of individual selection. A chimpanzee who knows where there will be some ripe fruit has an advantage over its fellows. A chimpanzee who blabs his news has given up an advantage. The fitness score of the chimpanzee who keeps secrets is almost certainly higher than the blabbermouth's score. Thus, even though groups might benefit from language, it is not going to evolve among chimpanzees. This kind of reasoning makes it easy to explain why language never evolved in other species, and hard to explain why humans have such a hard time keeping secrets.
Jean-Louis Dessalles argues that a reputation for wisdom, helpfulness, and trustworthiness is the factor that enabled language to evolve. People who never share their knowledge gain poor reputations, either for not knowing anything or for not caring about other people. But reputation as a force is peculiar in itself, a social factor that plays a very limited role throughout most of the animal kingdom. Caring what people think about you is a means of social control that is in itself hard to explain outside group-level selection.
Another peculiarity of language is how much more powerful it is than anything else in the animal kingdom. Since captive apes can use words, we can assume that whenever our ancestors began sharing information they were intellectually ready. But how did we get so much smarter than the other apes?
Suppose you have a society of phrase users and a mutation enables a speaker to utter something more sophisticated, perhaps a true sentence. Why would that mutation spread to other individuals? What advantage does it give? Is there some pressure that pushes individuals to speak in a more complex manner? I’m open to suggestions, but it is much easier to imagine group benefits than individual ones. Groups that can share more complex ideas than their rivals are able to plan and cooperate at a deeper level than groups that converse only at the ape level.
Group competition could push us well beyond the intellectual powers of a chimpanzee. At the individual level there is not much advantage being far superior to the others. The race may go to the swiftest, but there is no point in running twice as fast as the others. But cooperative groups can gain an advantage and replace less cooperative groups. Then some group in that set of winners becomes even more cooperative and outcompetes the more old-fashioned groups.
Other Peculiarities of Human Communities
Evidence of group selection is not limited to language. Let’s notice some other peculiarities of humans that are hard to explain via individual selection but easy to see as group selection.
How Are Such Things Possible?
I am not doubting evolution, gene theory, or survival of the fittest. Nor do I doubt the importance of reproductive success. Reproductive success varies greatly within a population, and evolutionary theory holds that the more successful a carrier or gene is at reproducing itself, the more “fit” it is. Thus, a carrier/gene that produces, on average, 3 offspring is more fit than one that averages 2 offspring. The average fitness (w) is found by taking a population (n) and dividing it into the number of next-generation descendants (n’); that leaves us with this equation:
w = n’/ n
Suppose, for example, that a population has 800 individuals who can run at 30 miles per hour for 10 minutes and 1000 members who can maintain 35 miles per hour for 10 minutes. Then in the next generation we find 850 30 mph runners and 1100 35 mph runners. The calculations are 850/800 and 1100/1000. The results: w30=1.0625; w35=1.1. The faster runners have the higher fitness score, and eventually their speed seems likely to predominate throughout the whole population.
In the example above, n is the number of individuals, but it can just as easily be the number of groups. For example, suppose a population has 25 groups, with an average of 50 individuals each, that speak true sentences with a subject, object, and verb. The population also has 123 groups of similar size that speak only in phrases. A generation later there are 30 groups speaking sentences and 125 groups speaking phrases. The calculations report: wsentences=1.2 while wphrases=1.016. The math tells the story. Groups speaking in sentences should ultimately replace the larger number of groups limited to phrases.
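Both worked examples reduce to the same one-line calculation. A quick sketch, using only the numbers from the text:

```python
def fitness(descendants, population):
    """Average fitness w = n'/n: next-generation count divided by current count."""
    return descendants / population

# Individual-level example: runners at two speeds.
w30 = fitness(850, 800)    # 1.0625
w35 = fitness(1100, 1000)  # 1.1

# Group-level example: the same formula with groups as the carrier n.
w_sentences = fitness(30, 25)   # 1.2
w_phrases = fitness(125, 123)   # about 1.016
```

The formula is indifferent to what n counts, which is the point: nothing in the arithmetic privileges individuals over groups.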
Since, mathematically speaking, the carrier n can be a group or an individual, multilevel-selection deniers are forced to argue that as a matter of practical reality all selection takes place at the individual level. I say no, n can be a group as easily as it can be an individual. And remember that whatever n counts, it represents a carrier and a gene simultaneously.
Pinker replies: “Granted, it is often convenient to speak about selection at the level of individuals, because it’s the fate of individuals (and their kin) in the world of cause and effect which determines the fate of their genes. Nonetheless, it’s the genes themselves that are replicated over generations and are thus the targets of selection and the ultimate beneficiaries of adaptations.”
The first sentence—beginning with Granted—suggests that Pinker sees that the gene and the individual carrier are two sides of the same coin, but he hasn’t digested the fact. That leads him to a vapid conclusion which merely asserts that after all genes are what selection is all about.
His phrase, “targets of selection” strikes me as a truly empty metaphor. Selection is not an agent with purposes (or targets); it is an outcome. Pinker’s phrase is an effort to say the genes are ultimately more important than the carrier, but he has no reasoned justification for favoring one side of the coin over the other.
His second phrase, “ultimate beneficiaries of adaptations” refers to genes alone. I suppose I could say, no, the carriers are the ultimate beneficiaries. After all, the carriers are what actually taste life, but that’s just a matter of viewpoint. Both carriers and genes “benefit” from the adaptations and needless quarrels result from favoring one side or the other.
The philosopher Daniel Dennett sure resembles Charles Darwin.
The Atlantic magazine's website has a brief piece by Daniel Dennett on the relation between Alan Turing's notion of a computer and Darwin's theory of natural selection. The basic connection is that Darwin defined a mindless process that produces complex life forms, and Turing defined a mindless process whereby machines can solve any problem that has a computable solution.
Dennett is probably the most important philosopher arguing that the mind-body distinction is false and that mind can indeed be fully explained in terms of material mechanisms. It is an interesting issue for this blog because of the relation between mind and language. Can a mindless machine compute any and all sentences in a language?
We know as a matter of fact that the other animals of the earth cannot generate sentences, so, following Dennett, humans must have evolved new computational abilities to support language. Did we do that?
In my account of speech origins I propose that while our ape brains were adequate to get us speaking words and phrases, they were not enough to get us speaking true sentences or speaking about subjective processes. A true sentence consists of two focal points of attention united by a verb. An example is John Wilkes Booth shot Abraham Lincoln. To understand this sentence you have to focus on both Booth and Lincoln at the same time, normally an impossibility, but the two men are held together by the verb shot. We can imagine the scene with Booth, Lincoln, and the gun together, and we pay attention to all at once because we understand it as a unitary event. Apes in their use of sign language have shown no hint of being able to create true sentences like this.
In my account, speech was originally used to direct attention to the concrete world, but eventually we developed the ability to speak about subjective things too. For example, Jack wrestled with Jill's idea. This is a true sentence with two focal points—Jack and Jill's idea. But the unifying verb—wrestled—is a metaphor. Some other verb might be possible—e.g., struggled, grappled… but they are all metaphors. No concrete verb gets at whatever it is Jack is doing. We can use computer verbs like tried to process but that's a metaphor too. I believe the ability to use metaphorical verbs—and perhaps metaphors in general—had to evolve to produce modern language.
The implication of Dennett's essay is that we must have evolved some Turing machines which could compute true sentences and metaphors. Yet, I confess that I doubt that such was the case.
Most people assume that to speak you have to understand what you are saying, but Dennett's point is that understanding is not necessary if you have a sufficiently well programmed computer. Take, for example, machines that play chess. I am old enough to have seen that whole story develop. In the 1950s the mastery of chess was often cited as a task for humans alone. Efforts to program machines to play chess were so limited that "toy" versions of the game with a few squares and pieces were the best most programmers could manage.
Back in the 1960s there was a big argument over whether the best approach was to mimic human thought or to rely on the "brute force" of a computer's great speed and data storage. I seem to recall a story in Scientific American from those days that said mimicking human thought was the more promising approach.
By the early 1980s chess playing machines were available and they used brute force. I bought one. It quickly became apparent that the only way a player of my feeble skills could beat the machine was to have a clear strategy in mind. A strategy was some long-range, general goal whose details I could not specify but which I could imagine sharply enough to guide my judgment about positions. With a strategy I could choose moves and eventually see my way to victory.
The other way to play chess is to use tactics, basically finding a set of specific moves that result in a stronger position. Chess machines use tactics. They follow moves and give the resulting outcome a score. They pick the move that leads to the highest score. I can only see a couple of moves ahead and if I relied on tactics alone, I would fail. The machine could evaluate more moves than I could, but with a strategy, if I stayed alert, I could win. I would make a move to support my plan and the computer would respond with an irrelevant move. Eventually, my strategy would overwhelm the machine. But by the early 1990s the story was different.
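The tactical procedure just described, scoring the outcome of each legal move and picking the highest, can be sketched in a few lines of Python (a toy of my own; the moves and scores are invented and no chess rules are modeled):

```python
def best_move(position, legal_moves, apply_move, evaluate):
    """Brute-force tactics: score the position reached by each legal
    move and return the move with the highest score."""
    return max(legal_moves, key=lambda m: evaluate(apply_move(position, m)))

# Invented positions and scores for illustration only:
SCORES = {"a": 3, "b": 7, "c": 5}
move = best_move(
    position=None,
    legal_moves=["a", "b", "c"],
    apply_move=lambda pos, m: m,        # the "resulting position" is just the move
    evaluate=lambda pos: SCORES[pos],   # static score for each outcome
)
# move == "b"
```

Real engines apply this scoring recursively, many moves deep, but the principle is the same: no strategy, only scored lookahead.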
The best chess applications by that time were able to look deeply enough to give even some grandmasters a rough go. For me it was hopeless. The better players would hang on until they reached the end game. Chess end games—when each side is reduced to a couple of pieces—are especially strategic. Typically, players try to reach a situation, then aim for another situation, and then perhaps another. Machines still had a hard time with that kind of purposeful behavior.
In May 1997, however, an IBM machine called Deep Blue defeated Garry Kasparov, winning their six-game match 3½–2½. Kasparov was the greatest player of the time and possibly of all time. Deep Blue was able to compute so many steps ahead that it turned Kasparov's strategies into mere tactics, and it had stored vast tables of end-game positions and the moves to make. Without even knowing that it was playing chess or what a pawn is, Deep Blue was able to outmaneuver a man who understood the positions more profoundly than any other person alive.
Can something similar be accomplished using language? Can a mindless machine produce literature? The chess story reminds us that what many had once declared impossible can be done, but to do the impossible a machine must find a way to simulate purposeful behavior by generating a long string of computational steps. Can the production of meaningful sentences be reduced to blind steps?
Language evolves in two ways—one is like natural selection, lacking in purpose. Phonetic changes, for example, don't matter per se. A coin might be called a penny or a benny or a venny. The important thing is that sounds distinguish words enough that listeners can catch which coin is meant. To the extent that language can change without changing meaning, language can evolve mindlessly.
But some changes do result from purposeful changes in meaning. I have noticed that a new verb has appeared in the past month. Americans can now say things like Governor Romney has etch-a-sketched his position on immigration. (*For non-Americans, I explain the etymology of etch-a-sketch beneath this post.) Until quite recently Etch-A-Sketch was a proper noun, but now I have heard Democratic commentators use it as a verb, a metaphorical synonym for opportunistic change.
Suppose we had a computer that was fully up to date on the American form of English on May 1, 2012 and then in June was confronted with the etch-a-sketch verb. The computer's dictionary would identify Etch-A-Sketch as a proper noun with an –ed suffix and conclude that this is a noun being used as a verb. But what does it mean? Is there a step by step process going from the definition of the word as a noun to its use as a verb?
Sometimes there might be, when the verb simply means to use the noun, as in He Photoshopped his picture. But this new use of etch-a-sketch is metaphorical. I understand it by imagining the Etch-A-Sketch shaking and covering up an old image, ready to display something else. I also know the context of where the word came from and catch the implication of insincere opportunism as well.
Understanding a sentence is like watching a chess game and grasping the player's strategy, something chess-playing machines still cannot do. The observer sees a move and makes a leap, grasping the purpose supporting the move. In understanding a metaphorical sentence the listener must leap to the relevant references and see the purpose that justifies them.
Can a machine do this in some step-by-step manner? I don't like to say never, but I don't know what those steps would be.
Is the brain a computer?
Dennett takes it for granted that the brain is a computer, and although many agree with him the assumption is not proven. And if we believe that human chess playing involves strategic thinking, there is strong evidence that we think in a manner unavailable to chess-playing machines. The fact that machines can beat us at the game is no more evidence that our brains are inferior computers than the fact that automobiles can outrace us proves that our bodies are powered by inferior internal-combustion engines.
The assumption that the brain is a computer relies on our understanding of matter. Back when Galileo was laying down the rules of scientific thinking, he said science must stick to primary qualities, i.e., measurable qualities. Sensations, judgments, and purposes are secondary qualities and not to be included in scientific explanations of phenomena. That's why Lamarck's account of evolution was dismissed by people like Lyell and Darwin as unscientific: it appealed to a secondary quality, purpose. Darwin found the way to explain evolution without appealing to purpose.
But science's success does not mean that secondary qualities do not exist. It would be hard to persuade people that they don't have sensations, don't make judgments, and don't have purposes. The mind-body distinction is widely debated by philosophers and psychologists, with opponents of the distinction confident that they are attacking some form of spiritualism. Furthermore, the details of the distinction are vague. If you want to replicate the mind in a mechanical body, you are unsure how to prove you've done it beyond Turing's suggestion of seeing if a person can be fooled by a machine.
The primary-secondary quality distinction, however, has a more scientifically acceptable provenance, and makes for a clearer challenge. To cross the primary-secondary quality divide, a mindless machine would have to follow a series of steps to get from primary-based knowledge (scientific, measurable knowledge) to secondary-based knowledge (humanistic, metaphorical knowledge).
One challenge of this test is to prove the presence of secondary knowledge. I happen to believe that elephants have sensations, make judgments, and behave purposefully, but I cannot prove it. Elephants may be mindless, as Descartes said they were. But Descartes also said that the presence of language proves the existence of the human mind. So let's look at language. Can a machine that can detect only primary qualities compute sentences that express secondary qualities?
Language works at the conscious level, forcing things into our attention, and appealing routinely to secondary qualities: Wow, she's a looker; Try this. It tastes great; I'm going to write a blog so I can understand the details of the subject; I think 'Casablanca' is better than 'Citizen Kane.' Are sentences like these computable? Come to think of it, the etch-a-sketch metaphor is also based on a secondary quality. The opportunism it implies is subjective, like beauty or grandeur. So computing a sentence about Romney etch-a-sketching his way to a position requires crossing the primary-secondary divide.
Even I am enough of a programmer that I could have a computer look into a database and pull out one of these sentences. But we know that is not how we create our own secondary-quality sentences. The etch-a-sketch sentence, for example, reflects a novel use of a noun as a verb. It cannot have been pre-stored in our brains. We either have some way of computing our way across the divide, or our brains are not Turing machines.
Chess machines cross the strategy-tactics divide by extending their analysis so many steps that the difference between strategy and tactics disappears. The difference between primary and secondary qualities seems to be more categorical, but in some cases that difference makes crossing the divide easier. You just need a simple table to convert wavelength (primary quality) into colors (secondary quality). Other secondary qualities—like favorite color, contrasting colors, and an illustrator's palette—have no counterpart in the world of primary qualities.
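The "simple table" really is simple. Here is a sketch (the band boundaries are the conventional approximate values; the table itself is my illustration, not anything from a standard library) of converting a measured wavelength into a color name:

```python
# Approximate visible-spectrum bands, in nanometers: a lookup table from a
# primary quality (wavelength) to a secondary quality (a color name).
BANDS = [(380, 450, "violet"), (450, 495, "blue"), (495, 570, "green"),
         (570, 590, "yellow"), (590, 620, "orange"), (620, 750, "red")]

def color_name(nm):
    """Return a color name for a wavelength in nanometers, if visible."""
    for lo, hi, name in BANDS:
        if lo <= nm < hi:
            return name
    return "not visible"

print(color_name(532))  # the wavelength of a typical green laser pointer
```

Notice that no analogous table exists for "my favorite color" or "a pleasing palette": those judgments have no single measurable quantity to look up.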
Another difference between chess and language is that chess is a complete system. The rules refer to moves made by defined pieces on a prescribed board. Language is not complete. At any moment a speaker might say something like His face turned the color of Grandma's cherry pie. Analogies can come from anywhere. Yet they cannot come from everywhere. His face turned the color of Grandma's first novel will not do. Computing all the sentences of a language, and only the sentences of a language, may be impossible.
*The Etch-A-Sketch is a popular drawing toy. Its silver screen lets a user draw images by turning two knobs. To produce a different drawing you shake the toy, and the silver powder erases the old image.
Earlier this year, when Governor Romney was campaigning against other Republicans to get the party nomination for president, he advertised himself as a "severe" conservative. At some point a reporter asked one of Romney's aides how they planned to appeal to less conservative voters when the whole nation was voting. The aide replied that presenting a less conservative Romney would be as easy as changing a picture on an Etch-A-Sketch. There was a hullabaloo about Romney's apparent hypocrisy and the cynicism of his campaign that the image implied.
A bit later spokespeople for the Obama campaign began using a new verb: to etch-a-sketch.