Who Is the Master Who Makes Colorless Green Ideas Sleep Furiously?

[Originally posted March 9, 2008. This re-post has been lightly copy-edited]

“Language of thought” theory (LOT), originally developed by Jerry Fodor in the 1970s and now championed most famously by Steven Pinker in The Language Instinct and The Stuff of Thought, presumes a pre-linguistic conceptual language, sometimes called mentalese, upon which our conscious, tangible symbolic language is based. This language of thought is imagined to be innate, and thus a universal substrate for all human language, from Algonquin to Finno-Ugric to Brooklynese.

The LOT hypothesis is an outgrowth of Noam Chomsky’s nativist theory of a “universal grammar,” which was in its turn a response to the reigning behaviorist paradigm of the day. Behaviorism never fully recovered from Chomsky’s critique (though it has found new expression in the speculative protoscience of memetics), and we’re all better off for this. But beyond this, nativism has not proved very fruitful for our understanding of cognition, serving mostly (through no fault of Chomsky’s) to fortify the sociobiological argument that our cultural norms reflect hard-wired biological determinants, which emerged to help us manage the challenges of our paleolithic beginnings.

There are a number of logical problems with the LOT hypothesis, perhaps the most obvious being that words, unlike numbers, are not static and precise through time, as they would need to be if they were subject to unconscious translation into and out of mentalese. The number represented by the numeral 2, for example, can be counted on always to be the same. But what is indicated by the modern English words love, doctor, faith, fish, holiday, circus, atom, fairy, wealth and savage, to name just a few, has varied wildly in just the few hundred years we’ve been using this form of the language. If there were some kind of inborn uber-language that determined the meanings expressed in our own spoken languages, it’s difficult to see how it could permit this kind of semantic drift.

The LOT model is built on the metaphor of computer processing, so it is instructive to ask how well a computer would function if the same terms meant different things across successive installs of a piece of software. It seems plausible to many of us living now to imagine that human language rests on a logical foundation just like a computer program: after all, we can perform logical calculations, just as a computer can, and most of our expressions appear to be logically grounded. But the question to ask is not what we can do now, but what humans or humanoids could and did do at the dawn of language, at least 50,000 years ago and perhaps much earlier. The rudiments of formal logic didn’t appear on the scene until less than 3,000 years ago, with the Greeks, and weren’t developed into a comprehensive formal system until the late 19th and early 20th centuries. This would be a strange course of events if formal logic were built into the structure of our cognition from the start, which is what LOT proposes.
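To make the software metaphor concrete, here is a minimal toy sketch in Python. Nothing in it comes from Fodor or Pinker; the lexicons and word senses are invented for illustration. The point is simply that a program relying on a fixed token-to-meaning translation silently changes behavior the moment a token’s meaning drifts between “installs”:

```python
# A toy illustration of the software metaphor above. The lexicons and
# senses here are invented for the example, not drawn from any actual
# LOT model.

# "Install 1": the token "fish" is bound to one fixed concept.
LEXICON_V1 = {"fish": "any creature that lives in the water"}  # older, broader sense

# "Install 2": the same token is now bound to a narrower concept.
LEXICON_V2 = {"fish": "cold-blooded aquatic vertebrate with gills"}

def translate(token: str, lexicon: dict) -> str:
    """Mimic LOT's unconscious translation: surface token -> fixed inner concept."""
    return lexicon[token]

# Any "program" written against V1 silently means something else under V2:
print(translate("fish", LEXICON_V1))  # broad sense (a whale would qualify)
print(translate("fish", LEXICON_V2))  # narrow modern sense (a whale would not)
```

If words behaved like stable program symbols, as unconscious translation into mentalese would require, this kind of silent re-binding could not happen; the historical record of semantic drift suggests it happens constantly.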

As best we can tell, for the first tens of thousands of years of our existence human cognition took the form of what we now derisively call “magical thinking,” or myth. This is the environment into which language was originally born and in which it developed. There is little in the linguistic and ethnographic data to support the idea of a rational thought process underlying pre-modern language, and a great deal to suggest something very different.

Ernst Cassirer notes that the primacy of mythological thinking presents a significant problem for the “realist” view. The common line is that myths were erroneous explanations of objects and phenomena, given the lack of adequate tools and resources to understand them for what they really were. But this description is based on a misunderstanding of mythical thinking; it presumes that from the very first, humans were concerned with explanations. The problem is that to formulate the questions these explanations are supposed to answer, one must already have a language, and a fairly well-developed one. As Cassirer writes in Language and Myth (1946):

It seems only natural to us that the world presents itself to our inspection and observation as a pattern of definite forms, each with its own perfectly determinate spatial limits that give it its specific individuality.

But it becomes difficult to see how these forms might have been experienced before there was a language to conceive them in. It would seem that ideation and language require each other. But then we are faced with the problem of winnowing. Cassirer continues:

What is it that leads or constrains language to collect [classes of objects] into a single whole and denote them by a word? … As soon as we cast the problem in this mold, traditional logic offers no support … for its explanation of the origin of generic concepts presupposes the very thing we are seeking to understand and derive, the formulation of linguistic notions.

Cassirer was writing 40 years before Pinker’s first books on language, but he provides an apt preemptive critique of the LOT thesis. How would this putatively inborn, genetically determined linguistic structure have supported a conceptual schema so radically different from our own, and so different from what its own nature would predict, for so many thousands of years?

Cassirer provides numerous examples of the slow progression of mythological ideation from the earliest and simplest myths to the appearance of logical reasoning, and we could turn to any prominent cultural anthropologist for additional demonstrations. But there is interesting evidence of a more recent provenance as well, in the autobiography of Helen Keller, who very explicitly asserts that she had close to no inner life at all before she was taught sign language:

Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind impetus… [N]ever in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation.

We shouldn’t read too much into one self-reported anecdote, of course. Keller was a special case, born with sight and hearing only to lose them at nineteen months, so she was exposed to spoken language for a not insignificant period of time. But it is intriguing to note how non-conceptual her cognition was before she learned to use language.

In the March 10 issue of The New Yorker, John Lanchester writes of a similar, though more everyday, predicament in the most precisely descriptive regions of experience, such as the appreciation of wine or perfume. He begins with a story about his “discovery,” after long resistance, of what oenophiles call “graininess” in red wine. Before the experience, he had rejected the term as rhetorical overkill, as many people with less refined palates (myself included) are quick to do when encountering such seemingly fantastical language. But when he finally noticed graininess (after many failed attempts), he conceded it was the perfect word, and not nearly as figurative as he had imagined. Here’s the interesting part, which I was not expecting to find in a New Yorker article on olfactory perception:

What’s more, in tasting it I realized I’d encountered versions of it–milder, more restrained–before. Now I knew what grainy tannins were. Most taste experiences work like that. A taste or smell can pass you by, unremarked or nearly so, in large part because you don’t have a word for it; then you see the thing and grasp the meaning of a word at the same time, and both your palate and your vocabulary have expanded.

This is exactly the opposite of the common-sense view, in which objects and phenomena precede their names (though to be fair, someone had to be the first person to call a wine “grainy”). Is it possible that our understanding of the world expands and develops not before we describe it, and not because we describe it, but as we describe it? This seems much more plausible than the Darwinian explanation, in which we are in constant stenographic response to a world of given stimuli. And because the latter has us spinning our wheels, culturally, over alleged biological imperatives from a world long past, the possibility that we participate in our description of the world also seems much more likely to allow some actual evolution of thought: philosophical, scientific, and moral.
