I HAVE DISCUSSED linguistic representation in order to understand what it means to say that language represents human experience. But what do we mean when we say “human language”? What boundary conditions do we place on this phenomenon?
Sapir defined language as
a purely human and non-instinctive method of communicating ideas, emotions, and desires by means of a system of voluntarily produced symbols. [1921:7]
From this definition, we can identify three distinct definitional trends that have emerged in contemporary linguistic discussions.
HUMAN AND NON-HUMAN COMMUNICATION
From one perspective, the definition of language must distinguish between human and nonhuman means of communication (Sapir’s “purely human and non-instinctive”). This position has been championed by Hockett (Hockett, 1960; Hockett and Ascher, 1964), who has characterized communicative behavior with respect to a set of “design features” (figure 3.1), of which only humans are reputed to exhibit all thirteen (figure 3.2). Others interested in differences or similarities between animal and human communication (notably Sebeok, e.g., 1968, 1977) have been less convinced of the ease with which sharp lines of demarcation can be drawn between human and nonhuman species. The issue has become particularly volatile in the last decade with the growing number of experiments in which nonhuman primates have been taught an array of visual “languages.” The most popular format, used by Beatrice and R. Allen Gardner (e.g., 1969), Roger Fouts (e.g., 1973), Herbert Terrace (e.g., Terrace and Bever, 1976), and Francine Patterson (e.g., 1978), has been a system based on American Sign Language,1 although Premack and Premack (e.g., 1972) experimented with plastic chips, and Duane Rumbaugh (e.g., 1977) has used a computer panel. It is currently a matter of lively debate whether it is appropriate to call the communicative skills of these primates “language,” or at least “language-like” (see Harnad, 1976; Rumbaugh, 1977; Terrace, 1979).
Fig. 3.1 Hockett’s Thirteen Design Features of Animal Communication. From “The Origin of Speech” by Charles F. Hockett, p. 91. Copyright © 1960 by Scientific American, Inc. All rights reserved.
Fig. 3.2 Distribution of Design Features among Eight Systems of Communication. From “The Origin of Speech” by Charles F. Hockett, pp. 94-95. Copyright © 1960 by Scientific American, Inc. All rights reserved.
EIGHT SYSTEMS OF COMMUNICATION possess in varying degrees the 13 design-features of language. Column A refers to members of the cricket family. Column H concerns only Western music since the time of Bach. A question mark means that it is doubtful or not known if the system has the particular feature. A blank space indicates that feature cannot be determined because another feature is lacking or is indefinite.
LANGUAGE AS SOCIAL EXCHANGE
A second definitional stance has focused on the idea of language as social exchange (in Sapir’s words, a “method of communicating ideas, emotions, and desires”). How important is an understanding of the communicative function to the study of human language? Linguistics texts commonly remind us that “the purpose of language is to communicate,” but the notion of communication presented is often no more precise than it is in everyday language (“My parents and I don’t communicate any more”).
One reason for this lack of precision has been the ambiguous position that the very notion of “communication” has held within the discipline of linguistics. Is the social role of language a nicety one can study if one feels so inclined, or is it basic to the linguistic enterprise itself? Sociolinguists such as Hymes and Halliday argue that an understanding of the communicative context—who is speaking, who is being addressed, what is happening extralinguistically—is part of the linguist’s job.
On the other hand, Chomsky takes a position on the status of “communication” in a linguistic theory reminiscent of his position on semantics. While he has no objection to such study, he does not see how a better understanding of semantics (or of communication) will further the construction of a theory of grammar. Throughout his formal writings, the notion of language as social exchange is conspicuously absent. Instead, Chomsky studies the abilities of an idealized speaker-listener with no limits on productive or perceptual performance and no relationship to the actual language community.
Many of his readers have concluded that Chomsky denies the importance of thinking about the communicative role of language. John Searle, who studies language as the performance of (social) speech acts, characterizes the commonly held perception of Chomsky’s position as follows:
There are two radically different conceptions of language . . .: one, Chomsky’s, sees language as a self-contained formal system used more or less incidentally for communication. The other sees language as essentially a system of communication. [1974:30]
In his recent work, Chomsky has attempted to refute the assumption that his language theories preclude any serious consideration of social interchange. Thus, in his response to Searle’s comments, Chomsky asserts:
I have never suggested that “there is no interesting connection” between the structure of language and “its purpose,” including communicative function, nor have I “arbitrarily assumed” that use and structure do not influence one another. . . . Surely there are significant connections between structure and function; this is not and has never been in doubt. [1975:56]
He further points out that some of the disagreement arises because of confusion over what the term communication means:
It is hard to know just what people mean when they say that language is “essentially” an instrument of communication. If you press them a bit and ask them to be more precise, you will often find, for example, that under “communication” they include communication with oneself. Once you admit that, the notion of communication loses all content; the expression of thought becomes a kind of communication. These proposals seem to be either false, or quite empty, depending on the interpretation that is given, even with the best of will. [1979:88]
This point is illustrated with two examples:
As a graduate student, I spent two years writing a lengthy manuscript, assuming throughout that it would never be published or read by anyone. . . . Once a year, along with many others, I write a letter to the Bureau of Internal Revenue explaining, with as much eloquence as I can muster, why I am not paying part of my income tax. I mean what I say in explaining this. I do not, however, have the intention of communicating to the reader, or getting him to believe or do something, for the simple reason that I know perfectly well that the “reader” (probably some computer) couldn’t care less. [1975:61]
These points are well taken, but are they as powerful as Chomsky believes? To begin with, he would never have been able to write his manuscript or write to the IRS if he had not first learned language from a community of communicating language users.2 The claim that monologue or even unarticulated thought does not depend on a linguistically communicative base is as absurd as claiming that alphabetic writing has no strong dependence on spoken language. Moreover, especially in the case of writing to the IRS, it is difficult to believe that Chomsky has no notion of a reader in mind as he composes his letter. In writing letters, we adjust our salutation, level of formality, number of presuppositions, and so forth to the specific reader or class of readers we have in mind. The fact that he bothered to mail the letter would indicate that he believed at least some minimal communication was about to transpire.
Nevertheless, one point implicit in Chomsky’s writing should be emphasized in any discussion of language as communication: not all linguistic exchanges between people are intended to communicate (whether information, feelings, or needs). We sometimes use language to hide our feelings or to conceal information, as when we engage in small talk to avoid expressing what we actually think. For this reason, it may be more appropriate to speak of language as a means of social interaction (which might include concealment) rather than necessarily as a means of communication.
LANGUAGE AS A FORMAL SYSTEM
A third definitional perspective is concerned with the formal (in the sense of “form” or “structure”) properties of language (Sapir’s “system of voluntarily produced symbols”). Chomsky has been the most forceful proponent of this position, which he clearly presents in Syntactic Structures:
I will consider a language to be a set (finite or infinite) of sentences, each finite in length and constructed out of a finite set of elements. All natural languages in their spoken or written form are languages in this sense, since each natural language has a finite number of phonemes (or letters in its alphabet) and each sentence is representable as a finite sequence of these phonemes (or letters), though there are infinitely many sentences.
And more important,
The fundamental aim in the linguistic analysis of a language L is to separate the grammatical sequences which are the sentences of L from the ungrammatical sequences which are not sentences of L and to study the structure of the grammatical sequences. [1957:13]
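Chomsky’s formulation lends itself to a concrete, if informal, illustration. The toy grammar sketched below (in Python, purely for exposition; its lexicon and rules are invented for this sketch and are not drawn from Chomsky) builds sentences out of a handful of elements using one base rule and one recursive rule, so that each sentence is finite in length while the set of sentences the grammar characterizes is unbounded:

nouns = ["the cat", "the dog", "the linguist"]   # a finite set of elements
verbs = ["slept", "ran"]
report = ["said that", "believed that"]          # verbs that introduce the recursion

def sentence(depth):
    # S -> N V          (base rule)
    # S -> N REPORT S   (recursive rule)
    if depth == 0:
        return f"{nouns[0]} {verbs[0]}"
    return f"{nouns[depth % 3]} {report[depth % 2]} {sentence(depth - 1)}"

for d in range(3):
    print(sentence(d))
# the cat slept
# the dog believed that the cat slept
# the linguist said that the dog believed that the cat slept

Every output is a finite string over a finite vocabulary, yet because the depth of embedding may be increased without limit, there is no longest sentence: precisely the situation Chomsky describes.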
One problem which arises in studying language as a formal system is deciding what human activity is properly “linguistic,” and what is not. (For simplicity, we may speak of “linguistic” vs. “communicative but nonlinguistic” behavior, keeping in mind the caveats about the term communication expressed above.)
Clear delineation between linguistic and nonlinguistic communication becomes increasingly difficult as linguists become less arbitrary and more reflective in their establishment of guidelines for linguistic research. For example, it has traditionally been assumed that linguistic messages can be transmitted and received only through the auditory-vocal channel; gestural communication has typically been excluded as a means of transmitting linguistic information. The underlying assumption is that, while spoken language is divisible into discrete entities whose patternings can be characterized by identifiable rules, gestural (or, more broadly, kinesic) communication is not. (Writing is also excluded, but for other reasons.) Hence, Crystal writes:
Despite its importance, the visual system of communication in humans does not have by any means the same structure or potential as the vocal–there is nothing really like grammar, for instance, and there is a very finite vocabulary of gestures indeed in any culture–and the linguist does not therefore call it language. [1971:241]
The a priori restriction of language to the vocal-auditory channel3 has been challenged from two quarters in recent years: from those, such as Birdwhistell (e.g., 1970), who have argued that the paralinguistic behavior with which we supplement (or replace) speech is also “linguistic,” and from linguists studying the structure of American Sign Language.
The issue here, as Crystal himself realizes, is not merely one of modality per se. Rather, it is whether a visual mode of expression necessarily lacks “the same structure or potential as the vocal,” whether there is anything “really like grammar,” or whether a visual vocabulary is inferior to that of a spoken language. Underlying Crystal’s exclusion of visual communication from the domain of language is the assumption that we already have a clear understanding of the requirements for vocabulary and grammar in spoken languages.
FORMAL CRITERIA FOR HUMAN LANGUAGE
Human language is commonly defined with respect to three structural properties: having a finite lexicon, having a finite set of collocation rules, and combining lexical items grammatically to yield a potentially infinite set of well-formed constructions (e.g., Chomsky, 1957). These criteria seem straightforward enough, and yet, when we try to apply them to the languages people in society actually use, we encounter a problem: What are the bounds on the conditions we have set? In general, we want our linguistic analysis to account for all the actual well-formed utterances which occur and also say something about those which don’t (but might have) and about those which don’t (and couldn’t). But, in addition, if our linguistic description is to have any concrete relation to the language people are actually using, it will need to be more specific about the theoretical possibilities.
These concerns are reflected in the following quantitative questions about our initial definition of human language:
What is the minimum number of words a language can have? What is the maximum?
What is the minimum number of combination rules a language can have? The maximum?
Is there an actual limit to the potentially boundless number of generated sentences?
To these can be added questions which explicitly ground language in human social experience:
What is the semantic domain of reference of a language? (Are the things, events, or ideas being talked about present at the time of discussion? Is the language used to refer to only a restricted area of experience, such as religion or sailing ships?)
What is the social context in which the language is used?
Our difficulty in answering the first three sets of questions makes our answers to the last two all the more important.
THE LEXICON
How many words does a human language need? Is there a minimum number necessary? It has been suggested that while bees can, in principle, transmit an unlimited number of messages (e.g., Langacker, 1973:19), their “lexicon” (the round dance, the wagging dance, and the speed and directionality of each—see von Frisch, 1953) is too restricted for their communicative system to be considered a language. We can pose the same question with respect to dialects or historical stages of any human language (insofar as dialects or historical periods are distinguished lexically). How many isoglosses does one need to bundle together before the whole package can be considered a dialect distinct from any other? Similarly, how many words must be different in a language at time A and at time B to decide that times A and B constitute significantly distinct “stages” in the history of that language?
The same problem arises with respect to sublanguages, that is, specialized forms of vocabulary and/or syntax used by restricted groups of people engaged in joint or parallel activity (sailors, astronauts, CB radio operators—see chapter 5). While one can write a dictionary (and, where appropriate, a grammar) of those items which are distinct from the language of the general community, how do we determine whether we are indeed dealing with a distinct language?
Or consider the issue of pidgin languages. Pidgins have a relatively small number of lexical items, relying instead on simple rules of syntax to produce enough terms for labeling the environment. In Neo-Melanesian (currently called Tok Pisin), we find
grass bilong head = “hair”
grass bilong face = “beard”
Yet the presence of a productive means of creating a large lexicon from a comparatively small number of primes cannot be a condition for excluding such a scheme from the domain of language. German, to cite the best-known example, uses a similar principle of lexical productivity, recombining old elements to form new words:
fern = “far,” “distant”
Fernglas = “binoculars”
Fernsehen = “television”
The only obvious difference is that Neo-Melanesian retains an analytic structure,4 while German is synthetic, combining all morphemes into a single word.
Implicit in all of these questions is the equally fundamental question: Is there a minimum number of types of words a language must have? (This leads directly into the question of semantic domain of reference.) For example, a human language must have at least some common names (or else participants in a conversation would have to share all the same experiences in order to understand one another). Must a language also have proper names, or can these be adequately replaced by definite descriptions? Empirically based discussions of universals in spoken language have included both proper nouns and common nouns as necessary elements in human language (e.g., Hockett, 1963). However, these universals have resulted from post hoc observations rather than deductions from more fundamental conceptions about human language.
Another consideration with respect to naming is the degree of abstraction of class terms. Chair and furniture are both class terms, yet the latter category includes more items than the former, and is commonly assumed to represent a higher level of abstraction. Should a definition of language place any constraints on the number of superordinate categories? While such a question is rarely posed today (we tend to assume that any language has “enough” superordinate terms to fill its needs), the issue was central to turn-of-the-century discussions of “primitive” languages. It was argued that primitive peoples had primitive mentalities, which were not capable of abstraction:
The nearer the mentality of a given social group approaches the pre-logical, the more do these image-concepts predominate. The language bears witness to this, for there is an almost total absence of generic terms to correspond with general ideas, and at the same time an extraordinary abundance of specific terms, those denoting persons and things of whom or which a clear or precise image occurs to the mind. [Lévy-Bruhl, 1926:170]
Should Lévy-Bruhl’s claim of an “almost total absence of generic terms” ever be substantiated empirically, we would need to judge whether or not systems lacking a significant number of generic terms are indeed languages.
Still other questions arise in considering whether there is a minimum number of types of words a language must have. For instance, must a system have metaphors in order to be considered a language? Although practically no one has considered metaphor a sine qua non for human language when constructing formal definitions of language,5 we nevertheless tend to express surprise when a nonhuman exhibits the ability to construct metaphors—for example, the accusation by Koko the gorilla that her trainer was a dirty bad toilet (Patterson, 1978) or Washoe’s famous nonce form water bird for “duck.”
How many words can a human language have? Probably no one knows all the words in Webster’s Third International. Is this merely a “performance” limitation on human memory, or are there theoretical reasons why this is so? In terms of the questions concerning semantic domain and social context of use, can limitations on vocabulary be derived from practical limitations on the number of areas of inquiry and social contexts a human being normally encounters?
Within the formal purview of linguistic inquiry, the question of a maximum becomes important in determining the feasibility of using a particular mode of representation for encoding linguistic messages. Goody and Watt (1968), among others, have argued that pictographic or ideographic written representation would be impractical as a productive means of written language because such systems would require too many “lexical items” to record everything which needs to be represented. In fact, there is evidence in Sumerian,6 Hittite (Gelb, 1952:115), and Chinese (ibid., p. 118) that as these forms of writing moved from pictographic to ideographic systems, the number of lexical items (and the variations with which particular items were produced7) measurably decreased. However, Goody and Watt’s (1968) initial claim that written Chinese has so many lexical items as to render it learnable by only a handful of people later had to be retracted (Goody, 1968), as there are indeed a large number of literate Chinese. Whether this number is “large enough” (and whether Romanization would make universal literacy possible among the Chinese) is an issue which brings hypothetical questions about lexical maxima into the concrete domain of language pedagogy and use.
GRAMMATICAL RULES
The same questions concerning the content of a language’s lexicon also apply to its combination rules. If there is a minimum number of grammatical rules a communication system must have to be a language, does the dance of the bees qualify, since the only “combinations” are the recursive additions of more loops or wags in a given period of time? In the realm of human language, Pig Latin has only three rules of its own: remove the first sound of the word, place it at the end, and add [eI]. Is this still English with an added twist, or is it a new language? Precisely the same question can be asked of dialects, sublanguages, or historical stages which are differentiated from their nearest linguistic cousins by only a handful of combination rules.
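How small the added rule set is can be seen in a toy sketch (in Python, purely for illustration; the sketch treats the “first sound” as the first written letter, a simplification of its own making rather than a claim about Pig Latin):

def pig_latin(word):
    # The three rules named above: remove the first sound of the word,
    # place it at the end, and add [eI] (written here as "ay").
    # The first letter stands in, crudely, for the first sound.
    return word[1:] + word[0] + "ay"

for w in ["pig", "latin", "speech"]:
    print(w, "->", pig_latin(w))
# pig -> igpay, latin -> atinlay, speech -> peechsay

Everything else in the output remains English, which is just what makes it hard to say whether we are dealing with English plus a twist or with a new language.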
Is there a minimum number of types of grammatical rules a language must have? For example, must every language have definable word order conventions? The issue of whether American Sign Language has such word order constraints has led some linguists to question its linguistic status (e.g., Crystal and Craig, 1978). Further research might reveal that restrictions on word order–whatever they may be–are less stringent in signed languages than in speech. If so, we will need to rethink our understanding of what grammatical rules are necessary conditions for language, and whether word order is among them (see chapter 7).
What about a maximum of combination rules? Because there is no language for which linguists believe they have satisfactorily enunciated all the grammatical rules (though their numbers are presumed to be finite), it may seem premature to ask whether there is an upper limit on the number of rules a human language might have. Yet contexts exist in which such a question has been or might be raised. Lévy-Bruhl, in fact, took extreme grammatical complexity as a sign of primitiveness:
We find that the languages spoken by peoples who are the least developed of any we know–Australian aborigines, Abipones, Andaman Islanders, Fuegians, etc.–exhibit a good deal of complexity. They are far less “simple” than English, though much more “primitive.” [1926:22]
Although his logic is suspect (one wonders, for example, how he would have interpreted the complexities of Sanskrit), his comments do suggest an interesting question. As mentioned above, for a language to be learnable, it must have class terms. If, by analogy, each of our sentences were governed by different grammatical rules (which were potentially productive, yet never yielded more than a single sentence each), the language would, likewise, be unlearnable.
In the discussion so far, I have assumed that there are grammatical rules that can, in principle, be identified and that will generate all (and only) the well-formed formulas (i.e., sentences) in the language. There are two perspectives from which to challenge this assumption. The first requires us to rethink the generalizability of a single grammar across data, across speakers, or across time. The second is more radical still, for it questions the very “linguistic competence” that has become a byword in linguistic analysis. Let us examine these two perspectives in turn.
One of the most important issues confronting the linguist is how much of a language he can expect to account for by his rules. The Chomskian school, developing an economic metaphor, has implied that exceptions to grammatical rules should be very “costly,” and therefore, rules should be constructed so as to yield the smallest number of exceptions possible. Like latter-day counterparts of Karl Verner, transformational grammarians have implied that actual language can be wholly generated by rules; our task is to find them.8
There have, however, been many schools of linguistics which have rejected the Chomskian cost-benefit analysis of language. Sapir, while acknowledging that grammar is “a universal trait of language,” still recognized the existence of exceptions:
Were a language ever completely “grammatical,” it would be a perfect engine of conceptual expression. Unfortunately, or luckily, no language is tyrannically consistent. All grammars leak. [1921:39]
Jakobson (e.g., 1962) assumed that languages are necessarily always in a state of grammatical imbalance: through historical change, regularity is produced in one part of the linguistic system, which in turn produces irregularity in some other part. Vachek modifies this stance somewhat, arguing that
the permanently imperfect balance, as we take it, is not exclusively due to the mechanical, “unwanted” consequences of a previous therapeutic change, but simply to the fact that the undeniably present integrating tendency (a tendency whose therapeutic nature is obvious) can never assert itself fully in the system of language. [1966:33]
In more recent American linguistics, studies of language variation (e.g., Labov et al., 1972; Sankoff, 1973) have helped to point up the differences between regularity and homogeneity. Predictable variation within individuals’ idiolects and across speakers may provide important clues about comparative social status or prospective historical change.
But what happens when we go a step further and question whether language use is indeed as regular as linguists tend to assume? When we list all the “regular” exceptions, make note of all the predictable variations—is there any language left that defies description? The question is a delicate one, since it immediately raises the specter of prescriptivism. As mentioned in Chapter 1, American linguistics in particular championed the description of languages in their own terms, not as they were presupposed to be. In contemporary times, this position has been echoed by linguists such as Labov (e.g., 1972a), who have argued that Black English is as much a “language” as standard white English in that both are governed by stateable, productive rules.
Our question, though, is different: What if there are no rules? Is grammar really the linguist’s reconstruction of our internalized knowledge, or is it, at least in part, a norm imposed by teachers of English in literate societies? We are not suggesting that all–or even most–of grammar is imposed from the outside. Every language which has ever been observed has displayed strong degrees of regularity, and we know that nonlinguistically trained informants are quite capable of making a considerable number of linguistic judgments.9
In discussing Wayne O’Neill’s political objections to imposing standard, middle-class English upon all speakers (and writers), John Simon (1979) points up an important distinction between prescriptivism and random use of language. O’Neill, apparently, is objecting to the former, yet his own writing is littered with constructions that seem to defy anyone’s grammatical arsenal, regardless of its leniency. Grammatical rules should yield consistency; still, as Simon demonstrates, O’Neill sometimes has singular subjects agreeing with plural verbs (“the reader or beholder have”), misuses adverbs (“we do not understand at all well”), and does not identify referents of his pronouns. One might argue that such errors can be explained by performance factors. However, since the examples appeared in print in such documents as the Harvard Educational Review, one wonders where competence can be tapped if not in the writing which is presumably edited before publication. The point is not to condemn O’Neill’s grammar, but to ask whether the average speaker of English indeed “knows” the rule governing agreement between the verb and a compound subject joined by or. We may dismiss this rule as an example of prescriptivism, but we are still faced with the problem of how to decide which verb to use in such instances–apparently many of us exhibit free variation. How much of language, therefore, evades grammatical “competence” rules? I shall return to this issue in chapter 6.
PRODUCTIVITY
Much of what I have already said about lexicon and grammatical rules is pertinent to the question of how few or how many combinations of elements in principle can be, or in practice are, generated by a human language user. The fewer the items and types of arrangements, the fewer the significantly different strings which can be produced. The number is still potentially infinite if recursion is admissible (e.g., “the very very very . . . very black cat”), although this type of recursion has little to do with actual language use.
Following the Chomskian model that distinguishes between competence and performance, it has been tempting to assume that, given the appropriate circumstances, all users of a language could make equally productive use of the theoretically available components of the language (the lexicon and combination rules). However, once this hypothesis is tested against actual language users, it loses much of its force. To begin with, any measurement of communicatively significant generative potential (one which excludes, for example, recursion of adjectives or repeated conjunctions) will reflect education, intelligence, personality, conditions of language acquisition, and mental and physical health–hardly traditional matters of linguistic theory. For example, deaf people who learn to speak “grammatically” typically restrict their sentences to a limited number of syntactic patterns. Such restrictions are not surprising, since these are the patterns on which the deaf have been repeatedly drilled. Similarly, shyness in speaking before others or discomfort in writing may be reflected in constrained oral or written syntax. The extent of the language user’s “passive” language skills (e.g., oral comprehension or reading) remains uncertain so long as these skills remain untested; we cannot assume the presence of a skill for which we have no evidence.10
Moreover, there may be stable social conditions which affect the amount of productivity possible for an individual or even within a language. For example, the use of phatic communication–talking for the sake of keeping the speech channel open rather than for conveying information (Malinowski, 1923)–does not seem to be universally acceptable in language communities. P. Gardner has observed that the Paliyans of South India “communicate very little at all times and become almost silent by the age of 40. Verbal, communicative persons are regarded as abnormal and often as offensive” (1966:398; cited by Hymes, 1974:32). Another type of restricted productivity is found in pidgin languages (see chapter 5). Pidgins are used by a restricted segment of a population (typically adult males), in a restricted set of contexts (e.g., transacting business with individuals not sharing one’s native language), and with very little if any stylistic variation (Labov, 1971). But does this last criterion actually distinguish between any pidgin and any full-fledged language? Although it is difficult to know how to measure stylistic potential, we might hypothesize that language communities which do not prize verbal eloquence (such as the Paliyans) might be just as deficient in stylistic variation as a pidgin which has not moved toward creolization.
MODES OF LINGUISTIC REPRESENTATION
I have suggested that the traditional definition of human language with respect to lexicon, combination rules, and productivity is not sufficiently specific for actual linguistic analysis. I have also explained that the question of how many (or few) words, construction types, or novel utterances a language actually has depends mainly on its use by the language community or the individual user. However, most of my examples have been chosen from spoken language, the traditional domain of linguistic inquiry. I shall challenge this a priori restriction of language to spoken language in order to demonstrate that the criteria for speech–and the inadequacies of these criteria–are similarly applicable to written and sign language. Therefore, we need to drop our presuppositions about which modalities do or do not yield linguistic representations and begin once more with the primary relationship between first-order representation and experience. Only after delineating possible communicative representations can we decide which are and which are not linguistic. Figure 3.3 summarizes the distinctions to be developed in the subsequent discussion.
Fig. 3.3 The Linguistic Representation of Experience
The first distinction in Figure 3.3 is between representations which are perceived aurally and those which are perceived visually. Alternatively, these two types of representation might be described with respect to their production, but the characterization would become more cumbersome (aurally perceived representations can be produced by the human voice, by computers, by drum beats). For convenience, we will speak of “visual representation” and “auditory representation,” by which we will mean “visually perceived representation” and “aurally perceived representation.”
A second level of distinction can be made with respect to the durability of each representation. In the case of auditory representation, the possibility of a durable representation is only as recent as the phonograph and the tape recorder. Paradigmatically, however, auditory representation is ephemeral (or “rapid fading”—Hockett, 1960). For the representation to be perceived, the percipient must be face-to-face with the message sender.
At first glance, this stipulation might appear to reflect an accidental feature of spoken language exchange—witness dictaphones, telephones, and television. In fact, it is the physical proximity of the interlocutors which defines, in normal situations, the conventions of spoken language and which distinguishes it from writing (durable visual representation). A comparison of written and spoken language will make these differences clear. In the process, I shall also illustrate two points made in Chapter 1: that spoken language is illuminated by contrast with other linguistic modalities; and that the factors that condition linguistic structures are not all formally linguistic.
The critical difference between spoken and written linguistic representation–and the difference from which a number of other defining characteristics can be derived–is the proximity between sender and receiver. Paradigmatically, speech is face-to-face communication, while writing is used when sender and receiver are separated by time, distance, or both (the word paradigmatically indicates the exclusion of derivative cases such as spoken language used in telephone calls or tape recordings, or written language used to pass messages during a lecture). If the intended receiver is not present when the message is formulated, the message must be given durable form—which, before the age of electronics, meant inscription on stone, use of a stylus on clay, or drawing with durable material like ink on paper or animal skin. Since the gathering of materials and the composition of a written message are generally more time-consuming than speaking, the sender of a written message must determine whether the message is sufficiently important to warrant the extra effort.
On the other hand, the absence of an interlocutor involves several organizational considerations. The most obvious of these is that, since written language cannot exploit the suprasegmental or paralinguistic features which accompany spoken language (Chao, 1968:11), the written code must in some way compensate for the information which would be lost in a so-called transcription of spoken language. Punctuation and paragraphing are typical ways in which this information can be restructured, but so is the use of parallel sentence structure (for instance, to compensate for building excitement in one’s voice) or use of foreign words (as the written equivalent of a “stuffy accent”). More important is the fact that writing, unlike speaking, lacks feedback from the receiver.
The vast majority of spoken, face-to-face communication (leaving aside rhetorical speech or ritualized conversation) is not planned. If the speaker says something which is not clear, he can get clues from his audience–a puzzled look; a query, “What do you mean?”–that can set him back on course, or help him clarify his remarks. Spoken language is “essentially a dialogue–a continuing give-and-take interaction” (Wrolstad, 1976:22), in which “possible confusions or misunderstandings can always be cleared up by question and answer” (Goody and Watt, 1968:51). But in writing, no such immediate restructuring is possible. The sender must anticipate the difficulties his receiver may have and engage in all the dialogue internally before committing his message to durable written form. As Socrates put it, “written words seem to talk to you as though they were intelligent, but if you ask them anything about what they say, from a desire to be instructed, they go on telling you just the same thing for ever” (Phaedrus cited by Goody and Watt, 1968:51). Consequently, the writer’s presentation is generally more cautiously and reflectively formulated than that of the speaker. It is thus predictable that written language should tend to be stylistically more formal than spoken language.
The difference between speech and writing can be illustrated by analysis of the sentence “Kilroy was here.” The first difference we notice is that “Kilroy was here,” written by Mr. Kilroy, is not a construction that is possible in a similar context in speech. If we say, “Jones was in London,” the utterance refers to an earlier occasion of Jones’s presence and not to a current trip to the British Isles. However, the writing of “Kilroy was here” could only be done at the time (in the present) that the event occurred.11
Here is a situation in which a single experiential event cannot be described the same way in spoken and written language. If we accept this observation, then we want to explore the difference. To do so, we need to look at the choice of the words Kilroy, was, and here somewhat more closely.
Kilroy, rather than we or I, must be used because there is no interlocutor present to know to whom the first person pronoun refers; in spoken language, such a problem of reference would not arise. We write was (not is) because we are formulating the difference in the speaker-auditor relation between writing and speech, that is, we are envisioning a future auditor who is not now present. The use of here is particularly interesting, since it not only distinguishes speech from writing but also distinguishes between different writing technologies. Here can be used when the writing is done on a quasi-permanent structure like a rock face or a subway car, but not on a piece of paper which can change location and thus give the future audience no indication as to where Mr. Kilroy had actually been.
This distinction between presence or absence of interlocutor is also critical in the comparison of writing and sign. Writing (a form of durable visible representation) is crafted with the basic premise that the percipient is not present when the message is formulated; ephemeral visual representation (which includes sign language) presupposes the interlocutor’s presence.12 The contrast between writing and sign cannot be made so neatly as that between writing and speech, for, although all three systems have some unique characteristics, writing and speech are far more similar than are writing and sign. (The justification for this statement will become clear in chapters 5, 6, and 7).
Speech (the linguistic aspect of ephemeral auditory representation), writing (the linguistic aspect of durable visual representation), and sign language (the linguistic aspect of ephemeral visual representation) will need to be examined to determine which of these aspects constitutes language. My remarks can, at best, be exploratory. Moreover, this book cannot give sufficient attention to nonlinguistic auditory and visual representation. The growing literature on nonspeech messages sent through the auditory channel (e.g., Sebeok and Umiker-Sebeok, 1976, on whistle systems) as well as the rich literature on art as representation (e.g., Gombrich, 1960, 1972; Goodman, 1976) will require careful study before definitive comments can be made about their linguistic counterparts.
Language function as a determiner of language structure is an appropriate subject with which to begin this inquiry. If human language is a means of social exchange, then the purpose of that exchange will have a strong influence on the particular form the linguistic representation takes; it may even affect the choice of modality. An understanding of the term functional perspective is therefore critical to our discussion of speech, writing, and sign as particular forms of linguistic representation.