Sign Languages and the Verbal/Nonverbal Distinction
in “Sight, Sound, and Sense”
Sign languages have always held a fascination for the curious. As coded substitutes for elements of speech, they can easily be put to secret uses, and thereby take on the glamour of hidden intelligence. Making possible what seems impossible without speech, sign languages reveal at the same time to all who see them in use a mode of expression shared by primates, related to mammalian behavior, and ultimately tied to most visual communication of the animal kingdom. If only this were not true, if there were no relation between human gestural systems and animal behavior, then the whole matter could be dealt with and dismissed as Bloomfield thought he could do: Sign languages would simply be derivatives of spoken languages, with or without writing systems as intermediate stages. But the problem is more complex than that, and although a Bloomfieldian notion of sign languages is not now in favor, it is not completely mistaken: in many uses, sign languages are exactly and by intention coded substitutes for otherwise unexpressed language utterances. Books, pamphlets, and series have recently been published for those engaged in teaching deaf children; and in all of these gestural signs (gSigns) are presented as code substitutes, not just for individual words of English but also for inflectional suffixes like -ing, -s, and -ed and for derivational affixes like -ity, -ment, and un-. The authors of these codebooks all proceed from a tacit or open assumption that a sign language can be made into a complete, adequate, explicit, and simple code for representing English. Fortunately, or unfortunately, depending on one's point of view, they have not discovered that complete, adequate, explicit, and simple descriptions, codes, or grammars for any natural language are ideals not realized in practice. Also a code, but far older than these lexical coding efforts, is a system that makes use of the already existent alphabetical writing of spoken symbols.
Employing gSigns made with one hand or both to represent the graphic symbols conventionally used, with or without additional instrumentality, is an ancient practice indeed and may be as old as graphic representation itself (Stokoe, 1975a:345-60).
Much of the research herein reported has had the support of NSF Grant SOC 74 147 24.
If it were not for troublesome facts, all this could be put together and called verbal: Lexical gSign codes, manual alphabet codes, and graphic codes all represent words. In that case, the radically different use of the human body in face-to-face or group interaction would as obviously be nonverbal. But such a simplistic categorization is what this paper is intended to challenge. By directing attention to a number of studies of sign language, I hope to show that verbal and nonverbal are misused terms and that what they refer to are not categorical opposites but related matters. Just as we now know that space and time, like matter and energy, relate in ways that can be quantified, to realize that verbal and nonverbal behavior, so called, are related and not opposed may open the way to more rewarding research strategies.
Four uses rather than two are open to human gSigns: (1) They can substitute for some element or other of a spoken or written utterance. (2) They can denote emotions or certain general ideas directly. (3) They can do both of these at once. (4) They can also do something else not yet specified but to be discussed here. In the usual terminology of our day, the first of these four uses would be called verbal and the second nonverbal. There is little evidence that the third use has been carefully considered. But the fourth use is usually completely unsuspected, although it exists and has for a long time. This fourth use makes the simple bipolar nomenclature verbal/nonverbal inadequate, even with the admission of the third, mixed use of gSigns.
Examples of the first two uses beyond what has just been said are easy to find and have been written about increasingly of late by linguists and psycholinguists. The mixed use of gesture may take place when what is called a "natural" gesture enters into the performance of a code substitute; e.g., a "natural" gesticulation of a speaker talking about getting something is a grasping motion toward the body. In one or more Manual English lexical codes this motion is utilized with the addition of some manual alphabet configuration before the hand grasps—e.g., g for 'get', o for 'obtain', or r for 'receive', etc. The alphabetic may not blend perfectly with the emblematic use, but an observer unfamiliar with the alphabetic code may still interpret the action correctly—at least he is unlikely to mistake the gesticulation as an attempt to symbolize 'throw away'.
The fourth use of gSigns is distinct from all the foregoing. In it the individual sign does not denote or substitute for any element of a spoken or written utterance or structure, nor does it express an emotion or any such general idea, as emblems usually do. Instead the gSign in this fourth use directly and originally expresses an element of sign language itself, whether the element be lexical or grammatical.
One characteristic of language that distinguishes it from some other semiotic systems is its multiplicity of levels of organization, or strata, as they have been called. In this respect gSigns as linguistic manifestations are typical. If we take for the next portion of the discussion the hypothetical position of a native user of American Sign Language (ASL), we will find that the hand held with the thumb and first two fingers extended and spread apart is not a gSign at all but part of many such signs. If the hand is held palm out with the fingers pointing up, it becomes a numerical, not a verbal, sign and denotes 'three'. It need not mean 'three' to users of other sign languages or to nonsigners: Deaf signers in the Midi denote '3' by holding this hand in a horizontal position and moving it downward (Sallagoity, 1975); hearing nonsigners are likely to denote '3' by holding the thumb over the pinky, leaving the three middle fingers upright.
The semantic element or unit (Chafe, 1970) '3' remains in the ASL sign for 'sergeant' when this hand is drawn across the opposite arm, suggesting the triple chevron. But 'three-ness' has nothing to do with other signs that also utilize this hand; e.g., NO, AWKWARD, LOUSY, BUG, ROOSTER, DEVIL, DENMARK, SAIL, PARKED-CAR, etc. (The use of capitals represents both signs of ASL and their conventional English gloss.) Most of these examples are taken from A Dictionary of ASL (Stokoe, Casterline, and Croneberg, 1965; 2nd ed. 1976), but recent research by Battison, Markowicz, and Woodward (1975) has found out something even more characteristically linguistic about the "3-hand." It is not a simple symbol at all; it is more akin to a phonemic than to a phonetic symbol: Unlike the Morse code symbol for '3', which is always "di-di-di-dah-dah," but instead like English initial /θ/, which may occur as [θ] or as [f] in the childish pronunciation "free" for three, this hand is what appears when some but not all ASL signers perform certain of the signs that other signers use the V-hand for. That is, in uttering SMOKE, READ, and other signs, many signers of ASL extend and spread apart only the index and second finger, keeping the thumb in. Other signers extend the thumb in these signs. This difference is analogous to such variations in English as thing/ting and going/goin, both in phonological and in sociological dimensions.
An even more striking identification of sign language phonology with universal phonology (see Battison, 1976) appears when another sign language community is considered. So far we have seen that two different hand configurations, the "3-hand" and the "V-hand," are used by ASL to keep meanings apart, e.g., TWO/THREE. But we have also seen that in specifiable circumstances the 3-hand can be substituted in the same sign for the V-hand by persons or groups also specifiable. As native users of the language we would know all this, out of awareness, and so be able to distinguish when the thumb extension was distinctive and when it was not distinctive.
With the same limited set of features (thumb, index finger, middle finger, extension/non-extension, spreading/non-spreading), ASL also has another set of signs and distinctions in which they are differentiated. The hand that has the extended index and middle fingers side by side and in contact (non-spread) composes out of these features another sub-morphemic element, the H-hand (so called in the 1960 and 1965 descriptions, because when the context is alphabet coding it denotes h in horizontal presentation but u when upright and n when pointed downward). But this ASL H-hand as a phoneme has both thumb-extended and thumb-not-extended realizations.
The situation in American Sign Language may be summed up thus: When the features are (1) closed fist, (2) extension or not of the thumb, (3) of the index finger, (4) of the middle finger, and (5) spreading or not of adjacent fingers, then (a) ± thumb extension is sometimes phonemic and sometimes not, and (b) ± thumb extension in the context of − spreading is non-phonemic but can be accounted for by various sociolinguistic techniques.
If we now look at Danish Sign Language (DSL), we find that the feature ± thumb extension has phonemic force regularly, but the feature ± spreading seldom or never does. Thus in DSL, what we have called the V-hand and the H-hand are both used to realize one DSL phoneme, but what we have called the 3-hand has an unspread realization as well. This kind of relationship and organization of detail on feature, phonemic, and lexic levels is neither verbal, in the sense of relating to the word forms or word strings of another language, nor nonverbal, in the sense of unrelated to language.
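The contrast just drawn between ASL and DSL can be sketched in miniature. The following is my own simplification, not anything from the original analysis: handshapes are treated as bundles of two binary features, and each language "sees" only the features phonemic for it. Since ASL's thumb feature is only sometimes distinctive, the sketch models the allophonic context (signs like SMOKE and READ, where the 3-hand may substitute for the V-hand).

```python
# Hypothetical feature bundles for the three handshapes discussed in the text.
HANDSHAPES = {
    "3-hand": {"thumb": True,  "spread": True},   # thumb, index, middle extended and spread
    "V-hand": {"thumb": False, "spread": True},   # index and middle extended and spread
    "H-hand": {"thumb": False, "spread": False},  # index and middle extended, side by side
}

# Which features each language treats as phonemic, per the summary above:
# ASL: spreading is distinctive, thumb extension (in this context) is not;
# DSL: thumb extension is distinctive, spreading seldom or never is.
PHONEMIC = {"ASL": {"spread"}, "DSL": {"thumb"}}

def phoneme(language, handshape):
    """Reduce a handshape to just the feature values distinctive in that language."""
    bundle = HANDSHAPES[handshape]
    return frozenset((f, v) for f, v in bundle.items() if f in PHONEMIC[language])
```

Run this way, the model collapses 3-hand and V-hand into one ASL phoneme (thumb extension is allophonic there), while in DSL it is V-hand and H-hand that collapse, since spreading carries no phonemic force.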
Another reason that the terms verbal and nonverbal are misleading in the present context is that information that may be left implicit and matter that must be expressed in any utterance are differently distributed when the utterer moves from spoken to gestural expression. It is natural, of course, for us who hear to consider moving from speech to signing—we outnumber the deaf by one thousand to one or two (Schein and Delk, 1974). But for the deaf, it is just as natural to start with signing and to consider speech and language from a signer's viewpoint (i.e., as alien and strange). Moreover, the large questions of both language origin and language evolution would be begged if sign language activity were to be called verbal and signs were to be accounted words. Sign languages contain material and relationships of kinds designated both verbal and nonverbal in unexceptionable usage of the terms. But it is time to deal in another way with these terms.
Semiotics as a discipline has been supportive of sign language studies, and Sebeok individually has been instrumental in furthering scientific investigation into ways that gSigns are used (see especially Approaches to Semiotics 14 and 21 and Sign Language Studies). Regarding nonverbal Sebeok has written:
This deceptively simple phrase, widely bandied about and incorporated in a large miscellany of book titles ... is well nigh devoid of meaning or, at best, susceptible to so many interpretations as to be nearly useless. [1975:9]
Logically the term nonverbal might become more satisfactory as its co-term verbal became more precisely defined. However, just as Sebeok finds nonverbal given unclear and conflicting interpretations, the term verbal is used as badly or worse. Worse, I think, if human consequences can be considered in judging a case of terminology. In the jargon of educators and counselors and interpreters who work with the deaf, verbal represents one end of a bipolar opposition, and their term low-verbal deaf both mislabels and condemns. Behind the term there may be a trace of sociolinguistic relevance: A deaf person's competence in English is likely to be much less than his competence in a sign language. But no person's competence should be judged solely on performance in a second language to which he can have little or no direct exposure; and competence in sign language is completely ignored or unsuspected by those who apply the label low-verbal deaf—and by many who hear or read it—and who go on to infer deficits in language competence, cognitive skills, and intellect.
Rather than catalog the misuse of the term verbal and its compounds I would like to consider whether the term may have any legitimate application. Perhaps Sebeok is correct and we might better abandon the term nonverbal and so verbal too. Still it may be useful to look at what this distinction may have meant had there not been this confusion of tongues. The study of sign language has accomplished, I believe, what scientific investigation often does: It has found apparent opposites to be otherwise related.
In a large accomplishment, two such different creatures as dogs and dolphins were found alike in structure. In a small way, those who examine sign languages have found that verbal and nonverbal are not the opposites of Boolean algebra: It is not true that U = v + v̄ ('the universe of discourse consists entirely of verbal and nonverbal'). Behavior labeled with these terms may be related in more interesting ways than by simple negation.
One definition of verbal stems from classical studies, where one is, theoretically at least, bilingual in Latin and English and verbal is but the adjective form of the noun word. A word, then, is an idea that may be expressed internally in ways not yet well understood, or externally so as to be perceptible to others sharing the same convention of expression. In the preceding sentence, all that follows idea is restrictive; a word is not an idea only but an idea symbolized somehow, and so a sign in the full semiotic sense.
An idea may be expressed either in a vocal structure or in a gSign structure—we see in this the major contribution of sign language study. If word (of a language) and gSign (of a sign language) are equivalently expressions of, or signs for, ideas, the term verbal in its fullest sense should denote almost as much as does the term language. The abstraction language is larger, then, because the kinds of ideas we are discussing are related to each other in several kinds of ways. Some are nominal ideas, others verb-like (verbal in another sense of the word). Within this major classification are subclasses: Noun ideas contain, but do not consist entirely of, such features as potent, human, inanimate, uncountable; verb-ideas denote states of things, processes things undergo, or actions (Chafe, 1970). But besides the relation of idea to idea, words relate to words grammatically: State verbs appear in English sentences as adjectives placed after the pseudo-verb be; process verb ideas appear in English with a patient noun as subject; but verbs combining both process and action features require us to put the patient in English as object and the agent as subject. Thus it is indeterminate whether the term verbal should refer to semantic relations (between ideas) or to syntactic relations, both deep and superficial (between words).
When a word, however, is given vocal expression—not with one vocal sign for one denotatum of course but with the involvement of all kinds of submorphemic, doubly articulated structure—then it is simply a spoken word, and is called verbal in the sparse taxonomy of the legal profession. Semioticians may prefer to see verbal used as an adjective standing for anything pertaining to words, but the legal term verbal does preserve the distinction of 'vocally expressed' as opposed to 'written' or otherwise documented. Curiously, in Italian common law, a gSign, viz., the index finger held up to the temple by a conspirator in a kidnap attempt, and used as surrogate for the phrase 'kill him', has been treated as "verbal" evidence (Leonard Siger, pers. comm.).
The source of much difficulty in terminology and in thinking about these matters is this polysemanticity of verbal. A word is both idea and the idea's expression. In triadic Peircean semiotics, a word is at once a sign interpreter, a significant, and a vehicle; i.e., Peirce includes our terms idea and word but also the possessor of the idea and the utterer or perceiver of the word. What is most disturbing about those who bandy about the term verbal is not so much their ignorance of Peirce's semiotic as the vicious assumptions their usage conceals. By the use of the terms verbal, low-verbal, and nonverbal they reveal perhaps their unconscious fear of the primitive animal side of human nature (Mindel and Vernon, 1971:83). But it is just from this part of human inheritance that gSign and vocal activity arise. Those who use these terms thus make speech (and hearing) instead of intellect and language the measure of man, and the use of gSigns for language is in their eyes a badge of inadequacy.
To go back to a more satisfactory application, verbal pertains to words expressed either vocally or by gSigns. The essence of verbality is semiotic: The sign is denotatum, the sign is vehicle, the sign is interpreter—all in a special relationship normally called linguistic, though it is broader than normal linguistics concedes.
The term nonverbal then ought to describe whatever has nothing to do with ideas linked by grammar to spoken or to signed words. But the miscellaneous usage that Sebeok reviews in The Semiotic Web (1975) has no such logical consistency. Instead, for the most part, the best of current research into gestural, facial, bodily, and spatial behavior— though called nonverbal—is semiotic; i.e., it deals with ideas and their interrelation in nonvocal expression. Some of this research even deals directly with vocal expression—what the voice does as long as it is not the production of vowels and consonants, i.e., the segments of speech, now as closely guarded by syntacticists as they once were by phonemicists from identification with the phonology of sign languages.
To demonstrate that there are in sign languages of the deaf: (1) phonology as regular submorphemic structure, (2) grammar as the rules and transformations of syntactical patterns, and (3) semantics as a precise referential relation of the world of thought to the world of sound and sight—to demonstrate all this would take us in many directions.
One of these directions may even be backward. For some years the burden of proof was on students of sign language, and skeptics demanded "But is it really language?" Abbott (1975) puts the question less aggressively: "How highly encoded is a sign language?" and "In what parts of the system is there more abstract encoding, in what parts less?" The answers confirm the position taken here that words and gSigns of a deaf sign language are the same kind of signs. Abbott finds that sign language verbs are highly encoded; i.e., they contain several kinds of grammatical and semantic information in addition to their root meanings encoded in parallel, abstract, and economical ways. Questions like his about coding are more easily verified than bald inchoate questions about the nature of language.
In another part of sign language grammar a different degree of encoding is found, but even though person reference is less encoded than verb system, this part of a visually received language is interesting. Deaf signers designate self, person signed to, and others and do so singly, dually, trially, or plurally in the same basic way our whole species does, by looking, attending, pointing with hand or face or more saliently with finger pointing or eye gaze. Henderson (MS.) has identified nine semantic types and mapped the pronoun words, tokens, of a great many languages into the nine. Except for three-pronoun systems, he finds any number from two to nine linguistic tokens for the nine types. The usual case is for one of the tokens to represent two or more types; e.g., contemporary English you refers both to one addressee and to several or many. Users of American Sign Language express all nine types of person designation with distinct sign tokens (see also Friedman, 1975).
This matter, deixis, brings us back to another kind of distinction, sometimes referred to by the terms verbal and nonverbal, but for which the terms language and non-language (as Burling [1970] uses them) would be more exact. Human beings use language and are often the topics of language utterances, but they are not language. Even when a dyad or any larger group interacts, the roles of sender-of-signals, addressee-of-signals, and denoted-by-signals are non-language; but the terms being used by the sender-of-signals are language whether vocal signs or gSigns express them, if the sender is operating within the rules of a spoken or signed language. The same behavior, however (looking at addressee, gesturing self, or another), when used by a speaker who knows none of the forms and rules of a formal sign language, cannot be termed linguistic in the narrow sense of the term.
Another kind of encoding, commonly considered uniquely linguistic, is that expressed in dependent relative clauses further marked as restrictive. For a non-grammarian all this might be described as economy of effort or as packing information compactly. It differs in subtle ways from simple conjoining. Thus in the sequence of signs, EARLIER DOG CHASE CAT COME HOME, there are two propositions or sentences. Liddell (MS.) has found that if the head, face, and gaze of the signer are unmarked by difference from "normal" throughout the sequence's performance, the ASL sequence is equivalent to: 'The dog chased the cat and came home a while ago'. But if the signer tilts the head back a little, raises the eyebrows, and produces pronounced naso-labial folds by facial contraction while making the first four signs of the sequence, then returns all systems to "normal" for the last two signs, the equivalent in English is : 'The dog that chased the cat earlier came home'.
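Liddell's contrast can be rendered as a toy model. This is my own construction, not Liddell's notation: one and the same manual sequence pairs with two different English equivalents depending on whether the nonmanual marking (head tilt, raised brows, naso-labial folds) spans the first four signs.

```python
# The manual channel alone, as glossed in the text.
MANUAL = ("EARLIER", "DOG", "CHASE", "CAT", "COME", "HOME")

def interpret(manual, marked_span=None):
    """Return an English equivalent for the example sequence.

    marked_span is the (start, end) range of signs performed with the
    relative-clause facial marking, or None when head, face, and gaze
    stay 'normal' throughout.
    """
    if manual != MANUAL:
        raise ValueError("only the example sequence from the text is modeled")
    if marked_span is None:
        # Unmarked: two conjoined propositions.
        return "The dog chased the cat and came home a while ago."
    if marked_span == (0, 4):
        # Marking over EARLIER DOG CHASE CAT: restrictive relative clause.
        return "The dog that chased the cat earlier came home."
    raise ValueError("marking pattern not described in the source")
```

The point the sketch makes is that the disambiguating information rides on a channel parallel to the manual signs, exactly as a spoken language may disambiguate with intonation.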
This discovery by Liddell will startle those who doubt that a language not symbolized vocally can have such structure. It also marks an important advance in sign language studies. I. M. Schlesinger (1970) concluded that Israeli deaf signers could understand correctly from drawings the kind of relation expressed in sentences containing subject, verb, direct object, and indirect object, but he found no evidence of any system for signaling these relationships in their signing. It is clear that he was looking for some equivalent in manual behavior of the same kind of units expressed in speech. Students of sign languages who know that manual sign vehicles are only part of the whole gSign linguistic system did not find Liddell's breakthrough surprising. Baker and Padden (MS.) have found that utterances in spontaneous ASL communication may show identical manual features but contrast in meaning and structure according to facial behavior and gaze. The same kind of head and face activity Liddell describes is to be seen in a videotape made in the Linguistics Research Laboratory at Gallaudet College in 1972. In the four-year-old transcription it reads: ME REFUSE GO UNLESS HE GO. The transcription suggests that UNLESS is a sign, but the tape shows no manual activity at this point; instead there is a backward head movement. In the light of Liddell's discovery it seems much more probable that the head tilt after the first GO puts the whole clause meaning 'unless he goes too' into a restrictive dependent relation to the refusal. Recent inspection of the tape also shows that the head indeed is held back tilted to the end of the sequence and that with or a little after the energy peak of the GO sign at the end the head makes a sharp forward nod ('affirmative').
Gestural activity of the kind here described is neither nonverbal in one common usage of the term nor verbal in the sense of a directly coded substitution of visible action for normal English elements and structures. Restrictive relative clauses, or a contrast between relative dependency and coordinate conjunction (i.e., between embedding and conjoining), might with much more justice be termed nonverbal, because this contrast is a matter of logical relationship and so more abstract than any linguistic expression of it.
Another way in which nonvocal activity shows all the properties of language systems has been surveyed by Battison (MS.). He has found a class of words in ASL, i.e., signs, that have entered the language as loan words from English. Loan words generally make excellent indicators both of linguistic change and of cultural contact. Thus when a list of color terms in Kikuyu includes buru, the observer rightly guesses that before European contact there was no term for blue in the language. Also correct is the conclusion that buru is English blue passed through the Kikuyuans' phonological filtering system. But to take a word from a spoken language and make it a word of a sign language involves more change than that of replacing an l with an r. One channel for such lexical intercultural traffic is finger spelling. ASL signers use fully finger-spelled words to a greater or lesser extent in normal interaction. Abbreviated finger-spelled words came into ASL (and French SL before it) as borrowed signs simply by letting the hand in the configuration for the initial letter make some slight action; e.g., days of the week (except Sunday), color terms (except for black, white, and red: Stokoe, MS.), names of major cities.
But the several dozen loan signs Battison has found and described form a special class of ASL signs in formation, meaning, and functional distribution. Unlike finger spelling, which is used more in deaf-with-hearing interchanges, these loan signs are almost exclusively used in deaf-with-deaf interchanges. They are short words in English (usually needing only two, three, or four handshapes for complete finger spelling). They are signs of ASL belonging to a subclass formationally by virtue of using change of the active (dez: Stokoe, 1960) hand during performance. And some of them are highly polysemantic; in this they are the direct opposites of proper names, which are often reduced to initial-dez signs (Stokoe, Casterline, and Croneberg, 1976:291-93 [1965]; Meadow, 1975).
Three examples will be examined briefly here to show how meaning and form, or sign vehicle and sign denotatum, are related in this portion of a nonvocal but fully verbal system. The original words are do, all, and what. Battison finds that these have become naturalized in ASL as several signs, clearly related in formation but with distinctly different uses. Thus DO1 means 'what shall I do?'; DO2 means 'what are you going to do (about it)?'; DO3 signifies 'things to do', 'chores'; and DO4 commands 'do something—anything!' All likewise becomes signs inflected to fit different structures of ideas and signs: AL1 applies to the parts of a conceptual whole taken at once; AL2 stands for all the items in a list or series; AL3 designates all the people in a group facing the signer; and AL4 means 'always' in the restricted sense of 'all the time from a past reference point'.
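The one-to-many relation between an English source word and its naturalized ASL signs can be laid out as a small lexicon. The glosses below are taken from the examples just given; the dictionary arrangement and the function name are mine, for illustration only.

```python
# English source words mapped to the distinct ASL loan signs derived from them,
# with the glosses reported in Battison's examples above.
LOAN_SIGNS = {
    "do":  {"DO1": "what shall I do?",
            "DO2": "what are you going to do (about it)?",
            "DO3": "things to do; chores",
            "DO4": "do something, anything!"},
    "all": {"AL1": "the parts of a conceptual whole taken at once",
            "AL2": "all the items in a list or series",
            "AL3": "all the people in a group facing the signer",
            "AL4": "always (all the time from a past reference point)"},
}

def signs_for(english_word):
    """List the distinct ASL loan signs derived from one English source word."""
    return sorted(LOAN_SIGNS.get(english_word, {}))
```

Arranged this way, the data make the chapter's point visible at a glance: a single English word has been inflected, in ASL, into a paradigm of formationally related but semantically distinct signs.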
Battison reports (pers. comm.) that many of this class of loan signs in ASL are displacing signs long attested as being a part of the lexicon. One of the explanations he suggests is that the loan word AL is easier and less awkward in the making than ALL made with the two flat hands and a circle-touch action. Or it may be that because the older ALL is uninflectible it is being displaced by the one-handed AL, which can be varied in position, direction, and action to fit the idea of totality to different kinds of concepts—another example of parallel processing of information, i.e., encoding becoming highly abstract.
Another loan sign in the Battison study is WT 'what?' (In it a trace of the manual alphabet 'w' handshape appears as the open hand is closed to a 't' during horizontal movement.) The older sign WHAT (Stokoe, Casterline, and Croneberg, 1976:235 [1965]) uses two hands but may always have had an equivocal status between true ASL and manually encoded English. I can remember being puzzled in the past when signers, instead of using the WHAT sign I expected, used a sign usually glossed 'name' but having a range of meanings and uses quite unlike that of English name.
The work of sign language researchers, only skimmed over here, makes it clear that applying the term nonverbal to sign languages is not only pointless but counterproductive. Members of a culture whose language has five or six inflected forms that translate English all and as many for do and several formationally different terms for what in all its uses cannot any longer be said to lack the ability to think abstractly, to operate on a "high-verbal" level, or to be skillful readers of "nonverbal" cues but without "verbal" skill.
If a clearer terminology should result from the study of sign languages, then semiotics will be repaid for its fostering of the at times scarcely tolerated study. And if a more humane, less egocentric and ethnocentric view of a special minority, the deaf, can displace prejudice, then studying sign language can be justified. But there is another implication in this semiotic strand that most engages me and others. Untangled it may lead toward better understanding of the origins and evolution of language and human behavior.
In the Fall 1975 number of Sign Language Studies, Goldin-Meadow and Feldman (see also Goldin-Meadow, MS.) report on a discovery that young deaf children, without access to any sign language and unable to hear, lip-read, or produce any audible speech, nevertheless use gestures. At first these gestures have only a general referential relation to whole situations. Later they are specialized so that some express nominal ideas, others verbal ideas. Next, gestures of these two kinds are joined in two-sign structures. Still later, more complicated structures are made from a growing lexicon of gSigns. This behavior was not learned by the children from the mothers in the study; only some of the mothers ever used two gestures syntactically, and those who did so combined gestures several six-week intervals after these structures appeared in the children's behavior.
This finding by Goldin-Meadow and Feldman should be considered along with two others: (1) Normally hearing children begin to put words together in two-word syntactic structures at the end of their second year of life; and (2) deaf children in homes where deaf parents sign without restraint begin to put their ideas into two-sign syntactic structures at the end of their first year of life.
In the Winter 1975 issue of Sign Language Studies, Adam Kendon describes another surprising discovery—not about deaf sign language but about another kind of gSign activity. That speakers gesticulate when speaking is common knowledge, but Kendon's analysis of both vocal and gSign activity on motion picture recordings shows that language utterances normally have two outputs: speech and gesticulation. He finds, as well, that at those moments in a discourse when the vocal output is interrupted or suspended, the idea structure continues to be expressed in gSigns. Gesticulation, then, is not mere embellishment of spoken discourse with illustrative gestures but is the coeval or elder partner of speech in the work of expression.
Kendon reports this research in support of the gestural theory of language origins most fully articulated in the work of Hewes (1973, 1974). What Kendon has found perfectly matches the observations of deaf infants' language development in speaking and in signing households. To state the point simply: Communication in the human individual begins with gSigns, or gSigns in a behavioral matrix, not yet differentiated from touch. As well equipped as all human individuals are with a genetic propensity for language—even perhaps with an in-born ability to distinguish language sounds (Eimas, 1975)—they begin communicating in a less specialized way than that of adult language, using gSigns at first. Only later do (hearing) infants master the long-to-learn art of phonology and still later the whole grammar of the language adults around them use. But it is not through the study of development in the usual and the special individual only that we can see something like the phylogenetic development of gSign into speech-sign recapitulated. The gSign capabilities of chimpanzees and gorillas will tell us something more about languages and sign languages and about what is verbal and what nonverbal.
Prophets may be able to see ahead better than most because they are quicker to see what is presently around us all. Hewes suggested the use of sign language in raising a chimpanzee at the time when the Hayes experiment with Vicki was concentrating on teaching speech signs. The foresight of the organizer of this semiotic program is well known, but I would like to suggest that Sebeok's prophetic powers served semiotics well when he encouraged long ago the research of West (at Indiana University) in Plains Indians' sign language and of Stokoe in deaf sign language.
REFERENCES
Abbott, Clifford. 1975. "Encodedness and Sign Language." Sign Language Studies 7:109-20.
Baker, Charlotte. 1975. "Regulators and Turn-Taking in American Sign Language Discourse." MS. Working paper, Department of Linguistics, University of California, Berkeley.
Baker, Charlotte, and Padden, Carol. 1976. "Facial Features in ASL Conversation." MS. Working paper, Linguistics Research Laboratory, Gallaudet College.
Battison, Robbin. 1974. "Phonological Deletion in American Sign Language." Sign Language Studies 5:1-19.
_____. MS. Ph.D. dissertation, University of California, San Diego.
Battison, Robbin, and Jordan, I. King. 1976. "Cross-Cultural Communication with Foreign Signers: Fact and Fancy." Sign Language Studies 10:53-68.
Battison, Robbin; Markowicz, Harry; and Woodward, James. 1975. "A Good Rule of Thumb: Variable Phonology in ASL." In Analyzing Variation in Language, R. Fasold and R. Shuy, eds. Washington, D.C.: Georgetown University Press, pp. 291-302.
Burling, Robbins. 1970. Man’s Many Voices. New York: Holt, Rinehart & Winston.
Chafe, Wallace L. 1970. Meaning and the Structure of Language. Chicago: University of Chicago Press.
Charrow, Veda. 1975. "A Psycholinguistic Analysis of 'Deaf English'." Sign Language Studies 7:35-46.
Eimas, Peter D. 1975. "Speech Perception in Early Infancy." In Infant Perception, L. B. Cohen and P. Salapatek, eds. New York: Academic Press.
Friedman, Lynn A. 1975. "Space, Time, and Person Reference in American Sign Language." Language 51:940-61.
Frishberg, Nancy. 1975. "Arbitrariness and Iconicity: Historical Change in American Sign Language." Language 51:696-719.
Goldin-Meadow, Susan. MS. "The Representation of Semantic Relations to a Manual Language Created by Deaf Children of Hearing Parents." Ph.D. dissertation, University of Pennsylvania.
Goldin-Meadow, Susan, and Feldman, Heidi. 1975. "The Creation of a Communication System: A Study of Deaf Children of Hearing Parents." Sign Language Studies 8:225-34.
Greenlee, Douglas. 1973. Peirce's Concept of Sign. Approaches to Semiotics, T. A. Sebeok, ed., paperback series 5. The Hague: Mouton.
Henderson, T. S. T. MS. "Pronoun Structure, Logical Terms, and Language Types." Working paper, Department of Linguistics, Ottawa University.
Hewes, Gordon W. 1973. "Primate Communication and the Gestural Origin of Language." Current Anthropology 14:5-24.
_____. 1974. "Language in Early Hominids." In Language Origins, R. Wescott, G. W. Hewes, and W. C. Stokoe, eds. Silver Spring, Md.: Linstok Press.
Kendon, Adam. 1975. "Gesticulation, Speech, and the Gesture Theory of Language Origins." Sign Language Studies 9:349-73.
Liddell, Scott. MS. "Dependent Clauses in ASL." Working paper, Salk Institute, California.
Meadow, Kathryn B. MS. "Name Signs as Identity Symbols in the Deaf Community." Paper presented at the "Culture and Language in the Deaf Community" symposium, American Anthropological Association, Mexico City, 1974.
Mindel, Eugene, and Vernon, McCay. 1971. They Grow in Silence. Silver Spring, Md.: National Association of the Deaf.
Sallagoity, Pierre. 1975. "The Sign Language of Southern France." Sign Language Studies 7:181-202.
Schein, Jerome D., and Delk, Marcus. 1974. The Deaf Population of the United States. Silver Spring, Md.: National Association of the Deaf.
Schlesinger, I. M. 1970. "The Grammar of Sign Language and the Problems of Language Universals." In Biological and Social Factors in Psycholinguistics, J. Morton, ed. Urbana: University of Illinois Press.
Sebeok, Thomas A. 1975. "The Semiotic Web: A Chronicle of Prejudices." Bulletin of Literary Semiotics 2:1-63.
Stokoe, William C. 1960. "Sign Language Structure." Studies in Linguistics, Occasional Papers 8. Silver Spring, Md.: Linstok Press.
_____. 1972. Semiotics and Human Sign Languages. Approaches to Semiotics, vol. 21, T. A. Sebeok, ed. The Hague: Mouton.
_____. 1975a. "Classification and Description of Sign Languages." In Current Trends in Linguistics, vol. 12, T. A. Sebeok, ed. The Hague: Mouton, pp. 345-71.
_____. 1975b. "The Shape of Soundless Language." In The Role of Speech in Language, James F. Kavanagh and James E. Cutting, eds. Cambridge: MIT Press.
_____. MS. "Color Terms in American Sign Language."
Stokoe, William C.; Casterline, D.; and Croneberg, C. 1976 [1965]. A Dictionary of American Sign Language on Linguistic Principles, 2d ed. Silver Spring, Md.: Linstok Press.
Woodward, James C., Jr. 1974. "Implicational Variation in American Sign Language: Negative Incorporation." Sign Language Studies 5:20-30.