Semiotics never tells you what to think, only how to think and how to probe beneath the surface.
Solomon (1988: 13)
The theoretical tools that semiotics makes available for probing cultural systems do not serve primarily to produce quantitative data or general models of human group behavior. Rather, they are useful for sketching a detailed and revealing portrait of Homo culturalis as a meaning-seeking creature. These tools are particularly effective for unraveling the tribal roots of modern-day signifying orders. Indeed, the two terms, tribe and culture, are essentially synonymous from a semiotic perspective. As anthropologist Desmond Morris (1969: 5) has aptly put it, even in modern-day complex societies, the human being refuses “to lose its tribe.” A major focus of semiotics is thus to sift out the tribal residues from signifying orders, distilling from them their universal properties. To the semiotician, the foods people eat, the facial decorations they put on, the words they invent, the objects they make and use, the myths they tell, the rites they perform, the sexual practices they engage in, the arts they appreciate, the stories they tell are all rooted in basic properties of signification.
Homo culturalis is a direct descendant of Homo signans, “the signer.” The Stone Age sketches on cave walls of jumping and dying animals give unequivocal testimony to how truly advanced and sophisticated Homo signans was as a “representer” or “modeler” of the world. Indeed, the distinguishing characteristic of the human species has always been its remarkable ability to represent the world in the form of pictures, vocal sounds, hand gestures, and the like. This ability is the reason why, over time, our species has come to be regulated not by force of natural selection, but by “force of history,” i.e. by force of the accumulated meanings that previous generations have captured and passed on in the form of signs. As opposed to Nature, culture is everywhere meaningful, everywhere the result of the innate human need to seek meaning and order in existence.
General or theoretical semiotics is the science that studies signs and how they produce meanings. It seeks to unravel the nature, origin, and evolution of signs. If there is any one finding of semiotic research that stands out from all others it is that, despite great diversity in the world’s sign systems, the difference is more one of detail than of substance. All sign systems continue to serve the original functions for which they were designed—to allow humans to represent the world in some meaningful way—revealing strikingly similar properties across cultures. Cultural semiotics is the science that applies sign theory to the investigation of signifying orders. Since the middle part of the twentieth century, it has grown into a truly enormous field of study. It now includes, among other things, the study of bodily communication, aesthetics, rhetoric, visual communication, media, myths, narratives, art forms, language, artifacts, gesture, eye contact, clothing, advertising, cuisine, animal communication, rituals—in a phrase, anything that has been invented by human beings to produce meaning.
The purpose of this chapter is to sketch a general picture of what cultural semiotics is and purports to do. We will start by tracing a historical outline of the study of signs, as a background to current practices in theoretical semiotics, taking a digression to assess the goals and methodology of the so-called cognitive science enterprise. We have decided to do this because since the mid-1980s this science has become highly influential in shaping views about human nature and culture; consequently, it cannot be ignored in a text like this one. We will then outline the main disciplinary sources that cultural semiotics draws upon in order to carry out its particular mode of investigation. Finally, we will discuss basic principles of cultural semiotic analysis.
The modern-day practice of semiotics traces its origins to the writings of the Swiss linguist Ferdinand de Saussure and the American philosopher Charles S. Peirce. But interest in signs reaches back several millennia. The first definition of sign as a physical symptom came from Hippocrates (460–377 BC), the founder of Western medical science, who established semeiotics (from semeion “mark, sign”) as a branch of medicine. The physician Galen of Pergamum (139–199 AD) further entrenched semeiotics into medical practice some five centuries after Hippocrates, a tradition that continues to this day in various European countries: e.g. in Italy the study of symptoms within medicine is still called semeiotica.
The physician’s primary task, Hippocrates claimed, was to unravel what a symptom stands for—a symptom being, in effect, a semeion that stands for something other than itself. For example, a dark bruise, a rash, or a sore throat might stand respectively for a broken finger, a skin allergy, a cold. The medical problem is, of course, to infer what that something is.
Medical diagnosis is, in effect, basic semiotic science, since it is grounded on the principle that a physical symptom stands not for itself but for an anomalous state or condition. Substituting [A] for the something in the above illustration and [B] for the something else, a symptom can be defined formally as the relation [A stands for B]. In the remainder of this manual, this formula will be abbreviated to [A ≡ B].
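The [A ≡ B] relation described above can be sketched computationally as a simple mapping from signs to the referents they stand for. The following minimal Python sketch uses the symptom–condition pairs from the text as illustrative data (the table and function names are our own, purely didactic, and obviously not a diagnostic tool):

```python
# A minimal sketch of the [A stands for B] relation: each sign [A]
# is mapped to the referent [B] it stands for. The pairs are the
# illustrative examples from the text, not clinical knowledge.
symptom_stands_for = {
    "dark bruise": "broken finger",
    "rash": "skin allergy",
    "sore throat": "cold",
}

def interpret(sign: str) -> str:
    """Infer the something else [B] that a given sign [A] stands for."""
    return symptom_stands_for.get(sign, "unknown condition")

print(interpret("rash"))  # skin allergy
```

The point of the sketch is only that interpretation is an inference from the perceptible [A] to the non-perceptible [B]; the mapping itself is what a culture (here, medical training) supplies.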
The study of how “things stand for other things” became the prerogative of philosophers around the time of Plato (c. 428-c. 347 BC), who suggested that words, for example, were deceptive “things” because they “stood for” reality not directly, but as idealized mental approximations of it. As an example of what Plato meant, consider the concept to which the word circle calls attention. Circles do not really exist in Nature. They are ideal forms: i.e. when geometers define a circle as a series of points equidistant from a given point, they are using idealized logical thinking. They are not referring to actual physical points. So, Plato argued, an object existing in the physical world is called a circle insofar as it resembles the ideal form as defined by geometers. The circles that people claim to see in Nature are approximations of this form. Thus, the meaning implied by the word circle is unlikely to have been pried out of Nature directly.
Plato’s illustrious pupil Aristotle (384–322 BC) accepted his mentor’s notion of ideal forms, but he also argued that these were discoverable from observing the actual things that exemplified or “contained” them in the world. Together with the Stoic philosophers (Stoicism was a Greek school of philosophy founded by Zeno around 308 BC), Aristotle took it upon himself to investigate the “stands for” phenomenon in human representation more closely, laying down a theory of the sign that has remained basic to this day. He defined the sign as consisting of three dimensions: (1) the physical part of the sign itself (e.g. the sounds that make up a word such as red); (2) the referent to which it calls attention (a certain category of color); and (3) its evocation of a meaning (what the referent entails psychologically and socially). Aristotle added that these three dimensions were simultaneous in the sign. And, as Aristotle correctly claimed, it is indeed impossible to think of a word like red (a vocal sign made up of the sounds r-e-d), without thinking at the same time of the color category to which it refers (the referent), and without experiencing the personal and social meaning(s) that such a referent entails. In philosophical theories of the sign ever since Aristotle, this simultaneity has been modeled as a triangular relation.
The next major step forward in the study of signs was the one taken by St. Augustine (354–430 AD), the philosopher and religious thinker who was among the first to distinguish clearly between natural (nonarbitrary) and conventional (arbitrary) signs, and to espouse the view that there was an inbuilt interpretive component to the whole “stands for” process. A natural sign is one that was created originally by someone to simulate some perceivable property of its referent (e.g. the word chirp was obviously fashioned to imitate the sound made by a bird); a conventional sign, on the other hand, is one that makes no apparent allusion to any perceivable sensory feature of its referent. St. Augustine’s notion of an interpretive component was consistent with the hermeneutic tradition established by Clement of Alexandria (150?–215? AD), the Greek theologian and early Father of the Church. Hermeneutics is the study of how to interpret ancient texts, especially those of a religious or mythical nature. Clement established the method of ascertaining, as far as possible, the meaning that a Biblical writer intended on the basis of linguistic considerations and relevant sources. Clement also maintained that the interpreter should not ignore the fact that the original meaning of the text developed in the course of history, and that the act of interpretation was bound to be influenced by cultural factors.
St. Augustine’s views lay largely forgotten until the eleventh century, when interest in the nature of human representation was rekindled by Arab scholars who translated the works of Plato, Aristotle, and other Greek thinkers. The result was the movement known as Scholasticism. Using Greek classical ideas as their intellectual framework, the Scholastics wanted to show that the truth of religious beliefs existed independently of the signs used to represent them. But within this movement there were some—the nominalists—who argued that “truth” was a matter of subjective opinion and that signs captured, at best, only illusory and highly variable human versions of truth. The French theologian Peter Abelard (1079-c. 1142) proposed an interesting compromise to the debate, suggesting that the “truth” that a sign purportedly captured existed in a particular object as an observable property of the object itself, and outside it as an ideal concept within the mind. The “truth” of the matter, therefore, was somewhere in between.
No doubt the greatest intellectual figure of the medieval era was St. Thomas Aquinas (1225–1274), who combined Aristotelian logic with the Augustinian theory of representation in a comprehensive system of thought that became the most acclaimed theory of knowledge in Roman Catholicism. Aquinas argued that signs allowed humans to reason about scientific and philosophical truths rather effectively, since they were derived from sense impressions, but that the tenets of religion were beyond sensory and rational comprehension and, therefore, had to be accepted on faith. Medieval perspectives on signs culminated with the iconoclastic ideas of John Duns Scotus (c. 1266–1308) and William of Ockham (c. 1285-c. 1349), both of whom stressed that the Platonic ideal forms were merely the result of signs referring to other signs, rather than to actual things.
Renaissance and Post-Renaissance Views
After the Florentine intellectual Marsilio Ficino (1433–1499) translated Plato’s writings into Latin in the fifteenth century, a new freer mood of debate emerged in Western academies and in society at large. Shortly thereafter, the discovery of heliocentricity by the Polish astronomer Nicolaus Copernicus (1473–1543)—the theory that the sun is at rest near the center of the universe, and that the earth, spinning on its axis once daily, revolves annually around the sun—along with the scientific work of the English philosopher and statesman Francis Bacon (1561–1626) and the Italian physicist and astronomer Galileo Galilei (1564–1642), established reason and science, not religious faith, as the primary standards of knowledge-making. By the seventeenth and eighteenth centuries philosophers like Thomas Hobbes (1588–1679), René Descartes (1596–1650), Benedict Spinoza (1632–1677), Gottfried Wilhelm Leibniz (1646–1716), and David Hume (1711–1776) argued that even the properties of the human mind could, and should, be studied as objectively and as rationally as physical phenomena, foreshadowing the birth of scientific psychology in the nineteenth century. The eccentric Italian philosopher Giambattista Vico (1688–1744) was among the few who went against this grain, proposing instead a method for unraveling the nature of mind which, even today, would be considered unorthodox at best and pseudo-scientific at worst. But Vico’s method is very much in line with current semiotic thinking, since it entailed exploring the underlying meanings of ancient myths and symbols, and studying the signifying properties of metaphor as the basis for understanding human cognitive processes (Danesi 1993).
Like Vico, the British philosopher John Locke (1632–1704) also attacked the prevailing belief of his times that mental states could be studied on their own, independent of sensory experience. For Locke, all information about the physical world came through the senses and all thoughts could be traced to the sensory information on which they were based. A little later, the Irish philosopher George Berkeley (1685–1753) even cast doubts on the human mind’s ability to know the world at all as it really is, while the German philosopher Immanuel Kant (1724–1804) speculated that the mind was predisposed by its nature to impose form and order on all its experiences, thus creating, ipso facto, its own particular brand of sense-based knowledge. Kant’s views laid the groundwork for Romantic philosophers like Georg Wilhelm Friedrich Hegel (1770–1831), Friedrich Nietzsche (1844–1900), Edmund Husserl (1859–1938), and, later, Martin Heidegger (1889–1976) to put forward the view that reality itself was a figment of the human imagination, created by the mind to help it cope with the impulses of human instincts, passions, and desires.
It was John Locke who introduced the formal study of signs into philosophy in his Essay Concerning Human Understanding (1690), anticipating that it would allow philosophers to understand the interconnection between representation and knowledge. But the task he laid out, of discovering the properties of the sign, remained virtually unnoticed until the Swiss linguist Ferdinand de Saussure (1857–1913) and the American philosopher Charles S. Peirce (1839–1914) took it upon themselves to provide a scientific framework that made it possible to envision even more than what Locke had hoped for—namely, an autonomous field of inquiry centered on the sign. The subsequent development of semiotics as a distinct scientific domain, with its own methodology, theoretical apparatus, and corpus of findings, is due to the efforts of such twentieth-century scholars as Charles Morris (1901–1979), Roman Jakobson (1896–1982), Roland Barthes (1915–1980), A. J. Greimas (1917–1992), Thomas A. Sebeok (1920–), and Umberto Eco (1932–).
A large part of the increase in the popularity of semiotics in the late twentieth century can be traced back to the publication of several highly popularized critiques of Western culture by the social critic Roland Barthes (1915–1980) in the 1950s and 1960s. These put on display the power of semiotics to demystify the persuasive consumerist rhetoric that came forward at mid-century to uphold and glorify the ascent onto the world stage of a global pop culture. The popularity of semiotics increased even further with the publication in 1983 of a bestselling medieval detective novel, The Name of the Rose, written by one of the most distinguished practitioners of semiotics, Umberto Eco (1932–). Incidentally, Eco (1976: 7) has defined semiotics as “the discipline studying everything which can be used in order to lie,” because if “something cannot be used to tell a lie, conversely it cannot be used to tell the truth; it cannot, in fact, be used to tell at all.” This is, despite its apparent facetiousness, a rather insightful characterization of semiotics, since it implies that we have the capacity to represent the world in any way we desire through signs, even in misleading and deceitful ways. This capacity for artifice is a powerful one indeed. It allows us to conjure up nonexistent referents, or refer to the world without any back-up empirical proof that what we are saying is true. As the linguist Aitchison (1996: 21) aptly puts it, the amazing thing about language is not that it allows us to represent reality as it is, but rather that it affords us “the ability to talk convincingly about something entirely fictitious, with no back-up circumstantial evidence.” Arguably, culture itself is one “big lie,” given that it constitutes a radical break from our biological heritage—a break that has forced us to live mainly by our wits.
As Prometheus predicted in Aeschylus’ (525?-456 BC) great ancient drama Prometheus Bound, the capacity for lying has ensured that “rulers would conquer and control not by strength, nor by violence, but by cunning.”
The primary goal of theoretical semiotics is to document and theorize about a remarkable capacity of the human brain—the capacity to produce, understand, and make use of signs. The [A ≡ B] formula we used above to represent how a symptom is deciphered is, in effect, the general formula for the sign. The [B] part is known technically as the referent. There are two kinds of referents that signs capture, concrete and abstract: (1) a concrete referent, like the physical color designated by the word red, is something existing in reality or in real experience and is, normally, perceptible to the senses; (2) an abstract referent, like the notion represented by the word democracy, is something that is conceptual, i.e. something formed within the mind. Now, as the semiotician Charles Morris (1938, 1946) suggested, signs are powerful mental tools precisely because they allow human beings to “carry the world around in their heads,” so to speak. This is known psychologically as displacement, the ability of the human mind to conjure up the things to which signs refer even though these might not be physically present for the senses to perceive and identify. The displacement property of signs has endowed the human species with the ability to reflect upon referents at any time and in any situation whatsoever within “mind-space.”
A sign can be defined formally as anything—a word, a gesture, etc.—that stands for something other than itself (the referent). The word dog, for instance, is a sign because it does not stand for the sounds d-o-g that compose it, but rather for a domesticated carnivorous mammal (Canis familiaris) related to the foxes and wolves and raised in a wide variety of breeds.
The ability to make and use signs makes it possible to know and to remember what is known. As the great Russian psychologist L. S. Vygotsky (1978: 51) aptly remarked, the “very essence of human memory is that human beings actively remember with the help of signs.” The overall goal of theoretical semiotics is, arguably, to unravel how signs allow human beings to know. The meanings of signs are the data that semioticians collect, and meanings are what they attempt to understand. Indeed, the three basic questions that guide all semiotic investigation are (1) What does something mean? (2) How does it mean what it means? (3) Why does it mean what it means?
To get a firmer grasp of how theoretical semiotic method unfolds, consider the word red. This is easily recognizable as a sign because it does not stand for itself, the sounds r-e-d that compose it, but rather for a color gradation of approximately 630 to 750 nanometers on the long-wave end of the visible spectrum. This is the sign’s referent, namely, a category of color that is distinct from other categories that are labeled yellow, blue, green, etc. Together, all such referents compose a particular domain of reference that allows speakers of English to talk and think about the physical phenomenon of color.
Knowledge of color entails knowledge of this domain. Clearly, this kind of knowledge is culture-specific. The very same color category to which the word red calls attention could have been represented differently in another culture: e.g. two words could have been used which, together, would cover the category to which red calls attention; or the referent captured by red could have been included within a larger category of color. Now, not only does the sign red make it convenient to refer to a specific color category in a displaced way, but it also conditions its users to anticipate its presence in other domains of reference. In effect, it becomes a productive resource for further meaning-making activities: i.e. it can be used to create new referents, as can be seen in expressions such as the red-light district, red flag, etc.
This cursory “semiotic analysis” of red illustrates, in microcosm, how semiotic method is conducted. It also shows, again in microcosm, that Homo culturalis is by nature a sign-maker and a sign-user, and because of this, for at least a hundred thousand years h/er evolution has not been regulated by force of natural selection alone, but by “force of cultural history,” i.e. by force of the accumulated meanings that previous generations have captured in the form of signs and passed on in cultural contexts. Signs are the result of the need that human beings the world over have to understand the world around them in conceptual ways. That is the central characteristic of the human species, which is called, not uncoincidentally, the “sapient” species (Homo sapiens).
As mentioned, the establishment of semiotics as an autonomous science was made possible by the theories of the sign put forward by Saussure and Peirce at the threshold of the twentieth century. Semiotics was fashioned by these two thinkers as a structuralist science, i.e. as a mode of inquiry aiming to understand the sensory, emotional, and intellectual structures that undergird both the production and the interpretation of signs. The premise that guides structuralist semiotics is, in fact, that the recurring patterns that characterize sign systems are reflective of innate structures in the sensory, emotional, and intellectual composition of the human body and the human psyche. This would explain why the forms of expression that humans create and to which they respond instinctively the world over are so meaningful and so easily understandable across cultures.
The linguist Ferdinand de Saussure (1857–1913) was born in Geneva. He attended science classes for a year at the University of Geneva before turning to language studies at the University of Leipzig in 1876. Specializing in philology, the study of language history, he published his only book when he was still a student, Mémoire sur le système primitif des voyelles dans les langues indo-européennes (1879), an important work on the vowel system of Proto-Indo-European, considered the parent language from which the modern Indo-European languages have descended. Saussure taught at the École des Hautes Études in Paris from 1881 to 1891. A while later he became a professor of Sanskrit and comparative grammar at the University of Geneva.
Although Saussure never wrote another book, his teaching proved highly influential. After his death, two of his students compiled his lecture notes and other materials into a seminal work, Cours de linguistique générale (1916), translated into English in 1959 as Course in General Linguistics. The book established a series of theoretical distinctions and notions that have become basic to the scientific study of language. And, as will be discussed in the next chapter (§3.3), his definition of the sign in that book has become a basic methodological blueprint for the investigation of signs, communication systems, and culture. Incidentally, Saussure used the term semiology, rather than semiotics, to refer to the scientific study of signs. He coined this term in obvious analogy to other scientific disciplines with names ending in the suffix -logy, which derives from the Greek term for “word,” logos. Saussure’s term reflected, in fact, his belief in the supremacy of language among representational systems. Nowadays, Hippocrates’ original term, semeiotics, more commonly spelled semiotics, revived by the philosopher John Locke and adopted by Charles Peirce and Charles Morris, is the preferred one.
Saussure also remarked in the Cours that any true science of signs should include both synchronic and diachronic branches of investigation. The former would study signs at a given point in time, normally the present, and the latter how they change, in form and meaning, across time. As a simple case-in-point of what diachronic analysis entails, consider the word person. Recall from above that one of the questions that semioticians ask in carrying out their research is why a sign means what it means. Looking for an answer to the question of why this word means what it means today involves probing its origin and history. In the ancient theater, the Latin word persona signified a “mask” worn by an actor on stage. Subsequently, it came to have the meaning of “the character of the mask-wearer.” This meaning still exists in the theater term dramatis personae “cast of characters” (literally “the persons of the drama”). Eventually, the word came to have its present meaning of “human being.” This diachronic analysis of person also provides insight into why we continue to this day to use theatrical expressions such as to play a role in life, to interact, to act out feelings, to put on a proper face (mask), and so on to describe the activities and behaviors of “persons.”
The American philosopher Charles Sanders Peirce (1839–1914) was born in Cambridge, Massachusetts, and educated at Harvard University. Between 1864 and 1884 he lectured intermittently on logic and philosophy at Johns Hopkins and Harvard Universities. In 1867 he turned his attention to the system of logic created by the British mathematician George Boole (1815–1864), and he worked on extending Boolean logic until 1885. Peirce became known during his lifetime primarily for his philosophical system, called pragmatism, according to which no object or concept possesses inherent validity or importance. The significance of something, he claimed, lies only in the practical effects resulting from its use or application. The “truth” of an idea, therefore, can be measured by the empirical investigation of its usefulness. Peirce’s pragmatism was incorporated by William James (1842–1910) into psychology and by John Dewey (1859–1952) into education, profoundly influencing modern-day psychological theories and educational practices. As we shall see in the next chapter (§3.3), Peirce provided a fundamental typology of signs that, as will become evident throughout this book, can be applied profitably to the study of signifying orders.
Semiotics is often confused with the study of communication systems, a domain that falls instead under the rubric of communication science. Although the two share much of the same theoretical and methodological territory, communication science focuses more on the technical study of how messages are transmitted (vocally, electronically, etc.), whereas semiotics pays more attention to what messages mean and to how they are put together.
Among the first to study the technical features of communication systems was the American electrical engineer Claude E. Shannon (1948), who became famous for developing the mathematical laws governing the transmission, reception, and processing of information. Shannon also introduced the following key terms into the study of communication: sender, receiver, encoding, decoding, medium, information content, channel, noise, redundancy, and feedback.
In Shannon’s model of communication, message transmission occurs between a sender (such as a person speaking) who encodes a message—i.e. uses a code such as language to construct it—and a receiver who has the capacity to decode the message—i.e. to use the same code to understand what the message means. To get the message across to the receiver, the sender must use some means or device to convert it into a physical form in some medium—the voice, books, letters, telephones, computers, etc. A verbal message, for instance, can involve natural transmission, if it is articulated with the vocal organs; or else it can be transmitted by means of markings on a piece of paper through the artifactual medium of writing; and it can also be converted into radio or television signals for mechanical (electromagnetic) transmission.
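The encode/transmit/decode cycle just described can be sketched as a toy program. The two-symbol code table and the function names below are our own didactic inventions, not Shannon’s; the point is only that sender and receiver must share the same code for the message to survive transmission:

```python
# A toy sketch of Shannon's sender/receiver cycle: a shared code
# maps words to signals; the sender encodes, the receiver decodes.
CODE = {"yes": "0", "no": "1"}  # shared code (hypothetical)
DECODE = {signal: word for word, signal in CODE.items()}

def encode(message):
    """Sender side: convert each word into its coded signal."""
    return [CODE[word] for word in message]

def decode(signals):
    """Receiver side: recover the words from the received signals."""
    return [DECODE[s] for s in signals]

transmitted = encode(["yes", "no", "yes"])  # ["0", "1", "0"]
received = decode(transmitted)              # ["yes", "no", "yes"]
```

Note that nothing in the sketch touches what the words mean; as the distinction drawn above suggests, that is precisely where communication science stops and semiotics begins.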
Shannon also introduced the key notion of information content (I) as a measurable mathematical quantity. With this term he did not intend to refer to the meaning of the transmitted message, but rather to the probability that it will be received from a set of possible messages. The less probable a message is, the higher its information content. On the other hand, if a message is expected with 100% certainty, its information content is I = 0. For example, the message “the coin will land either heads or tails” carries I = 0, because we already know it to be true 100% of the time. There is no other possible outcome. So, the information carried by such a message is nil. However, the two separate outcomes “heads” and “tails” are themselves equally probable. In order to relate information content in this case to probability, Shannon devised a simple formula, I = log₂(1/p), in which p is the probability of a message being received and log₂(1/p) is the logarithm of 1/p to the base 2. Log₂ of a given number is the exponent that must be assigned to the number 2 in order to obtain the given number: e.g. log₂ of 8 = 3, because 2³ = 8; log₂ of 16 = 4, because 2⁴ = 16; and so on. Using Shannon’s formula to calculate the information content of a message received with complete certainty (p = 1) will, as expected, yield the value of 0, because 2⁰ = 1. Shannon used binary digits, 0 and 1, to carry out his calculations because the mechanical communications systems he was concerned with worked in binary ways—e.g. open vs. closed or on vs. off circuits. So, if “heads” is represented by 0 and “tails” by 1, the outcome of a coin flip can be represented as either 0 or 1. For example, if a coin is tossed three times in a row, the eight equally probable outcomes (= messages) that could ensue can be represented with binary digits as follows:
- 000 (= three heads)
- 001 (= two heads in a row, a tail)
- 010 (= a head, a tail, a head)
- 011 (= a head, two tails)
- 100 (= a tail, two heads)
- 101 (= a tail, a head, a tail)
- 110 (= two tails in a row, a head)
- 111 (= three tails)
Information content is measured in terms of binary digits, or bits for short. Any outcome with a probability of 1/2 carries one bit of information; any outcome with a probability of 1/4 carries two bits of information; and so on. In the above list of outcomes, the probability of one outcome, say 000 or 111, is 1/8 and thus carries three bits of information.
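Shannon’s formula and the bit counts just given can be checked directly. A minimal Python sketch (the function name is ours):

```python
import math

def information_content(p: float) -> float:
    """Shannon's I = log2(1/p): bits of information carried by a
    message that is received with probability p."""
    return math.log2(1 / p)

print(information_content(1.0))    # certain message: 0.0 bits
print(information_content(0.5))    # one of two outcomes: 1.0 bit
print(information_content(1 / 8))  # one of eight outcomes: 3.0 bits
```

The last call corresponds to any single outcome of the three-toss list above, such as 000 or 111, each of which has probability 1/8 and therefore carries three bits.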
Shannon defined channel as the physical system or phenomenon carrying the transmitted signal: e.g. vocally-produced sound waves are transmitted through air or through an electronic channel (e.g. radio). The term noise refers to some interfering element (physical or psychological) in the channel that distorts or partially effaces a message. In radio and telephone transmissions, noise is electronic static; in voice transmissions, it can vary from any interfering exterior sound (physical noise) to the speaker’s lapses of memory (psychological noise). However, as Shannon demonstrated, communication systems have the feature of redundancy built into them for counteracting noise. This is the predictability that certain units or features of information will occur in a given type of message. For instance, in verbal communication the high predictability of certain words in many sentences (“Roses are red, violets are...”), the patterned repetition of elements (“Yes, yes, I’ll do it; yes, I will”) are all redundant features of language which increase the likelihood that a verbal message will get decoded successfully. Finally, Shannon used the term feedback to refer to the fact that senders have the capacity to monitor the messages they transmit and modify them to enhance their decodability. Feedback in human verbal communication includes, for instance, detecting reactions (facial expressions, bodily movements, etc.) in the receiver that indicate the effect of the message on h/er.
Developed originally as a theoretical framework for improving the efficiency of telecommunication systems, Shannon’s model has come to be known as the “bull’s-eye” model of communication, because it essentially depicts a sender aiming a message at a receiver as if at a bull’s-eye target. Because this model provided a comprehensive framework for representing information, independently of its specific content or meaning and of the devices that carried it, it was appropriated in the 1950s and 1960s by linguists and psychologists as a general framework for investigating human communication systems. Although many semioticians have been openly critical of the view that human communication works according to the same basic mathematical laws as mechanical information systems, the general outline and notions (encoding, decoding, etc.) of the bull’s-eye model have proved to be highly convenient for describing how communication unfolds between human beings.
In the mid-1970s, a movement known as cognitive science came to the forefront in North American academies as a promising and exciting new field for studying human consciousness, fashioned primarily from insights and research techniques derived from the domain of artificial intelligence (AI) research. Since it appears to have many of the same methodological features as semiotics, it merits discussion here.
Despite its AI orientation, the roots of the cognitive science enterprise actually lie in the establishment of psychology as a scientific mode of inquiry. When Wilhelm Wundt (1832–1920) founded the first laboratory of experimental psychology in 1879 in Leipzig, he laid the groundwork for establishing a new scientific discipline of the mind, separate from philosophy, which he claimed would have the capacity to discover the “laws of mind” through a method of controlled experimentation with human subjects. This became the epistemological rationale for most of the experimental work in psychology conducted throughout the first five decades of the twentieth century. By the late 1960s, however, a new cadre of psychologists abandoned the experimental approach, seeking instead parallels between the functions of the human brain and those of computer programs. Computer terms like “storage,” “retrieval,” “processing,” etc. became part of the emerging new lexicon of what came to be known as the cognitive movement in psychology, remaining, to this day, basic expressions for describing mental functions within psychology proper. Not unexpectedly, this led to the idea that conscious intelligence worked according to computational procedures; and this, in turn, led to a full-blown cognitive science movement by the mid-1970s. As Howard Gardner (1985: 6) has aptly pointed out, from its very outset this new enterprise was shaped by the view that there exists a level of mind wholly separate from the biological or neurological, on the one hand, and the social or cultural, on the other, that works like an electronic computer. Even though not all cognitive scientists think in this way, this “AI bias” remains, as Gardner (1985: 6) phrases it, “symptomatic” of the cognitive science enterprise to this day. By modeling mental processes in the form of computer programs, cognitive scientists insist, everything from emotions to problem-solving can be understood better.
The basis for this view is the mathematical concept of a Turing machine developed by Alan Turing (1912–1954). Turing showed that four simple operations on a tape—move to the right, move to the left, erase the slash, print the slash—allowed a computer to execute any kind of program that could be expressed in a binary code (as for example a code of blanks and slashes). So long as one could specify the steps involved in carrying out a task and translate them into the binary code, the Turing machine—now called a computer program—would be able to scan the tape containing the code and carry out the instructions successfully. In 1950, shortly before his untimely death in his early forties, Turing went one step further by suggesting that one could program a computer in such a way that it would have to be declared “intelligent.” This notion has become immortalized in the Turing test, which goes somewhat as follows. Suppose an observer is in a room that hides behind one of its walls a programmed computer and, behind another one of its walls, a human being. The computer and the human being are allowed to respond to the observer’s questions only in writing—say, on pieces of paper which both pass on to the observer through slits in the wall—so that the observer cannot tell directly who is the computer and who the human being. Now, if the observer cannot identify, on the basis of the written responses, who is the computer and who the human being, then s/he must conclude that the machine is “intelligent.” It has therefore passed the Turing test.
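The tape-and-rules idea can be made concrete with a minimal simulator. The sketch below is our own illustration, not Turing's original notation: the rule table maps a (state, symbol) pair to the symbol to write, the direction to move the head, and the next state.

```python
# A minimal Turing machine simulator (illustrative sketch, not Turing's
# original formulation). Rules map (state, symbol) to
# (symbol to write, head move, next state).

def run_turing_machine(rules, tape, state="start", blank=" "):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head == len(tape):          # extend the tape on demand
            tape.append(blank)
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Example program: invert every bit, halting at the first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing_machine(invert, "1011"))  # prints 0100
```

However trivial the example, it exhibits the essential point of Turing's result: once a task is reduced to rule-governed symbol manipulation on a tape, a single general mechanism can carry it out.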
Although Turing himself was well aware of the shortcomings of his test for establishing truly “intelligent activities” in the human sense, openly admitting that it would be impossible to program a computer to understand the more spiritual aspects of human consciousness, to some cognitive scientists his clever test suggested not only that humans were, in effect, special kinds of protoplasmic Turing machines, whose cognitive states, emotions, and social behaviors were therefore representable in the form of computer-like programs, but also that mechanical machines themselves could eventually be built to think, feel, and socialize like human beings. As Minsky (1986), Konner (1991), and other radical cognitive scientists have insisted, even the concept of the soul is really no more than a fanciful notion produced by the intelligence of the most advanced Turing machine so far produced by evolutionary forces, and consciousness itself is really no more than an operation of this machine designed to allow individuals to express and modify their emotions and their impulses.
An ingenious rebuttal to the Turing test, and thus to the entire cognitive science paradigm, was put forward in the early 1980s by the American philosopher John Searle (1984). Known as the Chinese Room counter-argument, Searle’s rebuttal goes somewhat as follows. When it processes symbols during the Turing test, a computer does not know what it is doing. Just like an English-speaking person who translates Chinese symbols handed to h/er on little pieces of paper by using a set of rules, also provided for h/er, for matching them with other symbols, while knowing nothing about the story contained in the Chinese symbols, a computer has no sense whatsoever of the story contained in human symbols and communication. It is beyond the capacities of a Turing machine to understand human stories, because their meanings lie in psychic, historical, and cultural realities that lie beyond the computational functions of an electronic machine.
The cognitive science movement is really no more than a contemporary rendition of the “Cartesian project” that ushered in the modern era of rationalistic science. In their insightful book Descartes’ Dream, Davis and Hersh (1986: 7) describe this project as “the dream of a universal method whereby all human problems, whether of science, law, or politics, could be worked out rationally, systematically, by logical computation.” This project seemed realizable when the engineer Claude Shannon demonstrated that information of any kind, in both animal and mechanical systems of communication, could be described in terms of binary choices between equally probable alternatives (above, §2.3). By the 1950s, enthusiasm was growing over the possibility that computers could eventually carry out human thinking processes, since the brain was thought increasingly to be really no more than a special kind of Turing machine operating on the basis of a binary code as yet unknown. By the 1960s, phenomenal advances in computer technology seemed to make Descartes’ dream a highly realizable one.
In our view, the Cartesian project will never be realized because it is beyond the nature of a machine to seek meaning in its existence. This is a need that is peculiar to the human condition and is beyond reproduction in mechanical form. It is also the basis for representational activities such as art works, scientific theories, and the like. AI theories and models of consciousness can perhaps give us precise information about how some forms of thinking unfold, especially those that involve deduction; but they tell us nothing about why consciousness came about in the first place. Moreover, there is no such thing as a true “theory of consciousness,” because it is impossible for a human mind to come up with a set of objective axioms for capturing all the truths about itself. In 1931, when the logician Kurt Gödel (1906–1978) demonstrated rather matter-of-factly that there never can be a consistent system of axioms that will capture all the truths of arithmetic, he showed, in effect, that the makers of the axioms could never extricate themselves from the making of their own axioms. Gödel made it obvious that mathematics was made by people, and that the exploration of “mathematical truth” would thus go on forever so long as humans were around. Like other products of the human imagination, the world of numbers lies within the minds of humans. So too does the world of AI theories of human consciousness.
Like cognitive scientists, semioticians too are interested in how the mind works, and especially in how it produces and understands signs. The main difference between the two disciplines, as they are currently practiced, lies in the fact that the cognitive science agenda is shaped by a search for a pattern of similarity between natural and artificial intelligence systems, whereas the semiotic agenda is shaped, by and large, by a search for the biological, psychic, and social roots of the human need for meaning, or as Searle put it (above, §2.4), for the story behind human symbols and forms of expression.
As an applied interdisciplinary science, cultural semiotics enlists not only the notions of theoretical semiotics in its investigation of cultural forms of expression, but also the insights coming out of the cognate fields of psychoanalysis (as already discussed in the previous chapter, §1.2), psychology, anthropology, archeology, linguistics, and neuroscience. The interweaving and blending of ideas, findings, and scientific discourses from these disciplinary domains is the distinguishing feature of the semiotic approach to culture analysis.
Cultural semiotics is interested, for instance, in any finding or insight coming out of the field of psychology that is relevant to how signs are produced and understood. Particularly relevant to its objectives are the findings of the Gestalt school (German for “configuration”). Gestalt psychology traces its roots to the early work on the relationship between form and content in representational processes by Max Wertheimer (1880–1943), Wolfgang Köhler (1887–1967), and Kurt Koffka (1886–1941), as well as to the work on metaphor conducted by Karl Bühler (1934, 1951), the Würzburg psychologists (e.g. Staehlin 1914), and Ogden and Richards (1923). The two primary objectives of Gestalt psychology are (1) to unravel how the perception of forms is shaped by the specific contexts in which the forms occur; and (2) to investigate how forms interrelate with meanings. One of the more widely-used techniques in semiotics, known as the semantic differential, was actually developed by the Gestalt psychologists Osgood, Suci, and Tannenbaum (1957). This will be discussed in the next chapter (§3.5).
Since its emergence in the previous century, scientific psychology has been caught in a tug of war between two radically different views of human mental functioning, environmentalism and innatism. From the former point of view, humans are seen as being born with their minds a tabula rasa, assuming their nature in response to the stimuli they encounter in their social environments. From the latter perspective, humans are also seen as malleable organisms, but they are not viewed as being born with an empty slate. Rather, in the terminology of cognitive science, they are seen as being “hard-wired” from birth to learn and behave in certain biologically-programmed ways. The acquisition of language, for instance, is said to occur through the operation of an innate language acquisition device (LAD) which is governed by the rules of a universal grammar (UG). Humans have no more control over their LADs than they do over their breathing. Of course, they can set up obstacles to block the functioning of their LADs, just as they can prevent themselves from breathing: i.e. they can refuse to process input by shutting themselves off from what is being said around them.
Recently, a new school has emerged, known as evolutionary psychology, that has been attempting to reconcile these two opposing perspectives (e.g. Pinker 1997). Taking their impetus from sociobiology, evolutionary psychologists attempt to explain human behaviors in terms of evolutionary patterns and by comparison with primate behaviors. According to this perspective, widely popularized by the zoologists Robert Ardrey (1966) and Desmond Morris (1969), human rituals such as kissing and flirting, for instance, are explained as modern-day reflexes of primate and early hominid behaviors. Aggression in males is viewed as a residue of animal territoriality, one of several mechanisms by which animals control access to critical resources. Males are described as competing for territories, either fighting actual battles or performing ritual combats as tests of their strength. Weaker males are portrayed as incapable of holding a territory or as being forced to occupy less desirable locations. Accordingly, aggression in modern human males is seen as a reflex of this mechanism. This kind of reasoning is extended to explaining all our feelings, thoughts, urges, artistic creations, and the like. All these are construed to result from the evolutionary processes started by our hunter-gatherer ancestors. Using population statistics, and making correlations between selected sets of facts, evolutionary psychologists aim to show that human traits of all kinds are inherited through the genetic code, not formed by individual experiences in cultural contexts.
The claim that there is a biological basis to psychic and social behaviors is, of course, partially true; but it is not totally true. By associating cultural forms of expression with primate behaviors, evolutionary psychologists are in effect engaging in unfounded extrapolations about human nature on the basis of simple observations of animal activities. There is of course no counter-argument against this form of reasoning. On the other hand, there is no concrete evidence to support it a priori either. Ultimately, that is the most serious flaw of evolutionary psychology—it is only speculation based on Darwinian-type reasoning.
The data coming out of the field of cultural anthropology, too, are relevant to cultural semiotics, because they constitute a vast array of cross-cultural “facts-on-file.” Anthropologists obtain their information mainly by interviewing key informants, cross-checking their findings among several informants, and finally piecing the separate informant observations together with their own field notes. In describing a particular tribe, for example, they gather information about its location, passage and initiation rites, religious ideas, arts, myths, language, and then compare their findings to their own perceptions, so as to differentiate between responses peculiar to the society they are studying and those that can be surmised to be general to humankind. This method of investigation, known technically as ethnographic, is intended to clarify the roles of learned and innate behavior in the development of cultures.
Because they provide an important glimpse into a culture’s past, the findings and insights of archeologists are also useful to the goals of cultural semioticians. The artifacts recovered from the excavation of sites of past human habitations allow the semiotician to trace the origin and evolution of certain features of the culture’s signifying order. The archeological perspective, therefore, constitutes a diachronic dimension in semiotic analysis—one that is vital for understanding how and why certain signs and signifying orders might have originated.
Archeologists use various techniques to establish the time sequences of activities that have left physical remains. Of modern methods for dating such remains, the radio-carbon technique is perhaps still the most widely used. The basis of this method is that living plants and animals contain fixed ratios of a radioactive form of carbon, known as carbon-14. This deteriorates at a constant rate after death, leaving ordinary carbon. Measuring the traces of carbon-14 in pieces of charcoal, remains of plants, cotton fibers, wood, and so forth permits the objects to be dated as far back as 50,000 years, although the method is sometimes extended to 70,000 years. Uncertainty in measurement increases with the age of the sample. Archeologists establish chronology also through stratigraphic analysis—i.e. through an analysis of the time-ordered deposits of soil, organic materials, and remains of human activity. Deposits at human sites gradually build up and cover each preceding phase. The task of stratigraphic analysts lies in piecing together the remains of floors, storage pits, and other constructions in a way that is consistent logically with the deposit sequences or layers found at the site.
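The dating arithmetic rests on exponential decay: carbon-14 has a half-life of about 5,730 years, so the age of a sample follows directly from the fraction of carbon-14 it retains. The sketch below is our own illustration of that calculation (it assumes, as the simplest case, a constant original atmospheric ratio, which real laboratories calibrate for).

```python
import math

HALF_LIFE_C14 = 5730  # years, approximate half-life of carbon-14

def radiocarbon_age(remaining_fraction):
    # Exponential decay: N(t) = N0 * (1/2)**(t / half_life),
    # so t = half_life * log2(N0 / N)
    return HALF_LIFE_C14 * math.log2(1 / remaining_fraction)

# A sample retaining a quarter of its original carbon-14 has passed
# through two half-lives, i.e. roughly 11,460 years.
print(radiocarbon_age(0.25))  # → 11460.0
```

The rapidly shrinking fraction also shows why the method runs out past roughly 50,000 years: after nine or ten half-lives, too little carbon-14 survives to measure reliably.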
The modern science of linguistics is the twin sister of modern semiotics, since both trace their parentage to Saussure’s Cours de linguistique générale. Linguistics proper focuses on studying the forms and functions of sounds, words, and grammatical categories of specific languages, as well as the formal relationships that exist among different languages. Linguists divide language into various levels, of which the following three are the ones that have received the most scientific attention so far:
- Phonology: This level is composed of the meaningful sounds of a language, known as its phonemes. Linguists distinguish phonological analysis from phonetic analysis, or the cataloguing and description of the raw sounds that humans are capable of making.
- Morphology: This level is composed of the units, called morphemes, that carry meaning in a language. These may be word roots (as the blue- in blueberry), individual words (boy, play, need), word endings (as the -s in the plural form boys, -ed in the past tense form played), prefixes and suffixes (as the pre- in preview or the -ness in awareness), or internal alterations (sing-sang, man-men, etc.).
- Syntax: This is the level where words and phrases are organized into sentences. The word order of most declarative sentences in English, for instance, reveals the underlying form [S-V-O] (= Subject-Verb-Object): e.g. Alexander (S) loves (V) school (O). The sequence [V-O-S] (Loves school Alexander), on the other hand, is normally not an acceptable one in English.
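The morphological level lends itself to a simple computational illustration. The toy segmenter below is our own sketch, with a deliberately minimal, hypothetical three-suffix inventory; it strips a known English suffix from a word to expose the root morpheme.

```python
# Toy morpheme segmenter (hypothetical, deliberately minimal suffix
# inventory), ordered longest-first so that "ness" is tried before "s".
SUFFIXES = ["ness", "ed", "s"]

def segment(word):
    # Return [root, suffix] if a known suffix is found, else [word].
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix):
            return [word[: -len(suffix)], suffix]
    return [word]

print(segment("boys"))       # → ['boy', 's']
print(segment("played"))     # → ['play', 'ed']
print(segment("awareness"))  # → ['aware', 'ness']
```

A sketch this crude also makes plain why morphological analysis is nontrivial: internal alterations like sing-sang or man-men carry meaning without any detachable suffix at all, and a word like princess only looks as if it ends in a morpheme.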
Among the first to use linguistic method as an investigative tool for studying culture in the 1920s were Franz Boas and his student Edward Sapir (chapter 1, §1.1). Challenging conventional analyses of language based on written traditions, these two anthropologists devised practical field techniques for identifying the phonemes and morphemes of unwritten languages. These techniques were synthesized, systematized, and elaborated by the American linguist Leonard Bloomfield in his 1933 book titled Language, which became a point-of-reference for detailed investigations of specific languages throughout the 1930s, 1940s, and 1950s.
In his Cours de linguistique générale (1916), Saussure distinguished between langue (French for “language”), the knowledge that speakers of a language share about what is acceptable in that language, and parole (“word”), the actual use of a language in speech. Saussure made an analogy to the game of chess to clarify the crucial difference between the two. The ability to play chess, he claimed, is dependent upon knowledge of its langue, i.e. of the rules of movement of the pieces—no matter how brilliantly or poorly someone plays, what the chess board or pieces are made of, what the color and size of the pieces are. Langue is a mental code that is independent of such variables. Now, the actual ways in which a person plays a specific game—why s/he made the moves that s/he did, how s/he used h/er past knowledge of the game to plan h/er strategy, etc.—are dependent instead on the person’s particular execution abilities, i.e. on h/er control of parole. In an analogous fashion, Saussure suggested, the ability to speak and understand a language is dependent upon knowing the rules of the language game (langue), whereas the actual use of the rules in certain situations is dependent instead upon execution factors (parole), which may be psychological, social, and communicative.
In 1957 the American linguist Noam Chomsky (1928–) adopted Saussure’s basic distinction, referring to langue as competence and parole as performance. Chomsky also entrenched Saussure’s belief that the aim of linguistics proper was the study of langue. Chomsky defined his version of langue (competence) as the innate knowledge that people employ unconsciously to produce and understand grammatically well-formed sentences, most of which they have never heard before. He then proposed a system of analysis, which he called transformational-generative grammar, that would purportedly allow the linguist to identify and describe the general properties of this innate knowledge, sifting them out from those that apply only to particular languages. In acquiring a language, both general grammatical processes and language-specific rule-setting mechanisms are activated in the child; the former, called universal principles in recent versions of Chomskyan theory, are part of a species-specific language faculty that has genetic information built into it about what languages in general must be like; the latter, known as parameters, constrain the universal principles to produce the specific language grammar to which the child is exposed. Although Chomsky assigns some role to cultural and experiential factors, he has always maintained that the primary role of linguistics must be to understand the universal principles that make up the speech faculty. Chomsky’s intractability in maintaining this position, in spite of research that has cast serious doubts upon it, understandably made him a target of bitter criticism throughout the 1980s and 1990s.
While linguistics proper has largely focused on studying langue since Saussure’s time, two main branches of linguistics have sprung up since the late 1950s that now deal directly with parole—sociolinguistics and psycholinguistics. The field of sociolinguistics aims to describe the kinds of performance behaviors that correlate with the use of language in different situations. For example, sociolinguistic research has found that pronunciation is linked typically to social class in many cultures. People aspiring to move from a lower class to an upper one attach prestige to pronouncing certain sounds in specific ways, even overcorrecting their speech to pronounce words in ways that those they wish to emulate may not. Psycholinguistics is concerned with such issues as language acquisition by children, the nature of speech perception, the localization of language in the brain, and the relation between language and thought. The term psycholinguistics was coined in 1946 by the psychologist Pronko in an article he wrote for the Psychological Bulletin. In 1951, the psychologist George Miller provided the fledgling interdisciplinary field of inquiry with its first research agenda, which was expanded a few years later by those who participated in the Indiana University conference on psycholinguistics (Osgood and Sebeok 1954). Today it is one of the more productive and fascinating branches of the language sciences.
Of special relevance to the semiotic analysis of culture is the hypothesis put forward in the mid-1930s by Edward Sapir’s famous student Benjamin Lee Whorf (1897–1941) that language shapes the specific ways in which people think and act. The question of whether or not the Whorfian hypothesis is a tenable one continues to be debated to this day. If the categories of a particular language constitute a set of strategies for classifying, abstracting, and storing information in culture-specific ways, do these categories predispose its users to attend to certain specific perceptual events and ignore others? If so, do speakers of different languages perceive the world in different ways? These are the kinds of intriguing questions that the Whorfian hypothesis invites.
This hypothesis has many antecedents. The two most important ones are the views of the philosopher Johann von Herder (chapter 1, §1.5), who saw an intimate connection between language and ethnic character, and the philologist Wilhelm von Humboldt (1767–1835), who gave Herder’s hypothesis a more testable formulation by positing that the categories of a specific language were formative of the thought and behavior of the people using it for routine daily communication. A contemporary descendant of the Herder-Humboldt-Sapir-Whorf approach to language is the school of linguistics championed by Ronald Langacker (1987, 1990) and George Lakoff (1987), known as cognitive linguistics. The main claim made by cognitive linguists is that language categories reflect cultural models of the world which, in turn, influence how the speakers of a language come to think, act, and behave. This claim will be examined more closely in chapters 5 and 6.
One other disciplinary domain from which cultural semiotics gleans many insights is neuroscience. As discussed in the opening chapter (§1.3), shortly after the advent of bipedalism, the brain of Homo culturalis started to expand rapidly, developing three major structural components—the large dome-shaped cerebrum, the smaller somewhat spherical cerebellum, and the brainstem. Information and theories about how the brain processes input and transforms it into representational structures are of obvious relevance to semiotics.
In humans, the brain is composed of about 10 billion nerve cells (neurons), which are together responsible for the control of all mental functions. In addition to neurons, the brain contains glial cells (supporting cells), blood vessels, and secretory organs. The cerebrum is the largest part of the human brain, making up approximately 85 percent of the brain’s weight; its large surface area and intricate development account for the superior intelligence of humans, compared with other animals. It is divided by a longitudinal fissure (indentation) into right and left, mirror-image hemispheres. The left hemisphere controls most of the body’s right side, whereas the right hemisphere controls most of the left side. The corpus callosum is the cable of white nerve fibers that connects these two cerebral hemispheres and transfers information from one to the other. Each cerebral hemisphere has an outer layer of gray matter called the cerebral cortex, about 3 to 4 mm. thick, and each is divided by fissures into five lobes. The two hemispheres are normally integrated in function, but each hemisphere is highly specialized.
The study of the brain goes back to ancient times, but the rise of an autonomous neuroscience traces its roots to the discovery in the previous century that the left hemisphere (LH) was the primary biological locus for language. It was the French anthropologist and surgeon Pierre Paul Broca (1824–1880) who made this discovery in 1861, when he noticed a destructive lesion in the left frontal lobe of the LH at the autopsy of a patient who had lost the ability to articulate words during his lifetime, even though he had not suffered any paralysis of his speech organs. Broca concluded that the capacity to articulate speech was traceable to that specific cerebral site—which shortly thereafter came to bear his name (Broca’s area). This discovery established a direct connection between a semiosic capacity and a specific area of the brain. Broca was also responsible for suggesting that there existed an asymmetry between the brain and the body by showing that right-handed persons were more likely to have language located in the LH. Today, neuroscience has confirmed both that mental functions originate in one or the other of the two hemispheres and that the motor control system and sensory pathways between the brain and the body are crossed—i.e. that they are controlled by the contralateral (opposite-side) hemisphere.
In 1874 the work of the German neurologist Carl Wernicke brought to the attention of the medical community further evidence linking the LH with language. Wernicke documented cases in which damage to another area of the LH—which came to bear his name (Wernicke’s area)—consistently produced a recognizable pattern of impairment to the faculty of speech comprehension. Then, in 1892 Jules Déjerine showed that problems in reading and writing resulted primarily from damage to the LH alone. So, by the end of the nineteenth century the research evidence that was accumulating provided an empirical base to the emerging consensus in neuroscience that the LH was the cerebral locus of language. Unfortunately, it also contributed to the unfounded idea that the RH (right hemisphere) was without special functions and subject to the control of the so-called “dominant” LH.
Following Wernicke’s observations, the notion of cerebral dominance, or the idea that the LH is the dominant one in the higher forms of cognition, came to be widely held in neuroscience. Although the origin of this term is obscure, it grew no doubt out of the research connecting language to the LH and out of the cultural link in Western society between language and the higher mental functions. It took the research in neuroscience most of the first half of the twentieth century to dispel the notion that only the verbal part of the brain was crucial for the higher forms of cognition, and to establish the fact that the brain is structured anatomically and physiologically in such a way as to provide for two modes of thinking, the verbal and the visual.
It was during the 1950s and 1960s that the widely-publicized studies conducted by the American psychologist Roger Sperry (1913–1994) and his associates on epilepsy patients who had had their two hemispheres separated by surgical section showed that both hemispheres, not just a dominant one, were needed in a neurologically-cooperative way to produce complex thinking. Then, in 1967, a century after Broca’s ground-breaking discovery, the linguist Eric Lenneberg established that the LH was indeed the seat of language, adding that the “critical period” for language to “settle into” that hemisphere was from birth to about puberty (chapter 1, §1.3).
In the 1970s research in neuroscience brought seriously into question the idea that the LH alone was responsible for language. The research suggested, in fact, that for any new verbal input to be comprehensible, it must occur in real-world contexts that allow the synthetic functions of the RH to do their interpretive work. In effect, it showed that the brain is prepared to interpret new information primarily in terms of its contextual characteristics. Today, neuroscientists have at their disposal a host of truly remarkable technologies for mapping and collecting data on brain functioning. Positron emission tomography (PET brain scanning), for instance, has become a particularly powerful investigative tool for neuroscientists, since it provides images of mental activities such as language.
It should be mentioned, for the sake of completeness, that the new technology has given us a glimpse into how the cortex is involved in producing various psychological functions, psychomotor movements, etc. However, there are other areas of the brain of which very little is known—such as the areas below the cortex, which are involved in the emotions. In evolutionary terms, these areas are older, tying us to our primate heritage. So, although much has been learned about the cortex since 1861, the brain in its totality still remains a largely mysterious organ.
To define semiotics as a science requires some justification. The question of whether or not the human mind and human cultures can be studied with the same objectivity as physical matter has always been a problematic one. Indeed, many semioticians refuse to call their field a science, since they believe that the study of signifying orders can never be totally objective. This is why many prefer to define it with terms like “activity,” “tool,” “doctrine,” “theory,” “movement,” “approach” (Nöth 1990: 4, Sebeok 1990). However, we are in agreement with Umberto Eco (1978), who sees semiotics as a science in the traditional sense of the word for five fundamental reasons (Figure 2.1).
We are, of course, aware that any claim to “scientific objectivity” must be tempered with caution and wariness. This is not unique to semiotics, however. It has, in fact, been characteristic of all the physical sciences since Werner Heisenberg (1901–1976), the German physicist and Nobel laureate, put forward his now-famous indeterminacy principle in the first part of the twentieth century, debunking once and for all the notion of an objective reality independent of culture and of the scientist’s personal perspective. Heisenberg claimed that reality was indeterminable outside of the individual observer’s participation in it.
- it is an autonomous discipline;
- it has a set of standardized methodological tools;
- it has the capability of producing hypotheses;
- it affords the possibility of making predictions;
- its findings may lead to a modification of the actual state of the objective world.
To understand Heisenberg’s principle, consider a practical example. Let’s suppose that a scientist reared and trained in North America sees a physical event that she has never seen before. Curious about what it is, she takes out a notebook and writes down her observations in English. At the instant that our North American scientist observes the event, another scientist, reared and trained in the Philippines and speaking only the indigenous Tagalog language, also sees the same event. He similarly takes out a notebook and writes down his observations in Tagalog. Now, to what extent will the contents of the observations, as written in the two notebooks, coincide? The answer of course is that the two will not be identical. The reason for this discrepancy is not, clearly, due to the nature of the event, but rather to the fact that the observers were different, psychologically and culturally. So, as Heisenberg would have suggested, the true nature of the event is indeterminable, although it can be investigated further, paradoxically, on the basis of the notes taken by the two scientists. The semiotic analysis of culture, too, implies the “Heisenbergian” participation of the analyst in the act of analysis.
Every scientific enterprise is constructed on the basis of axioms, the primary criteria, established by the ancient Greeks most probably during the fifth century BC, for distinguishing a scientific enterprise from a nonscientific one. The axioms of any science must be consistent with one another and few in number. The axioms that, in our view, have guided the semiotician’s exploration of culture over the last decade can be summarized as follows:
- Signifying orders the world over are constructed with the same core of signifying properties.
- This implies that there are universal structures of sense-making in the human species.
- Signifying orders are specific instantiations of these structures.
- Differences in signifying orders result from differences in such instantiations, as caused by human variability and fluctuating contextual-historical factors.
- Signifying orders entail culture-specific classifications of the world.
- These classifications influence the way people think, behave, and act.
- Perceptions of “naturalness” are tied to cultural classifications.
The first six axioms will constitute the subject matter of the remainder of this manual. The last axiom, however, requires some commentary here. Consider, as an example of what it implies, the perception shared by people living in Western society today that the wearing of high-heeled shoes is a “natural” thing for women but an “unnatural” one for men. In reality, the classification of a clothing item in terms of gender is a matter of historically based convention, not of naturalness or lack thereof. As a matter of fact, in the Baroque seventeenth century, high heels were the fashion craze among noblemen and male aristocrats, who obviously considered it quite “natural” to wear them.
Cultural classifications are so deeply rooted in human beings that they can even subtly mediate how we experience the world. A sign selects what is to be known and memorized from the infinite variety of things that are in the world. Although we create new signs to help us gain new knowledge and modify previous knowledge—this is what artists, scientists, writers, for instance, are always doing—by and large, we literally let our culture “do the understanding” for us. We are born into an already-fixed signifying order that will largely determine how we view the world around us. Only if, hypothetically, all our knowledge (which is maintained in the form of codes) were somehow erased from the face of the earth would we need to rely once again on our instinctive meaning-making tendencies to represent the world all over again.
As another example of the seventh axiom, consider the concept of health. Although this might at first appear to capture a universally shared meaning, in actual fact what is considered “naturally healthy” in one culture may not coincide with views of health in another. Health cannot be defined ahistorically, aculturally, or in purely absolute terms. This does not deny the existence of events and states in the body that will lead to disease or illness. All organisms have a species-specific bodily warning system that alerts them to dangerous changes in bodily states. But in the human species bodily states are also representable, and thus interpretable, in culture-specific ways. This is why in American culture today a “healthy body” is considered to be one that is lean and muscular, whereas in other cultures it may be one that Americans would consider too plump and rotund. Similarly, a “healthy lifestyle” might be seen by some cultures to inhere in rigorous physical activity, while others might envisage it as inhering in a more leisurely and sedentary way of life.
Moreover, as the writer Susan Sontag cogently argued in her compelling 1978 book Illness as Metaphor, the signifying order predisposes people to think of specific illnesses in certain ways. Using the example of cancer, Sontag pointed out that in the not-too-distant past the very word cancer was said to have killed some patients who would not have necessarily succumbed to the malignancy from which they suffered: “As long as a particular disease is treated as an evil, invincible predator, not just a disease, most people with cancer will indeed be demoralized by learning what disease they have” (Sontag 1978: 7). Sontag’s point that people suffer more from interpreting their disease in cultural terms than from the disease itself is, indeed, a well-taken and instructive one.
Medical practitioners, too, are not immune to the influence of cultural symbolism. The body, as we shall see in chapter 4, is as much a source of symbolism as it is organic substance. Several decades ago, Hudson (1972) showed how this affects medical practices. He found that medical specialists trained in private British schools were more likely to achieve distinction and prominence by working on the head as opposed to the lower part of the body, on the surface as opposed to the inside of the body, and on the male as opposed to the female body. Hudson suggested that the only way to interpret such behaviors was in cultural terms: i.e. parts of the body evidently possessed a symbolic significance which influenced the decisions taken by medical students: “students from an upper-middle-class background are more likely than those from a lower-middle-class background to find their way into specialties that are seen for symbolic reasons as desirable” (Hudson 1972: 25).