Are we machines of the kind that researchers are building as “thinking machines”? In asking this kind of question, we engage in a kind of projection—understanding humanity by projecting an image of ourselves onto the machine and the image of the machine back onto ourselves. In the tradition of artificial intelligence, we project an image of our language activity onto the symbolic manipulations of the machine, then project that back onto the full human mind.
Winograd (1991: 220)
In human life, there is virtually no object or artifact that is not imbued with meaning. Indeed, objects constitute particular kinds of signifiers with a broad range of connotative and annotative signifieds across the world’s cultures. In this chapter, our trek through the landscape of culture takes us through the domain of objects, yet another site inhabited by Homo faber (chapter 7), the ingenious maker of things. Moreover, we will visit the neighboring abode of a recent descendant of Homo culturalis—Homo technologicus, the maker of machines.
Like all the other dimensions and components of culture, the meanings of objects and machines are coded in terms of the signifying order and, therefore, reveal the same kinds of signifying properties that characterize, say, clothing, bodily presentation, language, buildings, etc. Studying why people make things, how they design their objects, what role these play in the evolution of a culture, is another important part of cultural semiotics. Although the terms object and artifact are often used interchangeably, they are distinguished in both semiotics and archeology as follows: objects are things found in the environment, artifacts are things made by humans. This distinction, however, is not necessary here, since our purpose is to focus on the meanings that things in general embody.
The objects that are made and used in a culture are hardly randomly produced “things.” They cohere into a system of signification that mirrors, in microcosm, the meaning dimensionalities of the entire signifying order. This is why archeologists reconstruct ancient societies on the basis of the artifacts they discover at a site. The jewelry, clothes, furniture, ornaments, tools, toys, etc. that they find there are the bits and pieces that allow them to reconstruct the ancient society’s system of objects that, in turn, allows them to reconstruct the society’s signifying order to various degrees of completeness. Artifacts provide truly valuable clues as to what the signifying order of an extinct culture was probably like. Especially significant in the study of ancient signifying orders is the analysis of objects that were thought to possess mysterious powers. Although all objects are thought to have intrinsic value across the world’s cultures, there are some that are thought to possess magical qualities.
An extreme manifestation of this belief is referred to as fetishism—the conviction that some inanimate objects, known as fetishes (from Portuguese feitiço “artificial, charm,” from Latin facticius “artificial”), are imbued with supernatural attributes. The fetish is typically a figure modeled or carved from clay, stone, wood, or some other material, resembling a deified animal or some sacred thing. Sometimes it is the animal itself, or a tree, river, rock, or place associated with it. In some societies belief in the powers of the fetish is so strong that fetishism develops into idolatry. In such cases, the fetishistic belief is actually an extreme form of animism—the view that spirits either inhabit or communicate with humans through material objects.
Animism is not limited to tribal or pre-modern cultures. On the contrary, it is alive and well even in modern Western cultures, whether or not people realize it. In addition to the fetishes that incite sexual urges or fantasies in some people—feet, shoes, intimate female apparel—there are many behaviors in our culture that can only be explained as the manifest effects of a latent form of animism. In the 1970s, for example, American society went mad for “pet rocks.” Many considered this fad simply a quick way to make money, foisted upon a gullible public spoiled by consumerism. But to some semioticians, that craze was, in effect, clear evidence of a latent form of animism. The same animistic tendencies can be seen in the common view held by even modern-day people that some objects are inexplicably magical. This is why, if such objects are lost, impending danger is feared. If, however, they are found serendipitously—as for instance when one finds a “lucky penny”—then it is believed that the gods or Fortune will look auspiciously upon the finder.
Animism is, actually, a manifestation of an unconscious psychosemiosic process that can be called objectification. This refers to the fact that people perceive objects as having a necessary logic and raison d’être all their own, of which their makers may not be aware: i.e. their existence is believed to be already implicit in the formless matter of the universe, assuming actual material shape through human agents. This ingrained belief system would explain why objects are perceived to be related “genealogically” to each other—the making of one leading to the making of another and then to the making of yet another, and so on. Like works of art, objects are felt to be reifications (reflections) of innate forms of thought that seek expression in real-world physical forms. In dimensionality terms, objectification can thus be explained as the perception of an object as (1) something material, (2) whose particular (paradigmatically differentiable) shape is but one manifestation of the forms inherent in the human mind, that (3) generates a meaning in relation to the other objects and codes in a culture.
Objectification manifests itself in many behaviors and beliefs. One such belief is the perception that some objects are extensions of the physical Self. The jewelry people wear, the personal objects that individuals possess to adorn their abodes, and the like are all signifiers of physical persona. In Western culture this applies as well to the automobile, which is experienced by many as an extension of the body and thus as a protective shell of the Self (Richards 1994). In the public world of traffic, it creates a space around the physical body which is as inviolable as the body itself. Interestingly, but not unexpectedly, this manifestation of objectification is not confined to Western culture. The anthropologist Basso (1990: 15-24) found that the Western Apache of east-central Arizona, for instance, also perceive the car as a body. The Apache even use the names of body parts to refer to analogous automobile parts: e.g. the hood is called a “nose,” the headlights “eyes,” the windshield “forehead,” the area from the top of the windshield to the front bumper “face,” the front wheels “hands and arms,” the rear wheels “feet,” the items under the hood “innards,” the battery “liver,” the electrical wiring “veins,” the gas tank “stomach,” the distributor “heart,” the radiator “lung,” and the radiator hoses “intestines.”
A truly interesting manifestation of objectification that merits separate treatment here, because of the interest it has generated among cultural semioticians, is discernible in the kinds of meanings that toys have. This form of objectification is at times as extreme as any form of religious fetishism. Consider, for instance, what happened during the 1983 Christmas shopping season in the United States and Canada. That period is now often described by cultural historians as the Christmas of the “Cabbage Patch” doll craze. Hordes of parents were prepared to pay almost anything to get one of those dolls for their daughters. Scalpers offered the suddenly and inexplicably out-of-stock dolls for hundreds of dollars through classified ads. Adults fought each other in lines to get one of the few remaining dolls left in stock at some toy stores.
How could a simple doll have caused such mass hysteria? In our view, only an extreme form of objectification, bordering on fetishism, could have possibly triggered it. To see why this is so, consider more closely what toys mean. Children have always played with objects. In the child’s mind broom handles can be imagined to be swords, rocks to be balls, and so on. A toy, on the other hand, is an adult-made object imbued with the connotations that childhood has in a culture. Toys, as the logo for a major toy chain states, are indeed us (Toys “R” Us). Dolls are particularly meaningful because they are icons of the human figure to greater or lesser degrees of fidelity. As early as 600 BC dolls were made with movable limbs and removable garments, so as to reinforce their representation of human anatomy. When parents buy or make a doll, they are, in effect, giving their child an ersatz sibling or playmate. Dolls have been found in the tombs of ancient Egyptian, Greek, and Roman children. Evidently the objective was to provide the children with a lifelike human form, so that they could, in effect, play with someone else in the afterlife.
Interestingly, in many societies dolls also have religious and ritualistic functions. In the Hopi culture, for instance, kachina dolls are given as sacred objects to children as part of fertility rites. Even in Christian practices, dolls have been used since the Middle Ages to represent the Holy Family in the Nativity scene, as part of Christmas observances. In Mexico, dolls representing Our Lady of Guadalupe are ceremonially paraded every year. And in some cultures of the Caribbean, it is believed that one can cause physical or psychological damage to another person by doing something injurious to a doll constructed to resemble that person.
The commercialization of dolls as both fashion icons and playthings for children can be traced to Germany in the early fifteenth century. The fashion dolls were made to depict the clothing of German women. Shortly thereafter, manufacturers in England, France, Holland, and Italy also began to produce dolls dressed in fashions typical of their respective locales. The more ornate ones were often used by rulers and courtiers as gifts. By the seventeenth century, however, simpler dolls, made of cloth or leather, were being used primarily as playthings by children.
During the eighteenth century, the human iconicity of dolls was improved considerably as manufacturing systems became more technologized. The fashion dolls looked so lifelike that they were often used to illustrate clothing style trends and were sent from one country to another to display the latest fashions in miniature form. After the Industrial Revolution, dolls became commonplace toys. Before then, most people lived in agricultural communities or settings. Children barely out of infancy were expected to share the workload associated with tending to the farm. There was, consequently, little distinction between childhood and adult roles—children were considered to be adults with smaller and weaker bodies. During the Industrial Revolution the center of economic activity shifted from the farm to the city. This led to the emergence of a new social order with different role categories and assignments. The result was that children were left with few of their previous responsibilities, and a new view of them surfaced. Children were proclaimed to be vastly different from adults, needing time to learn at school and to play. Child labor laws were passed and public education became compulsory. Protected from the harsh reality of industrial work, children came to assume a new identity in society at large as innocent, faultless, impressionable, malleable beings. The toys that were manufactured for the use of children soon became part of this new mythology (cultural perception) of childhood (chapter 10, §10.3).
Dolls came to be viewed as the playmates of little girls. By the early part of the twentieth century, it was assumed that all female children would want to play with dolls, and toys came to connote gender identity (chapter 4, §4.2). Noteworthy design innovations in dolls manufactured between 1925 and World War II included sleeping eyes with lashes, dimples, open mouths with tiny teeth, fingers with nails, and latex rubber dolls that could drink water and wet themselves. Since the 1950s, the association of lifelike dolls with female gender identity has been entrenched further by both the quantity of doll types being produced and their promotion by media advertising techniques (chapter 11, §11.4). Since their launch in 1959, the “Barbie” dolls, for instance, have become a part of the system of objects that are associated with little girls growing up in North America. Incidentally, the Barbie dolls also started the trend of buying clothing and accessories for dolls, thus enhancing their human iconicity even more.
The Cabbage Patch dolls were, in fact, intended to be virtually indistinguishable from the real thing. They even came with “adoption papers,” and each doll was given a name, taken at random from 1938 Georgia birth records. Like any act of naming, this conferred upon each doll a human personality. And, thanks to computerized factories, no two dolls were manufactured alike. No wonder, then, that the Cabbage Patch doll shortage created such frenzy—a pattern that has been repeated regularly, to varying degrees, at Christmas time ever since. Having toys is perceived as an intrinsic feature of the social and emotional life of children. So, in the same way that a parent would panic if the child’s physical life were threatened because of the lack of, say, food, the “Cabbage Patch parent” found h/erself panicking over the possibility of h/er child’s social and emotional life being threatened because of the lack of what was, at a denotative level, just a toy.
Some semioticians distinguish between an object proper, or inanimate material, and material of plant or animal origin, such as food. Although the topic of food is often given a separate treatment in semiotic manuals, we will deal with it under the rubric of objectification because our interest here is in the meanings associated with all types of things, including food.
At a biological level, survival without food is impossible. So, at a denotative level food is perceived to be a survival substance. But, once again, given the semiosic and representational nature of the human species, food and eating invariably take on a whole range of connotations and annotations in social settings. The term that is often used to designate the meanings that food entails is cuisine. This refers to what we eat, how we make it, and what it tells us about the makers and eaters. At the level of culture, cuisine is perhaps more precisely definable as the system of food codes that are found alongside, and interconnected with, the other codes in the signifying order. In terms of the dimensionality principle, food denotes a survival substance at a firstness level; it takes on specific annotative meanings for the individual at a secondness level; and it is imbued with social meanings derived from a culture’s food codes at a thirdness level.
The “Raw” vs. the “Cooked”
The anthropologist Claude Lévi-Strauss (1964) made an important distinction between raw and cooked food in the evolution of Homo culturalis. He saw the advent of cooking as the event that transformed human group life into cultural life. All animals eat food in its raw form, including the human animal; but only Homo culturalis cooks h/er food. According to Lévi-Strauss this transformation was accomplished by two processes—roasting and boiling—both of which were among the first significant technological accomplishments of early human cultures. Roasting is more primitive than boiling because it implies direct contact between the food and a fire. So, it is slightly above “the raw” in evolutionary terms. But boiling reveals an advanced form of technological thinking, since the cooking process in this case is mediated by a pot and a cooking liquid. Boiling was the event that led to the “cooked” form of eating. Interestingly, in some parts of the world the distinction between “the raw” and “the cooked” has been enshrined into the signifying order to connote social relations. In Hindu society, for instance, the higher castes may receive only raw food from the lower castes, whereas the lower castes are allowed to accept any kind of cooked food from any caste.
To get a sense of the intrinsic relation between cooking and culture, it might be instructive to imagine a “Robinson Crusoe” situation, i.e. an imaginary scenario drafted in imitation of Daniel Defoe’s (1660?–1731) famous novel, The Life and Adventures of Robinson Crusoe, which appeared in 1719. This is a fictional tale of a shipwrecked sailor, based on the adventures of a seaman, Alexander Selkirk, who had been marooned on an island off the coast of Chile. The novel chronicles Crusoe’s ingenious attempts to overcome the island’s hardships. It has become one of the classics of children’s literature.
We can suppose that, like Robinson Crusoe, a person has somehow been abandoned alone on an isolated island in the middle of nowhere to fend for h/erself. Without the support and security of a social ambiance, h/er first instincts will, of course, lead h/er to survive in any way that s/he can. In this situation, h/er need for food and water takes precedence over all else. When h/er need becomes desperate, s/he will hardly be fussy about how the “raw food” s/he finds on the island will taste. In effect, s/he will eat anything that will not kill h/er. The eating of raw food in such a drastic situation has only one function—to secure survival.
Now, suppose that after living alone on the island for a few days the person discovers other similarly-abandoned individuals, each one on some remote part of the island, all of whom speak the same language as s/he does. Since there is strength in numbers, the decision is made to live as a group. To reduce the risk of not finding enough food for everyone to eat, the group decides that it is wise to assign responsibility for the hunting and gathering of food to specific persons. Others are then assigned the task of developing the basic technology for cooking the food. Others still are given the task of actually cooking the food. These role assignments are determined by mutual consent, say, according to the demonstrated abilities of each individual when s/he was living alone. After a period of time, what will emerge from these role agreements is a proto-culture, based on a division of labor. As time passes, other social agreements and arrangements are established. At that point, it is quite likely that the cooking of food will become more and more routinized and even adapted to meet changing taste preferences among the individuals.
The purpose of this vignette has been to exemplify how raw food is tied to survival and cooked food to culture. Indeed, it might even be claimed, as it is by some anthropologists, that the cooking of food was the event that led to the invention of culture. When especially favorable food sources became available, early humans settled in permanent, year-round communities, learning to domesticate plants and animals for food, transportation, clothing, and other uses. With greater population concentrations and permanent living sites, cultural institutions developed, united by religious ceremonies and food exchanges. These early cultures soon developed complex belief systems with regard to the supernatural world, i.e. with regard to the forces of Nature and of the gods. Food thus became a part of ritual and a staple of symbolic life.
The above evolutionary scenario would explain why food reverberates with symbolism and why the world’s religious ceremonies revolve around it. The raison d’être of the Catholic Mass, for instance, is to partake of bread that has become the consecrated body of Christ. But even in the secular domain, we schedule breakfast, lunch, and dinner events ritualistically on a daily basis. In a phrase, the symbolic meanings of food are interconnected with the other meaning pathways charted by the signifying order of a culture. This is why we talk of the bread of life, of earning your bread, and so on. In many European languages words for bread are often synonymous with life. The word companion, incidentally, comes from Latin and means literally the person “with whom we share bread.” Bread is, evidently, as much symbol as it is food.
Many of the symbolic meanings derive from mythic and religious accounts of human origins. The Christian story of Adam and Eve, for instance, revolves around the eating of an apple. In actual fact, the Hebrew account of the Genesis story tells of a forbidden fruit of knowledge, not an apple. The representation of this fruit as an apple can be traced to medieval Christian visual depictions of the Eden scene, when painters and sculptors became interested in the Genesis story. In the Koran, on the other hand, the forbidden fruit is a banana. Now, the Biblical symbolism of the apple as forbidden knowledge continues to resonate in our culture. This is why the apple tree symbolizes the tree of knowledge; why Apple Computer has chosen this fruit to symbolize its quest for knowledge; and why we have expressions such as the apple of one’s eye.
Ramses II of Egypt cultivated apples in orchards along the Nile in the thirteenth century BC. The ancient Greeks also cultivated apple trees from the seventh century BC onwards, designating the apple “the golden fruit,” since Greek mythology, like Christian, assigned it a primordial significance. The apple was given to Hera from the Garden of the Hesperides as a wedding present when she married Zeus.
Predictably, the meanings that foods entail produce structural effects (chapter 3, §3.10). The fact that in our culture rabbits, cats, and dogs, for instance, are felt to be household pets, forces us to perceive cooked rabbit, cat, and dog meat as inedible. On the other hand, we eat bovine (beef steaks, hamburgers, etc.), lamb, and poultry meat routinely, with no discomfort. Predictably, such cultural perceptions are not universal. In India, a cow is classified as sacred and, therefore, as inedible. Incidentally, this is the basis of our expression sacred cow to refer to something unassailable and revered. Anglo-American culture does not classify foxes or dogs as edible food items, but the former is reckoned a delicacy in Russia, and the latter in China. Need it be mentioned that some people even eat human flesh (a practice known more precisely as anthropophagy, or cannibalism)?
Edibility is more a product of culture than it is of Nature. Outside of those which have a demonstrably harmful effect on the human organism, the types of flora and fauna that are considered to be edible or inedible vary greatly among different cultures. Perceptions of edibility have a basis in history, not digestive processes. We cannot get nourishment from eating tree bark, grass, or straw. But we certainly could get it from eating frogs, ants, earthworms, silkworms, lizards, and snails. Most people in our society would, however, respond with disgust and revulsion at the thought of eating such potential food items. This notwithstanding, there are societies where they are not only eaten for nourishment, but also considered to be delicacies. Our expression to develop a taste for some food reveals how closely tied edibility is to cultural perception. If we were left alone on that hypothetical Robinson Crusoe island described above (§9.1), the question would certainly be one not of taste, but of survival at any taste.
The specific kinds of tastes that one finds meaningful in social settings can be called gustemes (in analogy with phoneme, narreme, etc.). We perceive gustemic differences in cuisine as fundamental differences in worldview and lifestyle—as differences between “us” and “them.” In our society we eat fish with great enjoyment, but we do not eat the eyes of fish, which we find distasteful, by and large. But those living in many other societies do. To see others eat the eyes tends to cause discomfort or queasiness within many of us. It is a small step from this unpleasant sensation to a perception of the eaters as barbaric. It is interesting to note that when we do come to accept the gustemes of others, we then reclassify their cuisine as an exotic delicacy.
Food codes are interconnected with the other codes of the signifying order. The complex rules of how to prepare food and when to eat it, the meanings that specific dishes have vis-à-vis social class, the subtle distinctions that are constantly made in the ways food items are cooked, etc. are all coded in terms of the signifying order. For this reason, food codes also regulate how eating events are expected to unfold: e.g. they regulate the order in which dishes are presented, what combinations can be served in tandem, how the foods are to be placed on the table, who has preference in being served, who must show deference, who does the speaking and who the listening, who sits where, what topics of conversation are appropriate, etc. In effect, as Visser (1991: 107) remarks, “dinner invitations can be fraught with hope and danger, and dinner parties are dramatic events at which decisions can be made and important relationships initiated, tested, or broken.”
Eating events are crucial to the establishment and maintenance of social relations and harmony. There exists virtually no society that does not assign an area of the domestic abode to their occurrence. All societies, moreover, impose a discrete set of table manners that relate to differing types of eating events. For example, if someone has never eaten spaghetti before, then s/he will have to learn how. Incidentally, in nineteenth-century Naples, from where the modern-day version of this dish comes (Visser 1991: 17-18), people ate spaghetti by raising each strand of pasta in their fingers, throwing back their heads, and lowering the strands into their mouths without slurping. Today, the correct manner of eating spaghetti is to twirl it around the fork, in small amounts, and then to insert the fork into the mouth as one does with any other food eaten with a fork.
Codes of table manners generally also involve the correct deployment of flatware. Specialized knives, spoons, forks, and other implements for eating and serving food have until recent times been the privilege of the aristocracy. In ancient Egypt, Greece, and Rome knives and spoons were made of precious materials and often decorated. The Romans also possessed skewers that were forerunners of the fork. From the Middle Ages until the Renaissance, the knife remained the principal table utensil. Forks came into common table use in Italy in the 1500s. At the same time spoons made the transition from kitchen utensils to table items, and flatware came to be used by people of all classes. During the nineteenth century numerous other items of flatware were created, along with variations on the three basic implements, such as teaspoons, butter knives, and salad forks.
Cultures vary widely in the degree of sociability they associate with the eating event. At one end of the sociability continuum some cultures see eating as a private act; at the other end, other cultures see it as necessarily a social event, never, outside of special circumstances, to be performed in private. Many cultures, as well, have a kind of “pecking order” designed to indicate the social class or position of the eaters. In our culture, eating in a high-class restaurant entails the activation and deployment of a whole set of complementary codes, from dress to language, that are meant to create a whole range of subtle and not-so-subtle messages about oneself.
Technology is the general term used for describing the systematic processes by which human beings fashion objects and machines to increase their understanding of, and control over, the material environment. The term is derived from the Greek words tekhne, which refers to an “art” or “craft,” and logia, meaning an “area of study,” hence the meaning of technology as the “craft of object-making.” Many historians of culture argue that technology has not only become an essential condition of advanced civilizations, but also refashioned the signifying orders of such civilizations into a global culture, which has now developed its own dynamism and does not respect geographical limits or social systems. Technology, in a phrase, has transformed cultural systems permanently, frequently with unexpected social consequences.
The growth of technology since the Renaissance has indeed had profound consequences on the evolution of the signifying orders of humankind, creating the conditions for their coalescence into a worldwide signifying order or metaculture, whose defining characteristic is the internationalizing of codes—a leveling process that has led since the middle part of the twentieth century to a widening adoption among the peoples of the earth of standard languages, symbol systems, ways of doing things, etc. The term modern culture is in fact indistinguishable from both technological culture and global culture. From a semiotic standpoint, the salient feature of today’s global culture is the high degree of objectification that it engenders in people and thus the perception that knowledge is objectifiable.
The event that started the globalization of culture was, no doubt, the invention of print in the fifteenth century and the subsequent widespread use of the book to codify knowledge. The forerunners of books were the clay tablets, impressed with a stylus, used by the Sumerians, Babylonians, and other peoples of ancient Mesopotamia. These were followed by the scrolls of the ancient Egyptians, Greeks, and Romans, which consisted of sheets of papyrus, a paper-like material made from the pith of reeds, formed into a continuous strip and rolled around a stick. The strip, with the text written with a reed pen in narrow, closely spaced columns on one side, was unrolled as it was read. Later, during the fourth to first centuries BC, a long roll was subdivided into a number of shorter rolls, stored together in one container. In the first century AD, this was replaced by the rectangular codex, the direct ancestor of the modern book. The codex, used at first by the Greeks and Romans for business accounts and schooling, was a small, ringed notebook consisting of two or more wooden tablets covered with wax, which could be marked with a stylus, smoothed over, and reused many times. It was easier for readers to find their place in a codex, or to refer ahead or back. In the Middle Ages, codices were used primarily in the observance of the Christian liturgy. Indeed, the word codex is part of the title of many ancient handwritten books on topics related to the Bible.
Literacy introduces a level of abstraction in human interaction that forces people to separate the maker of knowledge from the knowledge made. And this in turn leads to the perception that knowledge can exist on its own, spanning time and distance. This is precisely what is meant by the term objectivity: knowledge unconnected to a knower. Before literacy became widespread, humans lived primarily in oral-auditory cultures, based on the spoken word. The human voice cannot help but convey emotion, overtly or implicitly. So, the kind of consciousness that develops in people living in oral cultures is shaped by the emotionality of the voice. In such cultures, the knower and the thing known are seen typically as inseparable. On the other hand, in literate cultures, the kind of consciousness that develops is shaped by the structural effects produced by the writing medium. The written page, with its edges, margins, and sharply defined characters organized in neatly-layered rows or columns, induces a linear-rational way of thinking in people. In such cultures, the knowledge contained in writing is perceived as separable from the maker of that knowledge primarily because the maker of the written text is not present during the reading and understanding of h/er text, as s/he is in oral communicative situations. The spread of literacy through the technology of print since the Renaissance has been the determining factor in the objectification of knowledge in the modern world and thus the main factor in the process of globalization.
Because of this, the great communications theorist Marshall McLuhan (1911-1980) characterized the modern world as the “Gutenberg Galaxy,” after the German printer Johannes Gutenberg (1400?-1468?), who is considered the inventor of movable type in Europe. Through books, newspapers, pamphlets, and posters, McLuhan argued, the printed word became, after the fifteenth century, the primary means for the propagation of knowledge and ideas. More importantly, given the fact that the book could cross political boundaries, the Gutenberg press set in motion the globalization of culture. Paradoxically, however, as McLuhan (1962) observed, this process did not simultaneously lead to the elimination of tribalism in the human species—one of the evolutionary milestones that led to the invention of culture (chapter 1, §1.3). On the contrary, he claimed that it was impossible to “take the tribe out of the human being”, so to speak, no matter how advantageous a technologized global culture would seem to be. McLuhan insisted that tribal tendencies resonate continually within the psyche of modern-day people, constituting the root cause of the sense of alienation that many modern individuals tend to feel living in large impersonal social settings.
By the start of the twentieth century, the great advances made in technology had objectified human consciousness and cultural signifying orders to a high degree throughout a large part of the world. A second technologically-engendered cultural revolution occurred at mid-century that, by century’s end, had objectified consciousness even further—namely, the revolution set in motion by the astounding technological accomplishments in electronics and, especially, in computer science. Since the 1980s, the computer has become so interconnected with the signifying orders of modern cultures that it is accurate to say that we live no longer in “Gutenberg’s Galaxy” but in “Babbage’s Galaxy,” to coin an analogous term after the person who designed the first true computer—Charles Babbage (1791-1871). Today’s personal computers can store the equivalent of thousands of books. Within seconds, anyone with a modem can access an enormous store of human information. Almost every text we consider meaningful or practical has been transferred to computer memory systems. Print technology opened up the possibility of founding a “world civilization”; computer technology has brought that possibility closer and closer to realization.
The first general-purpose all-electronic computer was built in 1946 at the University of Pennsylvania by the American engineer John Presper Eckert, Jr., and the American physicist John William Mauchly. Called ENIAC, for Electronic Numerical Integrator And Computer, the device contained 18,000 vacuum tubes and could perform several hundred multiplications per second. ENIAC’s program was wired into its processor, so that reprogramming required manual rewiring. The development of transistor technology and its use in computers in the late 1950s made possible the advent of smaller, faster, and more versatile machines than could be built with vacuum tubes. Because transistors use much less power and have a much longer life, this development alone was responsible for the improved machines called “second-generation computers.” Late in the 1960s the integrated circuit was introduced, making it possible for many transistors to be fabricated on one silicon chip, with interconnecting wires plated in place. In the mid-1970s, with the introduction of large-scale integrated circuits with many thousands of interconnected transistors etched into a single silicon chip, the modern-day personal computer was just around the corner.
Modern computers are all conceptually similar, regardless of size. The features of their design and operation have become modern-day analogues of human mental design and operation, as we discussed in the second chapter (§2.4). So, it is worthwhile here to cast a schematic glance at these features. The physical and operational system of the computer is known as its hardware. This is composed of five distinct components:
- a central processing unit made up of a series of chips that perform calculations and that time and control the operations of the other elements of the system;
- input devices, such as a keyboard, that enable a computer user to enter data, commands, and programs into the central processing unit;
- memory storage devices that can store data internally (RAM) and externally (tapes, disks, etc.);
- output devices, such as the video display screen, that enable the user to see the results of the computer’s calculations or data manipulations;
- a communications network, called a “bus,” that links all the elements of the system and connects the system to the external world.
The computer’s hardware system is directed by a program. This is a sequence of instructions that tells the hardware what operations to perform on data. Programs can be built into the hardware itself, or they may exist independently in a form known as software. A general-purpose computer contains some built-in programs or instructions, but it depends on external programs to perform useful tasks. Once a computer has been programmed, it can do only as much or as little as the software controlling it at any given moment enables it to do. A wide range of applications programs are in use, written in specialized programming languages.
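The hardware-software relationship described above can be sketched in a few lines of Python. This toy machine is a hypothetical illustration, not drawn from the text: the interpreter function plays the role of the fixed hardware, while the “program” is simply a sequence of instructions that tells it what operations to perform on data.

```python
# A toy illustration (hypothetical, for exposition only) of a program as a
# sequence of instructions directing a fixed piece of "hardware."

def run(program, value=0):
    """Execute a list of (instruction, operand) pairs on a single accumulator."""
    for op, arg in program:
        if op == "LOAD":      # place a number in the accumulator
            value = arg
        elif op == "ADD":     # add a number to the accumulator
            value += arg
        elif op == "MUL":     # multiply the accumulator by a number
            value *= arg
        else:
            raise ValueError(f"unknown instruction: {op}")
    return value

# The same "hardware" (the interpreter) does whatever the software tells it:
result = run([("LOAD", 2), ("ADD", 3), ("MUL", 4)])  # (2 + 3) * 4 = 20
```

Changing the instruction list changes what the machine does without altering the machine itself, which is precisely the sense in which a general-purpose computer depends on external programs to perform useful tasks.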
As discussed in chapter 2 (§2.4), the design features of the computer lend themselves as convincing analogues for the structure and functioning of the human mind. This is why in cognitive science and artificial intelligence circles parallel-processing computers have been constructed with the specific purpose of duplicating the complex functions of human thought. But it is wrong, in our view, to assume a similarity between human and machine hardware systems. The former grow out of lived experience and historical forces; the latter have been invented by humans themselves. The idea that computers can think like humans is really no more than a modern-day version of animism that can be called machinism or computerism—namely, the view that our machines are us (humanoids) and that we are machines (protoplasmic replicators).
This new manifestation of objectification has become an all-pervasive one. In part, it has been energized by the remarkable advances in the technology of computer hardware, software, and networks. In Babbage’s Galaxy, such networks reinforce the illusion that knowledge and information exist independently of their makers, even more so than did the book in Gutenberg’s Galaxy. But human signs are not like computational data that can be neatly dismissed as true or false. Rather, they provide perspective, emotivity, and other impenetrable aspects of human knowing. Essentially, the human mind cannot study or reproduce itself.
The science of artificial intelligence is providing a highly technical theoretical apparatus for modeling certain aspects of human cognition in computer software. But it will never be able to answer the question of what the mind is. As the philosopher Vico (in Bergin and Fisch 1984: 123) aptly put it, human beings can never really understand what they themselves have not made. Since human beings have not made the mind, there is no way that they will ever really understand it. Many theories have been devised to explain it. But it is difficult to separate theory-making from the activity of thinking itself. The result is always unsatisfactory. In our view, the science of semiotics provides a much more practicable agenda for studying the mysteries of the mind, for the simple reason that the mind reveals itself in the forms and contents of signs, codes, myths, stories, works of art, and other expressive phenomena, including computer languages.
In his book Mental Models (1983), the psychologist P. N. Johnson-Laird provided a useful framework for talking about the mind. According to Johnson-Laird, there are three basic types of machine consciousness:
1. “Cartesian machines” that do not use symbols and lack awareness of themselves;
2. “Craikian machines” (after Craik 1943) that construct models of reality, but lack self-awareness;
3. self-reflective machines that construct models of reality and are aware of their ability to construct such models.
The computer software designed to simulate human mentality produces forms of consciousness of types (1) and (2). But only human beings are capable of the type (3) form. Unlike a Cartesian or a Craikian machine, a human being is not only capable of constructing models of h/er mind, but is aware of doing so. Indeed, we become conscious when we engage in mentally simulating our thoughts and experiences.
The computer is one of Homo culturalis’s greatest intellectual achievements. It is an extension of h/er rational intellect. As a maker of objects and artifacts, s/he has finally come up with a machine that will eventually take over most of the arduous work of ratiocination. This could then leave h/er imagination much more time to search out new associations in poetry, art, and music, of which only creatures with a human body and a human mind, reared in a human culture, are capable. The caveat issued by the art historian Arnheim (1969: 73) is still valid today: “There is no need to stress the immense practical usefulness of computers. But to credit the machine with intelligence is to defeat it in a competition it need not pretend to enter.”
In Sumerian and Babylonian myths there were accounts of the creation of life through the animation of clay (Watson 1990: 221). The idea of bringing inanimate matter to life has, in fact, captivated the human imagination since at least the beginnings of recorded history. Since the publication of Mary Shelley’s grotesque and macabre novel Frankenstein in 1818, the idea that robots could be brought to intelligent life has been pursued relentlessly. Homo technologicus, the ingenious and resourceful descendant of Homo culturalis, now finds h/erself at the center of everything. But as William Barrett (1986: 160) aptly notes, despite h/er great intellectual resources, s/he is essentially an unsatisfied and unsatisfiable descendant. The reason for this is simply that objectified consciousness is a disembodied consciousness, at odds with the human being’s true sensorial, emotional, and imaginative nature.
There is one other manifestation of objectification that merits some discussion here by way of conclusion. In a society where objects of all kinds are being produced for mass consumption, there arises an incessant craving for new objects. The semiotician Roland Barthes (1915-1980) referred to this excessive form of objectification as “neomania” (Barthes 1957). To encourage the acquisition of objects, obsolescence is, in fact, regularly built into the marketing strategies of a product, so that the same product can be sold again and again under new guises. This is why advertisers rely on a handful of Epicurean themes—happiness, youthfulness, success, status, luxury, fashion, beauty—to peddle their products. The implicit message in all advertising is that solutions to human problems can be found in buying and consuming.
The emphasis on consumerism has even spawned its own art forms. Shortly after World War II, a new artistic movement called pop art (short for popular art) was inspired by the mass production and consumption of objects. For pop artists, the factory, supermarket, and garbage can became their art school. Despite its apparent absurdity, many people loved pop art, however controversial or crass it appeared to be. In a certain sense, the pop art movement bestowed on people the assurance that art was for mass consumption, not just for an élite class of cognoscenti. Some artists duplicated beer bottles, soup cans, comic strips, road signs, and similar objects in paintings, collages, and sculptures; others simply incorporated the objects themselves into their works. In a phrase, pop art is the product of the imagination of Homo technologicus. But ultimately, it is a transitory art. Along with pop music, blockbuster movies, bestseller novels, television programs, fashion shows, and most commercial products, it is destined to become quickly obsolete.
The pop art movement emerged in the 1950s, when painters like Robert Rauschenberg (1925–) and Jasper Johns (1930–) wanted to close the gap between traditional art and the mass-culture aesthetics of consumerist life. Rauschenberg constructed collages from household objects such as quilts and pillows, Johns from American flags and bull’s-eye targets. The first full-fledged pop art work was Just What Is It That Makes Today’s Home So Different, So Appealing? (1956, private collection) by the British artist Richard Hamilton. In this satiric collage of two ludicrous figures in a living room, the pop art hallmarks of crudeness and irony are emphasized.
Pop art developed rapidly during the 1960s, as painters started to portray brand-name commercial products, with garish sculptures of hamburgers and other fast-food items, blown-up frames of comic strips, or theatrical events staged as art objects. Pop artists also appropriated the techniques of mass production. Rauschenberg and Johns had already abandoned individual, titled paintings in favor of large series of works, all depicting the same objects. In the early 1960s the American Andy Warhol (1928–1987) carried the idea a step further by adopting the mass-production technique of silk-screening, turning out hundreds of identical prints of Coca-Cola bottles, Campbell’s soup cans, and other familiar subjects, including identical three-dimensional Brillo boxes (chapter 8, §8.4).
Using images and sounds that reflect the materialism and vulgarity of modern consumerist culture, pop artists seek to provide a view of reality that is more immediate and relevant than that of past art. They want the observer to respond directly to the object, rather than to the skill and personality of the artist. Ultimately, however, the pop art movement may be no more than a symptom of life in Babbage’s Galaxy, the condition psychologists call alienation: a sense of rootlessness stemming from excessive forms of materialism.