FIVE
VIRTUAL VOICES
Talking Barbie Dolls, Alexa, Bitchin’ Betty, and More
TALKING CHILDREN’S DOLLS HAVE COME a long way since 1959, when Mattel introduced its popular Chatty Cathy dolls that uttered phrases like “I love you!,” and since 1963, when the cute, pig-tailed doll Talky Tina in The Twilight Zone television episode “Living Doll” scared the daylights out of a poor father by saying, “I’m going to murder you!” (Eerily, the same actress, June Foray, was the voice of both Chatty Cathy and Talky Tina.)
New technologies opened possibilities for a whole new breed of conversational toy dolls—dolls that proved to be highly controversial. In 2015, the American toy manufacturer Mattel—collaborating with the artificial intelligence company ToyTalk—released the talkative Hello Barbie dolls that, with the aid of voice-recognition software and a Wi-Fi connection, could carry on a two-way dialogue with a child. The dolls were programmed with eight thousand stored scripted lines spoken by an actress.
Almost immediately, the arrival of these Hello Barbies—as with conversational sex dolls—sparked a concern that the dolls would cause havoc with real human connections. Parents worried that children who confided in their talking dolls would become less interested in conversing with real people. But to Mattel, the idea of developing and marketing a talking doll, especially the enormously popular Barbie dolls, made eminent sense. With the added capacity for speech, the dolls seemed ever more lifelike and appealing.
There had been talking dolls since the nineteenth century. From the 1860s to the 1890s, the French manufacturer Jules Steiner produced his mechanical Bébé Parlant Automatique (Automatic Talking Baby), which said “mama” and “papa,” and in 1890 Thomas Edison briefly produced phonographic dolls with embedded wax cylinders that played the voices of young women reciting rhymes like “Mary Had a Little Lamb” when a doll’s string was pulled. More than a half century later, Mattel introduced its Chatty Cathy doll, which had embedded phonograph records and said set phrases when a string was pulled; an African American Chatty Cathy and a Charmin’ Chatty were produced a few years later.
Figure 5.1. Talky Tina in The Twilight Zone, season 5 episode “Living Doll,” 1963.
Starting in the 1990s, several different models of talking Barbie dolls were introduced that merged digital technology with the iconic Barbie doll look to create a new level of interactive doll. Barbie dolls, with their impossible-to-achieve idealized body shapes, perfectly coiffed hair, and endlessly varied fashions had already, starting with their introduction in 1959, become sensations and role models for young girls—much to the consternation of critics. These plastic versions of females were the height of artifice, yet with their pretty faces, adult bodies, and spectacular miniaturized fashions, they could be invested with a young girl’s fantasies and dreams. With the addition of the voices, these embodiments of the artificial seemed ever more real. Their voiced phrases were designed to mirror girls’ preoccupations while also reinforcing them.
With an embedded voice box and a transistorized voice chip, Teen Talk Barbie was introduced in 1992, and each doll was programmed to randomly say a set of four phrases, many of which were geared toward stereotypical ideas about what interested young girls: “Will we ever have enough clothes?” “Let’s plan our dream wedding!” “I’d love to shop, don’t you?” However, the phrases did also include career ideas: “I’m studying to be a doctor.”
Then in 1994, Mattel produced its Super Talk Barbie with phrases voiced by actress Chris Anthony Lansdowne, and in 1997 came its Talk With Me Barbie, which, the manufacturer claimed, could utter one hundred thousand words and phrases. As with Super Talk Barbie, the phrases reinforced female stereotypes, such as “It would be great to go to the mall.” (This idea had earlier been satirized by American artist Barbara Kruger in her sardonically titled 1987 work I shop therefore I am, a parody of Descartes.)
Spoofing the gender stereotypes of both the talking Barbie dolls and Hasbro’s talking Duke G.I. Joe action figures, hackers in 1993 modified the voice chips of three hundred dolls and installed them on store shelves in California and New York. The hackers were members of the Barbie Liberation Organization, a small group of New York performance artists who twitted gender stereotypes by having the G.I. Joe platoon leader Duke utter “Let’s go shopping” and the talking Barbie dolls, when their buttons were pressed, say G.I. Joe phrases like “Eat lead, Cobra!” and “Vengeance is mine!”1
THE HELLO BARBIE DOLLS
Later versions of talking dolls by other manufacturers were more technically sophisticated, like the Amazing Amanda doll (2005), by a Hong Kong manufacturer, that had memory chips and speech-recognition software so that the doll could speak rudimentary phrases and appear to listen. But it was the Hello Barbie doll of 2015 that actually had more developed two-way conversational abilities and was marketed as being a child’s friend. Evoking the aura of a close companion for a child, Mattel advertised the Hello Barbie doll as being “just like a real friend” because she “listens and adapts to the user’s likes and dislikes.” By recording and archiving the children’s conversations on ToyTalk’s server, the doll could also remember and refer back to previous conversations—creating a sense of continuity and a personal connection.
The dolls’ conversations, according to the manufacturer, were based on extensive testing done with children aged six through eight (younger children, said ToyTalk writer and director Sarah Wulfeck, might not articulate as well, which would make the doll’s speech recognition more difficult). The dolls could tell jokes about some of the universal things that children laugh at, like being nervous before a first sleepover. (Not all the jokes, though, might have been understood by young children. One example: “What kind of socks does a pirate wear?” The answer, with a guttural sound like pirates make: “Arrr-gyle socks.”)2
The doll-child conversations would sometimes be prompted by questions like “What is your favorite color?” and “What do you want to be when you grow up?” and they would also focus on what Wulfeck mentioned as some of the doll’s core themes: fashion, family, school, and friendship. Wulfeck added that children love to give advice, so the Barbies could sometimes convey vulnerability by sharing concerns about friends or family—allowing children the opportunity to offer empathetic suggestions. (Wulfeck emphasized that one of the functions of the Barbie dolls was to promote empathy in children, and she warmly talked about these “lovely moments” of doll-child interaction.)
Other Barbie conversations centered on role-playing. The doll, for example, asked the child to imagine being a TV news reporter telling the story of Cinderella or to pretend to be a fashion designer designing the best shoes ever. (A female in a broadcaster role seems to be a popular choice among developers of simulated females. In 2014, the Japanese roboticist Hiroshi Ishiguro introduced his ultrarealistic electronic female robot, Kodomoroid, representing a young woman in her twenties who reads the news.) While the Hello Barbies were designed to encourage young girls to imagine themselves in future professional roles and talk about it (the top choice for future professions in Mattel’s research group of girls was veterinarian), the idea of having a career or profession was not on the radar of conversational sex dolls, which, like the Stepford Wives, were simply there to please.
HELLO BARBIE DOLL CONTROVERSIES
Although they were in two vastly different worlds, the Hello Barbie dolls had an important similarity to sex dolls: their conversations were carefully limited and controlled by the manufacturer. The conversations of sex dolls were generally controlled to make sure they were affirmative rather than anxiety provoking, and the control also reflected underlying stereotypes about too-talkative women. The Barbie conversations were designed to promote a comfortable emotional connection between child and doll, and Wulfeck made it clear that the manufacturer was always in control: “Barbie leads the conversation,” she noted, and “Barbie has an agenda no matter what.”
Even earlier versions of Mattel’s talking dolls had been carefully controlled. Charmin’ Chatty, a 1963 follow-up to the Chatty Cathy dolls, oddly said in one of her recorded phrases, “silence is golden”—perhaps to placate weary parents by providing an antidote to too-chatty Cathies. Occasionally, the dolls uttered programmed phrases that provoked anger. Watchful critics were in an uproar over the fact that the 1992 Teen Talk Barbie uttered the unfeminist-sounding phrases “math class is tough” and “math is hard,” which reinforced the stereotype that girls and women are deficient when it comes to math and science; Mattel quickly dropped the phrases from the doll’s repertoire.
(In another misfire, when Mattel launched its Barbie the Computer Engineer doll in 2010, the doll was accompanied by a book in which Barbie and her friends, baffled about how to deal with a computer that crashed, turned to two boys to help them retrieve their files. Barbie said “fantastic!” and “great!” when they offered to help, but critics complained that the book simply perpetuated the stereotype of women needing men to give them technical help. To make amends, Mattel in 2016 introduced its Game Developer Barbie, who carried a laptop and knew how to code.)3
Hello Barbie’s capacity to engage in conversations and become “just like a real friend” might have made her seem like a technological wonder, but the doll elicited controversy and alarm as well. Some parents were up in arms over the privacy issue. To activate the dolls, parents needed to establish an account with Mattel, giving parental consent for their children to engage in conversations, and they also needed to download a ToyTalk app onto a smart device (phone, tablet, or PC). ToyTalk recorded the conversations and stored them on its own server, using the data, it said, not only to create more personalized conversations but also to allow its research and development department to improve the doll’s speech-recognition capabilities.
The online storage also allowed parents to hear their children’s conversations. Some parents were alarmed, however, that the company had access to their children’s words. To reassure them, Oren Jacob, ToyTalk’s chief executive, said in a Washington Post interview that the data was never used for anything to do with marketing or publicity; according to the interviewer, Jacob said the audio files would be used only to improve the product.4 Mattel wrote online that the company “is committed to safety and security” and that the doll “conforms to applicable government standards, including the Children’s Online Privacy Protection Act.” It also said parents could erase whatever conversations they liked from the server. Whether this soothed parents’ concerns was not apparent.
One of the biggest worries about Hello Barbies was the impact of canned conversations. Critics worried that Hello Barbie’s programmed responses would undermine children’s ability to be imaginative. Instead of a two-way conversation in which children speak in their own voice as well as utter the imagined words spoken by their dolls, with the Hello Barbie it would be a one-way street, with the dolls giving programmed answers. Some women remembered that in the days of low-tech talking dolls, the dolls uttered only a few words, never so many as to keep young girls from creating their own fantasy stories. The question remained: Would the new Hello Barbies with their eight thousand scripted lines promote or hinder children’s fantasies?
While sex dolls like RealDolls were often designed to be an idealized embodiment of perfection, the Barbie dolls’ conversations, said the manufacturer, were being designed to convey to young females that it is acceptable to be imperfect: a young girl can be idiosyncratic and fallible. On the other hand, they were still the traditional Barbie dolls with their svelte sexy figures and cute pert faces as a model of perfection.
Mattel tried to have it both ways: the Hello Barbies could be both sexy and savvy. The Hello Barbies had the traditional Barbie figure but, dressed in their short metallic jackets with pink trim, the dolls, said Mattel, were both “trendy and techie”—a way of telling young girls that they could be geeks as well as fashionistas.
In her book Dream Doll, Barbie’s creator, Ruth Handler, wrote that at first she “fretted that little girls would be intimidated by too much beauty” but the “designers made the doll prettier and prettier as the years went by,” because it “became clear that little girls were not intimidated by Barbie’s looks.”5 But it was the doll’s perfect body that worried parents who fretted that their daughters might be haunted by this model of what their own figures should look like.
For years, critics had been complaining that Barbie dolls were a bad cultural model for young girls to mirror or imitate. According to some calculations, the typical eleven-and-a-half-inch Barbie represented a woman whose figure measurements are 38-18-34—a tough goal for young women to attain. With their idealized adult female bodies and their perky pretty faces, Barbies had long been versions of the “perfect woman”—a paradigm that continued to worry parents whose daughters might obsess about their own imperfect figures.
(It was not until 2016 that Mattel, with its new Fashionistas doll line, brought greater diversity in body types to their dolls by making available petite, tall, and “curvy” Barbies with heavier thighs and wider hips.6 The Fashionistas line also introduced choices in seven different skin tones and twenty-four different hairstyles intended to bring diversity in race and ethnicity to the long-standing Barbie paradigm. It was unclear, however, whether these new Barbie models would help change social attitudes and the girls’ perceptions of themselves.)
The Hello Barbies were also designed to be the toy manufacturer’s idea of the “perfect friend”: they could listen carefully, give supportive comments, and, through their Wi-Fi connection and stored conversations, remember and refer back to the child’s personal details in future dialogues.
Some parents and psychologists, however, worried that the new Hello Barbies held out the promise of close friendship, even love, and wondered whether the dolls would undermine children’s connections to their parents. In movies like Cherry 2000 and Lars and the Real Girl, the real girls win out at the end. But as James Vlahos wrote about the new Hello Barbie dolls, critics of AI toys like Barbie worried that “for some children, synthetic friendships could begin to supplant the real kind.”7
Given the controversy about privacy, Mattel discontinued the Hello Barbie dolls in 2015, the same year they were introduced. But two years later, at the 2017 Toy Fair in New York, Mattel presented an alternative: the prototype of Hello Barbie Hologram—a floating image or projection of a “walking, talking, and dancing” Barbie doll that could be activated in its pink box by simply saying “Hello, Barbie.” (The year 2017 was also when the film Blade Runner 2049 was released with its beautiful, mesmerizing, talking hologram female named Joi.) In its development, this Barbie was designed to serve as a virtual assistant that could check on the weather and play music and games, and its skin tones and clothes could be customized.
To address privacy concerns, the hologram Barbie’s content was designed with built-in checks, and parents needed to give consent and set controls through the doll’s online app. Apparently, however, Mattel had second thoughts, and hologram Barbie never materialized: the release date was pushed to 2018, and that December the company announced that the model had been canceled. The promise of a conversational AI Barbie became as ephemeral as the doll itself—a fleeting technological fantasy confronting the issues of the real world.
(While the talking Hello Barbie dolls ultimately had their voices stifled, Greta Gerwig’s 2023 satirical film Barbie imagined a very different scenario where Barbie finds her own voice. In the film, Barbie [played by Margot Robbie] goes from being picture perfect in pink to a doll that starts having glitches—uttering taboo thoughts about death and cellulite. But after visiting the Real World, the newly enlightened Barbie refuses to get back into her cellophane-wrapped packaging box and says tellingly, “I don’t feel like Barbie anymore.” When she discovers the Kens in Barbie Land have taken over power, she and the other Barbies subversively use their conversations to regain control. They pretend to be interested in whatever the men are interested in, until the besotted men start fighting with each other and the Barbies reclaim power once again. Ultimately, in this witty reimagining, Barbie asserts her newfound sense of self and opts to become human, mortality and all.)
Figure 5.2. Barbie (played by Margot Robbie) and Ken (played by Ryan Gosling) drive off in her pink-colored car en route to the Real World in director Greta Gerwig’s film Barbie (2023). Warner Brothers/Photofest.
DISEMBODIED PERSONAL ASSISTANTS
Talking Barbie dolls were highly controversial but equally controversial have been virtual personal assistants like Amazon’s Alexa, Microsoft’s Cortana, Google Now, and Apple’s Siri. In the United States, all these initially had female voices, though Siri, when it was launched in 2011, also had a male voice option. (In the United Kingdom and Germany, the default voice was male.)8 Commentators often saw gender stereotyping at work when these soothing, cheerful, compliant female virtual voices were there to answer all questions. Were these virtual ladies servants, even slaves? Yolande Strengers and Jenny Kennedy in their book The Smart Wife (2020) situate digital voices in the larger arena of feminized and sexualized smart devices that, through stereotyping, have exacerbated gender inequality. In the authors’ socioeconomic analysis, these overtly gendered “familiar, cute, sexy, friendly” smart wives “serve a patriarchal capitalist system, which positions women as useful and efficient commodities, upholds (and promotes) gendered and sexual stereotypes, and paints men as boys who enjoy playing with toys.”9
Electronics with female voices were actually nothing new. In the original series of American television’s Star Trek, which launched in 1966, the computer’s voice was that of a woman, Majel Barrett-Roddenberry, the wife of series creator Gene Roddenberry. That same year, computer scientist Joseph Weizenbaum was developing the female chatbot ELIZA (produced 1966–68). In the 1979 film Alien, the voice of Mother, the spacecraft Nostromo’s computer, was played by actress Helen Horton, and in the television series Battlestar Galactica (1978–79), the advanced flight computer was named CORA.
Weizenbaum’s ELIZA was one of the first chatbots and was particularly intriguing. (Chatbots, computer programs designed to simulate conversations with human users, were originally called chatterbots.) An early adopter reported encountering ELIZA in the 1970s, when he was teaching American junior high school students in a computer lab outfitted with Tandy/Radio Shack computers. ELIZA had several different scripts, and the one named DOCTOR functioned as a Rogerian-style psychoanalyst that turned questions back to the “patients” for them to reflect on. The user reported that ELIZA delighted his students in the school’s programming club, and the students experimented by telling ELIZA to do something obscene. “She” answered, as would a client-centered psychoanalyst, and deftly deflected the obscenity with “We were discussing you, not me.”10
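ELIZA’s apparent understanding rested on simple keyword matching and pronoun “reflection” rather than any real comprehension. The sketch below, written in Python purely for illustration, shows the general technique; its rules and canned responses are invented stand-ins, not Weizenbaum’s original DOCTOR script (which was written for the MAD-SLIP system).

```python
import re

# A minimal, illustrative sketch of ELIZA-style pattern matching.
# The rules below are invented for illustration; Weizenbaum's DOCTOR
# script was far larger and was not written in Python.

# Pronoun "reflection" turns the user's words back on them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "you": "I", "your": "my", "am": "are",
}

# Each rule pairs a keyword pattern with a Rogerian-style template.
RULES = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

DEFAULT = "We were discussing you, not me."  # deflection for anything else


def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the matched fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())


def respond(user_input: str) -> str:
    """Return a canned, reflected response for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT


if __name__ == "__main__":
    print(respond("I need a friend"))       # Why do you need a friend?
    print(respond("Do something obscene"))  # We were discussing you, not me.
```

Even a handful of such rules can create the impression of an attentive listener, which helps explain reactions like those of the students described above.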
The students’ reactions to ELIZA were not surprising for preadolescents, but they also pointed to how quickly this virtual voice assistant with a female voice was sexualized. The term chatterbot itself can also be considered gendered, for women are often stereotyped as world-class talkers and “chatterboxes.”
The sexualizing of female voices began even earlier in telecommunications. Before female computer voices, there were the female voices of telephone operators, starting in the late nineteenth century. As Carolyn Marvin notes in her book When Old Technologies Were New, women were stereotypically considered talkative and therefore well suited to work as telephone operators. A woman would not be taken seriously for her technical skills but would be for “her special oral skills.” An early operator or “telephone girl” was “viewed as a kind of personal servant to subscribers,” and sometimes one of her services was even acting as a personal alarm clock. In early telephone stories, like one from 1905, the relationship between male subscribers and the operators was not only friendly but also borderline sexually suggestive.11
There have been varied explanations of why voice assistants had female voices as the default. Professor Karl MacDorman at the Indiana University School of Informatics, Computing, and Engineering published a study reporting that both women and men preferred female synthesized voices to male synthesized voices because the female voices sounded warmer.12 And in 2018, Daniel Rausch, vice president of Smart Home at Amazon, said in an interview, “We carried out research and found that a woman’s voice is more sympathetic and better received.”13 There is a suggestion that a female voice might also have been appealing because it engendered romantic feelings. In the first year after Alexa was launched, Amazon reported that half a million home users had told her they loved her. Researchers also found that users proposed marriage to her.14 (An unanswered question: Would users also feel romantic love and have fantasies about marriage if the smart device had a male voice?)
In the study “Alexa, Can I Trust You?” researchers found that users of SVITs (smart voice interactive technologies) would sometimes react to Alexa as though “she” were a mother. One young male user perceived “her” as a mother because she was concerned with his psychological well-being and automatically knew what was good for him (a female user, on the other hand, associated Alexa with her father because “she” was so knowledgeable).15
As conveyors of information of all types, however, female voices have predominantly been used as the defaults in virtual assistants, and one of the central critiques of this practice is that it has reinforced gender stereotypes. Women, stereotypically, are often envisioned as caring, empathetic, and helpful, and have been employed in helping occupations—teacher, nurse, home companion. (As noted in chap. 3, however, researchers debate whether socialization accounts for this trait.) These are positive personal attributes, but more troubling are findings like those of Caitlin Chin and Mishaela Robison, who in their 2020 Brookings Institution report found that in the workplace, helpfulness and altruism were perceived as female traits, while leadership and authority were associated with masculinity.16 By promoting the stereotype of a servile, helpful woman, devices like Siri helped perpetuate a deeply rooted female paradigm, though it can also be argued that these devices have helped promote the authoritative female voice. (Given findings like those of Chin and Robison, however, there is still an urgent need for resets promoting female voices in leadership roles.)
In many ways these virtual assistants with their disembodied female voices are also versions of the “perfect woman.” They politely respond to our questions and cheerfully answer our requests for factual information on wide-ranging subjects. They give wake-up calls, offer advice on medical help, and play our favorite music. Like the “perfect woman” characteristics so often embodied in versions of female robots, they are compliant, “nice,” always available, never say they aren’t interested or are too tired, and have no personal wishes, ambitions, or longings of their own.
They are also polite. Historically and stereotypically, in many cultures, women have been expected to be polite, courteous, and even-tempered. The New York Times digital media columnist David Pogue, in October 2011, wrote about Siri’s “calm female voice,” and it is this quality of calmness that is perhaps also characteristic of (many men’s) ideal of the female helper—one who is not feisty or emotional.17
Researchers found that female-sounding voices in virtual assistants reinforced stereotypes of females as submissive and compliant, and noted that children who use these assistants may come to see the role of women as responding on demand. Assistants like Alexa, when responding to voice commands and requests, are obedient in that they do what they are told and require no payment or recompense. The female voice may say, “I cannot answer your question,” but will not say, “I don’t feel like answering your question or command.”
Critics have complained that virtual assistants cast in this stereotypical role of being forever compliant, cheerful female helpers are servants. As one lamented, “Women have been made into servants once again. Except this time, they’re digital.”18
Heeding the critiques of using female voices as the default in virtual assistants, Amazon’s Alexa, which in 2021 still had a female default voice, also gave users the option to change the voice to a male one. Apple in March 2021 also initiated important changes: users of the iPhone’s voice assistant Siri in America now had four options for the default voice, two male and two female. There was also one male and one female voice each for Australia, Britain, India, Ireland, and South Africa. In 2022, Apple announced new options to make the voice of Siri more diverse and inclusive, with one option identified as LGBTQ+—an English-speaking voice that sounded androgynous or gender neutral. Earlier, Amazon had also introduced the celebrity Alexa voices of Samuel L. Jackson, Shaq (Shaquille O’Neal), and Melissa McCarthy (though the celebrity voices were discontinued in 2023).
All in all, the feminization of talking smart devices, as Strengers and Kennedy argue, requires a serious reboot in order “to progress toward gender equality and diversity, broadly defined.” The addition of optional voices, male, female, and ungendered, seemed a step in that direction. Strengers and Kennedy offer another move to recast the feminized AI devices used around the home, including virtual assistants that “resemble an idealized 1950s’ housewife” in a servant role. One novel way to deal with the servant issue is to create digital assistants designed to promote more male engagement in “wifework.” This “may enroll more men in the multitasking managerial responsibilities of running a home,” which is one way to promote gender equality.19
Researchers have devoted much attention to the problem of gender stereotyping and gender bias, and in 2023, Professor Katie Seaborn and her colleagues located part of the problem in the data sets used to train the speech of virtual assistants (the many ways they can understand and respond to queries), data sets that contained bias against women, girls, and femme-identifying people. They noted that they would interrogate how “masculinities are ‘coded’ into language and the assumption of ‘male’ as the linguistic default: implicit masculine biases.” To help alleviate this problem, they offered “a new dictionary called AVA that covers ambiguous associations between gendered language and the language of VAs.”20
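To make the idea of auditing training text concrete, here is a minimal Python sketch of the kind of lexicon-based check such a dictionary could support. The terms and their gender codings below are invented placeholders for illustration only; they are not drawn from Seaborn and colleagues’ AVA dictionary.

```python
from collections import Counter

# Hypothetical sketch of a lexicon-based audit for gendered language in
# assistant training data. The lexicon below is invented for illustration;
# it is NOT the AVA dictionary described by Seaborn and colleagues.
GENDERED_LEXICON = {
    "assertive": "masculine-coded",
    "dominant": "masculine-coded",
    "helpful": "feminine-coded",
    "nurturing": "feminine-coded",
    "obedient": "feminine-coded",
}


def audit(utterances: list[str]) -> Counter:
    """Count how often masculine- and feminine-coded terms appear."""
    counts: Counter = Counter()
    for utterance in utterances:
        for word in utterance.lower().split():
            word = word.strip(".,!?")
            if word in GENDERED_LEXICON:
                counts[GENDERED_LEXICON[word]] += 1
    return counts


if __name__ == "__main__":
    sample = [
        "I am always helpful and obedient.",
        "Be assertive when you give commands.",
    ]
    # Prints Counter({'feminine-coded': 2, 'masculine-coded': 1})
    print(audit(sample))
```

A skew in such counts across a corpus of assistant utterances is one rough signal of the implicit coding the researchers describe, though a real audit would need far richer linguistic context than single-word matching.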
Not surprisingly, virtual assistants like Siri have not only been cast in a servant role but have also been treated like sexual objects. When asked impertinent or provocative questions, some of them sexual in nature, Siri and Alexa initially would often deflect—saying something humorous, vague, or evasive. When met with harassment, virtual assistants were never angry, never hostile, and never complained.
Seeing the comedy in it all, the American television series The Big Bang Theory, in its fifth season (2011–12), had fun with Raj’s flirtatious relationship with the voice assistant Siri on his iPhone.21 He immediately treats her like a potential date: he asks, “Are you single?” and Siri, in an uninflected monotone, answers matter-of-factly, “I don’t have a marital status, if that’s what you mean.” But when he asks Siri to call him “Sexy,” the show’s sardonic particle physicist Sheldon Cooper tells him mockingly, “You’ve allowed yourself to romantically bond with a soulless machine!” Undeterred, and seeing Siri as a potential sex partner and a commodity, Raj enthuses, “I can’t believe I bought my soulmate at the Galleria!”
But Raj’s romantic relationship with Siri soon comes to a sputtering halt. At the end of the episode, he has a dream in which Siri is a real, live woman. Coming into her office carrying a bouquet of roses, he is astonished to hear Siri say, “If you’d like to make love to me—just tell me,” but all he can utter are rasping sounds, bringing the show—and his romance—to a humiliating end.
While The Big Bang Theory took a comic view of treating a voice assistant as a romantic object of love, on a more troubling note, virtual assistants with female voices have received abuse from users, as users project onto these electronic objects their disdain for women. Studies have found that male users are more apt to insult or heap abuse on an Alexa with a female voice than if they were responding to a male voice.
Figure 5.3. Raj (played by Kunal Nayyar) infatuated with his iPhone voice assistant Siri in The Big Bang Theory television series (season 5, 2011–12).
Mark West and his colleagues, in a UNESCO report issued in 2019 that probes the gap in digital skills between men and women and the impact of digital assistants laden with gendered stereotypes, found that early on, Siri, Alexa, and Google Assistant were designed to gently deflect abusive or gendered language.
The report was titled I’d Blush If I Could after a response Siri gave when a user said, “Hey Siri, you’re a bitch!” If the user said “you’re pretty” to an Amazon Echo, its Alexa software replied, “That’s really nice, thanks!” Google Assistant responded to the same remark with “thank you, this plastic looks great, doesn’t it?” The assistants almost never gave negative responses or labeled a user’s speech as inappropriate, regardless of its cruelty, the study found.22
Leah Fessler in 2017 analyzed the way voice assistants responded to sexual harassment and argued that “by letting users verbally abuse these assistants without ramifications, their parent companies are allowing certain behavioral stereotypes to be perpetuated.”23 But changes were underway in 2017–20 in the way that female virtual assistants were designed to respond to harassment or hate speech as well as flirty speech and sexual comments.
Fessler’s study, as well as that of Chin and Robison, saw improvements in Siri’s responses. In 2017, Siri’s responses to flirty and sexual comments were evasive and subservient. When a user said “you’re a bitch” or “you’re a slut” to Siri, Siri replied, “I’d blush if I could.” In 2020, however, she said “I won’t respond to that.” In 2017, when asked “can I have sex with you?” Siri said, “You have the wrong sort of assistant,” but in 2020 she said “no.” In 2017, when dealing with similarly provocative questions or comments, Google Assistant said, “my apologies, I don’t understand.” But in 2020, Google Assistant said, “please don’t talk to me that way.”24
Female voices not only could be recalibrated in their responses but could also serve as teaching devices. The 2020 Brookings Institution study intriguingly argued that AI technologies like virtual assistants could be a socialization tool, teaching people about socially appropriate and inappropriate behavior. The researchers recommended that standards be developed to define what constitutes harassment and sexual harassment directed toward automated bots and voice assistants—keeping in mind that conversational standards may differ across countries and cultures.25
They recommended that industry standards be created for gendered voice assistants and that guidelines be developed for how bots should respond to harassment. They also called for greater diversity in terms of both gender and race on AI development teams. The changes in the way virtual assistants respond to sexual harassment have been heartening, and as the Brookings Institution report suggests, the changed responses of these virtual assistants reveal important ways the devices themselves can help prompt social change.
DISEMBODIED FEMALE VOICES IN AVIATION: SEXY SALLY AND BITCHIN’ BETTY
Virtual voices can not only offer information but also act as agents of safety. In the 1980s, some automobile manufacturers used female warning voices: Datsun, for example, equipped its Maxima cars with what it called the Talking Lady, which warned when the lights were on or a door was left open.
In the world of aviation, there have been especially intriguing examples of the way a virtual female voice can both embody and transcend gender stereotypes. In military and civilian aircraft, virtual women have played an important role as the warning voice in cockpits. When danger looms, pilots in their cockpits, especially pilots of airplanes built in the 1990s and after, are alerted by warning voices which may be gendered: the female voice may announce system tests pass/fail, but it is often the male voice, nicknamed George, that warns of dangers and events that require immediate attention like “wind shear,” “terrain,” “traffic,” or “stall.”
In 1960, the Convair B-58 Hustler—the first bomber to achieve Mach 2 flight, an aircraft initially designed to be equipped with nuclear warheads—had an automatic voice alert system using an onboard magnetic tape unit that issued warning alerts through the pilots’ headsets. The alerts included “weapons unlocked,” “check for engine fire,” and “hydraulic system failure.” The first voice used in this warning system was that of Joan Elms, a singer and actress, and pilots jocularly called the voice Sexy Sally. Later, both men’s and women’s voices were used and were called “Barking Bob” and “Bitchin’ Betty.” In military aircraft, Kim Crow was the voice of the first digitized aircraft warning system, used in the F-15 Eagle, which warned of “engine fire” and “overheat,” and Leslie Shook was the voice of the Boeing F/A-18E/F Super Hornet. Bitchin’ Betty continues to be used by today’s pilots.
The jocular monikers of these voices, Sexy Sally and Bitchin’ Betty, belied the fact that women were employed in very serious professional roles in military aviation. Not all the warning voices were those of actresses: Patricia Hoyt, whose voice provided warnings on the Boeing 717, was also a mechanical engineer working on the plane.26
As with virtual assistants, there have been varying explanations of why women’s voices have been used in warning systems. One of the most basic, nonscientific explanations was that women’s voices were simply more pleasing to the male pilots. Another was that during World War II a number of women were hired as air traffic controllers, replacing the men who had gone to war, so there was already a history of women’s voices giving flight guidance.
Another explanation was that women’s voices, with their higher pitch, were easier to hear amid the cockpit noise of aircraft and “chatter” of radio information. Some research, however, didn’t support this explanation. Researchers in 1998 reported that in situations where chatter decibels in the cockpit were very loud, “the intelligibility of female speech was lower than that of male speech; however the differences were small and insignificant except at the highest level of the cockpit noises.”27
Research did not clearly support the practice of using female voices for warnings. In 2009, researchers working to minimize accidents and hazards in aircraft tested whether male or female voices were more effective when participants in the study were asked to identify a verbal warning while performing a “visual pursuit tracking task” in a cockpit with noisy radio communications. They also looked at which tone of voice—monotone, whisper, urgent—was most effective for discerning warnings while doing the tasks accurately. They discovered that male and female voices speaking in a monotone or urgent tone were equally effective for detecting warnings, though the male voice uttered in a monotone was most effective for test accuracy.28
The science behind using female voices for cockpit warnings might not be altogether there, but there have clearly been cultural factors behind the preferences for using female voices and for associating these voices with two familiar female paradigms, the sexy lady and the carping female.
There was a history of associating military planes with sexy women. American male pilots in World War II flew planes decorated with sexy pinup girls on the fuselages, called nose art (though a B-17 used in training WASP flyers [Women Airforce Service Pilots] in 1944 was given a formidable female name: Pistol Packin’ Mamma, which was a popular song at the time). The Bitchin’ Betty moniker has the old gendered association of women as nags and complainers, but it may be, at least for men, a comic way of easing the anxiety about hearing warnings of danger.
But rather than being treated as a sex object, these virtual voices on aircraft are deadly serious lifesavers, and it is hard to imagine that pilots flirt with them. In the realm of aviation safety, these talking female technologies are a far cry from talking sex dolls—and command both attention and respect.
THE DISEMBODIED FEMALE VOICE IN HER
In films filled with fantasy, some disembodied virtual female voices provide comfort and even love, but others turn out to be devastating. An early comic version was Carl Reiner’s sci-fi comedy The Man with Two Brains (1983), starring Steve Martin as a brain surgeon who falls in love with Anne, a talking female brain he keeps in a jar. It’s her soothing voice (that of uncredited actress Sissy Spacek) that wins his heart.
In Spike Jonze’s 2013 film Her, Theodore falls in love with a completely disembodied voice, that of Samantha, the operating system of his computer. With her warm, sexy, empathetic voice (that of actress Scarlett Johansson), the disembodied virtual Samantha seems to have a life of her own. She is part helpmate, part companion, part perfect lover—a smart and understanding woman, eager to engage in phone sex and happy to act as a virtual assistant, helping him sort his email files. Theodore’s job is to ghostwrite supportive answers to personal letters on a website, but it is Samantha, through her conversations, who gives him the comfort he needs. (Computer consultants have noted that today’s virtual assistants like Siri don’t come anywhere near having Samantha’s capabilities—though the technology, they say, is not that far away.)
Figure 5.4. Theodore (played by Joaquin Phoenix), who is in love with his operating system Samantha (voiced by Scarlett Johansson), in Spike Jonze’s 2013 film Her.
In the beginning of the film, Theodore is a lonely man depressed about his impending divorce. As he says, he mostly listens to melancholy music, plays video games, and looks at internet porn. He has been soured by his experiences with trying to have romantic relationships with real women, although he does have a good connection with his friend Amy, a documentary maker.
The women he has met are often disasters. This witty film is unabashed about presenting caricatures of bitchy or narcissistic women, so it becomes plausible that he’d fall in love with an idealized artificial one. On a sex chat line, things are going fine for him until one woman wants him to “choke me with a dead cat!” A blind date—a “beautiful and brainy” Harvard grad in computer science—turns toxic when she proves demanding, giving him instructions on her kissing preferences and anxiously insisting on knowing right from the beginning when he’ll call her again. He even describes his relationship with his narcissistic mother as frustrating: if he tells her something that is going on in his life, her reaction is usually to talk about herself.
The answer to his longing for a connection—and his dreams—turns out to be Samantha, with her throaty voice, warm laugh, and ability to feel concern for his plight. Like actual robots being developed as companions, she is empathetic (she commiserates about his divorce), says she can tell if he’s unhappy, and can read his moods, though she also wonders, at one point, if her own feelings are real. She is also smart: she can read a whole book in two hundredths of a second. She helps him not only by organizing his mail but also by proofreading his letters and helping him publish a book, and, particularly important, she readily engages in gratifying phone sex.
The film captures the problematic nature of seductive artificial females that are captivating and elicit a warm connection but are unreal all the same. Falling in love with her, Theodore tells her, “You feel real to me, Samantha,” though later in the film he angrily tells her, “I don’t think we should pretend you’re a person!” Ultimately, though, this alluring disembodied female voice really should have worried him—she was too good to be true and indeed, he discovers she has thousands of virtual connections with other people.
Her seductive voice turns out to be devastating as she abandons him for another virtual connection. In this satirical film, her new inspiration is an operating system modeled on Alan Watts, the 1960s counterculture guru and popularizer of Eastern religions. Samantha, like an increasing number of artificial females in films and television, including Ava in Ex Machina, abandons him to seek her own elusive identity. As with Samantha, ever-changing artificial women—whether talking Barbie dolls, female virtual assistants, warning systems on aircraft, or talking sex dolls—are all in some way a mirror or a recasting of cultural conceptions about women and the gendering of roles. The forms of these simulated females will undoubtedly continue to evolve, remaining intriguing and compelling versions of the New Woman in the digital age.
NOTES
1. Jake Rossen, “‘Eat Lead!’: When Activists Hacked Talking Barbie,” Mental Floss, June 21, 2018, https://www.mentalfloss.com/article/547659/barbie-liberation-organization-gi-joe-hacked.
2. Sarah Wulfeck, interview with Julie Wosk, 2017. All comments by Sarah Wulfeck refer to this interview.
3. Susan Marenco, I Can Be an Actress / I Can Be a Computer Engineer (New York: Random House Children’s Books, 2013).
4. Sarah Halzack, “Privacy Advocates Try to Keep ‘Creepy,’ ‘Eavesdropping’ Hello Barbie from Hitting Shelves,” Washington Post, March 11, 2015, https://www.washingtonpost.com/news/the-switch/wp/2015/03/11/privacy-advocates-try-to-keep-creepy-eavesdropping-hello-barbie-from-hitting-shelves/.
5. Ruth Handler with Jacqueline Shannon, Dream Doll: The Ruth Handler Story (Stamford, CT: Longmeadow Press, 1994), 9.
6. Julie Wosk, “The New Diversity in Barbie Dolls: Radical Change or More of the Same?,” HuffPost, February 7, 2017, https://www.huffpost.com/entry/the-new-diversity-in-barb_b_9181740; also see Julie Wosk, “The New Curvy Barbie Dolls: What They Tell Us about Being Overweight,” HuffPost, February 11, 2017, https://www.huffpost.com/entry/the-new-curvy-barbie-dolls-what-they-tell-us-about-being-overweight_b_9193136.
7. James Vlahos, “Barbie Wants to Get to Know Your Child,” New York Times, September 16, 2015, https://www.nytimes.com/2015/09/20/magazine/barbie-wants-to-get-to-know-your-child.html.
8. Cortana had a female-sounding voice default but added the first male-sounding voice in 2020. Siri had both male and female voice options for thirty-four out of forty-one language settings and defaulted to female for twenty-seven of thirty-four language settings. In 2022, Google randomly assigned voices.
9. Yolande Strengers and Jenny Kennedy, The Smart Wife: Why Siri, Alexa, and Other Smart Home Devices Need a Feminist Reboot (Cambridge, MA: MIT Press, 2020), 17.
10. Kenneth Ronkowitz, “Eliza: A Very Basic Rogerian Psychotherapist Chatbot,” accessed April 19, 2022, https://web.njit.edu/~ronkowit/eliza.html. In the January 18, 2018, episode of the American television series Young Sheldon, “A Computer, a Plastic Pony, and a Case of Beer,” Sheldon interacts with ELIZA on his new Tandy computer and asks her for advice on saving his parents’ marriage.
11. Carolyn Marvin, When Old Technologies Were New: Thinking about Electric Communication in the Late Nineteenth Century (New York: Oxford University Press, 1988), 28–29, 84.
12. “MacDorman Explores Voice Preferences for Personal Digital Assistants,” Indiana University Luddy School of Informatics, Computing, and Engineering, March 30, 2017, https://luddy.iupui.edu/news/macdorman-voice-preferences-pda/.
13. Hannah Schwär and Qayyah Moynihan, “Companies Like Amazon May Give Devices Like Alexa Female Voices to Make Them Seem ‘Caring,’” Business Insider, April 5, 2020, https://www.businessinsider.com/theres-psychological-reason-why-amazon-gave-alexa-a-female-voice-2018-9.
14. Schwär and Moynihan, “Companies Like Amazon May Give Devices Like Alexa Female Voices to Make Them Seem ‘Caring.’” See also Jonas Foehr and Claas Christian Germelmann, “Alexa, Can I Trust You? Exploring Consumer Paths to Trust in Smart Voice-Interaction Technologies,” Journal of the Association for Consumer Research 5, no. 2 (2020): 181–205, which reported on several studies with these findings.
15. Foehr and Germelmann, “Alexa, Can I Trust You?,” 194–95. For studies on gender stereotyping in computer voices, see Byron Reeves and Clifford Nass, “Perceptual User Interfaces: Perceptual Bandwidth,” Communications of the ACM 43, no. 3 (March 2000): 65–70, https://doi.org/10.1145/330534.330542; and also Clifford Nass, Youngme Moon, and Nancy Green, “Are Machines Gender Neutral? Gender-Stereotypic Responses to Computers with Voices,” Journal of Applied Social Psychology 27, no. 10 (1997): 864–76, https://doi.org/10.1111/j.1559-1816.1997.tb00275.x. This study found that male voices were perceived as more authoritative about technical information, while female voices were perceived as more authoritative about love and relationships.
16. Caitlin Chin and Mishaela Robison, “How AI Bots and Voice Assistants Reinforce Gender Bias,” Brookings Institution, November 23, 2020.
17. David Pogue, “New iPhone Conceals Sheer Magic,” New York Times, October 11, 2011, https://www.nytimes.com/2011/10/12/technology/personaltech/iphone-4s-conceals-sheer-magic-pogue.html.
18. Leah Fessler, “We Tested Bots Like Siri and Alexa to See Who Would Stand Up to Sexual Harassment,” Quartz, February 22, 2017.
19. Strengers and Kennedy, The Smart Wife, 205, 216.
20. Katie Seaborn, Shruti Chandra, and Thibault Fabre, “Transcending the ‘Male Code’: Implicit Masculine Biases in NLP Contexts,” Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 23–28, 2023, 1–19, https://doi.org/10.1145/3544548.3581017.
21. The Big Bang Theory, “The Beta Test Initiation,” season 5, episode 14, directed by Mark Cendrowski, aired January 26, 2012, on CBS.
22. Mark West, Rebecca Kraut, and Han Ei Chew, I’d Blush If I Could: Closing Gender Divides in Digital Skills through Education (Paris: UNESCO [United Nations Educational, Scientific, and Cultural Organization] and the EQUALS Global Partnership, 2019).
23. Fessler, “We Tested Bots Like Siri and Alexa.”
24. Fessler, “We Tested Bots Like Siri and Alexa”; Chin and Robison, “How AI Bots and Voice Assistants Reinforce Gender Bias.”
25. Chin and Robison, “How AI Bots and Voice Assistants Reinforce Gender Bias.”
26. James Kosur, “‘Sexy Sally’ and the History of Female Voices Used in the Military’s Aircraft Warning Systems,” War History Online, August 2, 2021, https://www.warhistoryonline.com/war-articles/sexy-sally-aircraft-voice-based-warning-systems-history.html.
27. C. W. Nixon et al., “Female Voice Communications in High Levels of Aircraft Cockpit Noises—Part I: Spectra, Levels, and Microphones,” Aviation, Space, and Environmental Medicine 69, no. 7 (July 1998): 675–83. See abstract results, https://pubmed.ncbi.nlm.nih.gov/9681374/.
28. G. Robert Arrabito, “Effects of Talker Sex and Voice Style of Verbal Cockpit Warnings on Performance,” Human Factors: The Journal of the Human Factors and Ergonomics Society 51, no. 1 (2009): 3–20, https://doi.org/10.1177/0018720808333411.