

As part of the Walker's presentation of Designs for Different Futures (on view now), we will be publishing a number of texts from the exhibition catalogue (Yale University Press, December 2019), exploring the ways in which designers create, critique, and question possible futures, big and small. The exhibition was organized by the Walker Art Center, Philadelphia Museum of Art, and the Art Institute of Chicago.
The Future of Love?:
From the Past (Steve Bannon) to the Future (Sex Robots)
Srećko Horvat
Sometimes a simple headline, by chance, reveals more about the past, present, and future than any history book or futurologist's projection ever could. This is what happened in December 2018, when a newspaper headline appeared stating “Steve Bannon Canceled from Sex Robot Conference.”1
The article reported that an international academic conference titled Love and Sex with Robots, planned to take place at the University of Montana, had been canceled following a backlash against a proposed speech by Steve Bannon. (Though he was scheduled to speak at the concurrent conference on Advances in Computer Entertainment Technology, the subject of the co-conference proved irresistible to headline writers.) Bannon, the former adviser to President Donald Trump and a political careerist, had that year—following revelations of his complicity in the Facebook-Cambridge Analytica scandal, in which Facebook data was used to create psychographic profiles of users for political targeting purposes—switched his geographic focus and was desperately trying to unite Europe’s right-wing populists.
How does this headline from 2018 bring us from the past (Steve Bannon) into the future (sex robots)? It’s because the interesting part isn’t that Bannon was banned from a conference (this happens to him quite frequently) but the subject of the concurrent conference. Its title might sound like a science-fiction event from some indeterminate future time, but this sex-robots future is already here, as Emma Zhang, one of the event’s organizers, discusses in her essay (Designs for Different Futures, pp. 50–53)—although, as William Gibson famously noted, the future that has arrived is not yet evenly distributed.
Only two months before Bannon’s banishment, in another corner of the United States, it was sex robots themselves that were banned. A Canadian company, KinkySdollS, had planned to open a “robot brothel” in Houston, and the city’s mayor moved swiftly to clear the way, presenting an ordinance that expanded the meaning of adult arcades to include “anthropomorphic devices,” or sex robots—shades of Westworld. This would have been the first sex-robot brothel in the United States.2 But after a heated debate and protests by religious groups, the city council decided to ban robot brothels. During the debate a Houston resident, Tex Christopher, quoted from the Bible: “In Ephesians 5:31, it says that a man shall leave this [sic] father and mother and shall be joined unto his wife and they shall become one. It doesn’t say that a man shall leave his mother and father and go and join a robot.”3

Already in 2015 a Campaign Against Sex Robots (CASR) had been launched to draw attention to the ways in which the idea of forming “relationships” with machines was becoming increasingly normalized.4 It warned against sex robots as “animatronic humanoid dolls with penetrable orifices where consumers are encouraged to look upon these dolls as substitutes for women (predominantly), and they are marketed as ‘companions’, ‘girlfriends’ or ‘wives.’”5 Protests by religious groups followed. Even before Houston banned sex-robot brothels, a number of Christian ethicists had responded to a report by the Foundation for Responsible Robotics exploring the possible benefits and dangers of humans having sex with robots, including robots designed to look like children, stating that such sexual activities went against God’s design.6
Warning against the rise of sex robots, Tobias Winright, a theologian at Saint Louis University, said: “As a Christian, I think non-mutual, non-consensual sexual activity is contrary to mutually donative love-making. Thus, sexual activity with a simulacrum seems to me quite a stretch from when two persons, who are made in God’s image, sexually express their love for each other, transcending and giving beyond the self with the other, and thereby imaging God who is agape.”7
The Houston sex-robot case—sexual activity with a simulacrum (the philosopher Jean Baudrillard’s term, which now appears in theological writings) and its commercialization (a new sexual business)—opens up important theological, philosophical, and political questions about the future of sex and love.

Theologically, we are confronted with rethinking and redefining the relationship between God, humans, and machines (not only sex robots but AI overall). Philosophically, we are brought to one of the oldest questions, namely, What are we humans in the first place? What if the more disturbing question is not that of sex with a machine, but that posed by Spike Jonze’s movie Her (2013), about a human falling in love with AI—not a sex robot but an operating system without a body? Or we could ask, What are machines? Aren’t they already becoming human and the human becoming machine? Is there a future where humans are dispensable, or derided and discarded by machines? Politically, what was missing in the Houston debate (and still is missing from the theological perspective) is the question, Who is in control of the AI? Or, to put it in classic Marxist terms, Who owns “the means of production”?
If the Facebook-Cambridge Analytica scandal, with the crucial role played by Steve Bannon, brought us anything, it is the idea that politics can be preprogrammed—that voters can, through “perception management,” to a certain degree be programmed to desire a specific political option (be it Trump or Brexit). The Houston sex-robot case raises another question that goes beyond the ban of “sexbot” brothels in one city, because it takes us from the future of sex to the future of love itself: Why wouldn’t love become preprogrammed?

When the British television series Black Mirror finally touched on the question of how AI will affect love—in the fourth-season episode “Hang the DJ” (2017)—it looked like a science-fiction dystopian future only to those not yet familiar with the current advances in technology (Tinder, Grindr, cyber-butlers, AI choosing our “perfect match”), which are rapidly transforming science fiction into dystopian reality. In the dystopian society of “Hang the DJ,” romantic encounters are scheduled by an AI system called Coach, which collects users’ data in order to match them with their “ultimate compatible other” and dictates which romantic relationships they will have and for how long. Let’s say you just had a beautiful romantic dinner and the chair is trembling beneath you because you are already falling in love. But you visit the restroom so you can check Coach to see whether this is your “perfect match.” You’re informed that the relationship expires in twelve hours. But don’t worry. The more relationships you have, the more data the computer gathers. The more data it gathers, the more accurate it is in predicting your perfect match.
It seems more than mere coincidence that Facebook, when the Cambridge Analytica affair hit the news, immediately announced it was launching an online dating service, called simply Dating. Unlike apps such as Tinder or Grindr, which use Facebook connections to identify potential matches, Facebook has the advantage of being able to see almost everything about its users. As Bloomberg reported, “It can track couples from their first ‘likes’ to the point at which they’re ready for engagement ring ads, and beyond.”8
The Bloomberg article ran under the headline “Facebook Is Right to Think ‘Likes’ Can Lead to Love.” Obviously, efforts to preprogram elections bear a relationship to attempts to preprogram love, and vice versa. No wonder two of the main protagonists (along with Bannon) behind developing the Cambridge Analytica model were previously involved in analyzing love rather than elections.
One is the computational social scientist Michal Kosinski, of the University of Cambridge Psychometrics Centre and Stanford University, who coauthored a research paper showing that computer-based personality judgments are more accurate than judgments made by humans. Though prominent in research on well-being, Kosinski’s work has also drawn a great deal of interest from British and American intelligence agencies and defense contractors. (Among the overtures he received was one from a private company running an intelligence project nicknamed Operation KitKat, because a correlation had been found between anti-Israeli sentiments and liking Nikes and KitKats.)9
The other is the data scientist Aleksandr Kogan, who also works in the field of positive psychology and has written papers on happiness, kindness, and love; an early paper was titled “Down the Rabbit Hole: A Unified Theory of Love.”10 The seemingly bizarre intersection of research on topics like love and kindness with defense and intelligence interests is not, in fact, particularly unusual. Much of the foundational research on personality, conformity, obedience, group polarization, and other such determinants of our social dynamics was funded during the Cold War by the US military and the CIA.
The one (but big) difference is that there was no internet then, and computational power and AI research had not yet reached the stage where the same “psychological engineering” could be used not only to influence elections (Trump, Brexit) but also to analyze and determine romantic relationships (Black Mirror’s Coach).
This brings us to the crucial political question of our time: Who will be in control of AI? Or, in other words, if the big Silicon Valley companies—which are linked to the military sector and powerful factions of governments—already own the means of production (the material and immaterial infrastructure to create dreams, desires, politics), why wouldn’t they use these same means to determine the ways (and with whom) we fall in love?
If we don’t want to end up in a dystopian future in which Silicon Valley—or China, or Steve Bannon—controls the sex robots, or even the very process of “falling in love” (as in “Hang the DJ”), we had better start thinking seriously about the future of love. ▪︎
SREĆKO HORVAT is a philosopher, author, and political activist. He is the author of Poetry from the Future (2019), Subversion! (2017), The Radicality of Love (2015), and What Does Europe Want? (with Slavoj Žižek, 2014). He has written for the Guardian, New York Times, Al Jazeera, and many other leading news media. He is the cofounder of DiEM25 (Democracy in Europe Movement).

The catalogue was produced by the Philadelphia Museum of Art, with design by the Walker Art Center. It was edited and conceived by the exhibition’s curators: Emmet Byrne, Design Director and Associate Curator of Design, Walker Art Center; Kathryn B. Hiesinger, The J. Mahlon Buck, Jr. Family Senior Curator and Michelle Millar Fisher, formerly The Louis C. Madeira IV Assistant Curator in the department of European Decorative Arts after 1700, Philadelphia Museum of Art; Maite Borjabad López-Pastor, Neville Bryan Assistant Curator of Architecture and Design, and Zoë Ryan, formerly the John H. Bryan Chair and Curator of Architecture and Design, the Art Institute of Chicago.
—
Text and compilation © 2019 Philadelphia Museum of Art, Walker Art Center, and The Art Institute of Chicago