
Is AI Sorry It Took Your Job?
As algorithmic systems increasingly dictate the rhythms of our reality, artist and data scientist Angie Waller delves into the artistic potential of these digital frameworks to shape narrative and form. Through her book-publishing practice, Unknown Unknowns, Waller reflects on the mechanics of surveillance and data analysis and challenges the corporate myths they operate within.
In conversation with Kris Paulsen, associate professor in art history and film studies at Ohio State University, Waller describes her process of bringing computational methods and printed matter together, examining how she combines humor and formal experimentation to draw out the broader human realities of tech and make visible the unseen forces of digital capitalism and authoritarian automation.
Kris Paulsen
I first recall seeing your work in the early 2000s, when you were doing research around the algorithmic operations of Amazon.com. Data Mining the Amazon is an artist’s book that maps out how Amazon understands each of us by our purchases and how we become typed and known by how similar our purchases are to those of other users.
At the time, I was struck by how funny the connections in the book were; in retrospect, I am awed by its prescience. This kind of data surveillance wasn’t talked about much yet. Can you tell me about this project, how you became interested in consumer data collection, and how this affected your art practice?
Angie Waller
I started that work when I was in grad school, where I got in based on my video portfolio. But I was already set on making work about Amazon’s recommendations. I didn’t know what form the research would take. It was somewhat unusual in the UCLA art department to be sitting in a raw warehouse space, surfing the web, and recording everything in Excel sheets. One of my advisors was the performance artist Chris Burden. That presented a particular challenge because he refused to go on the internet; he said he didn’t trust it. During my early studio visits with him, I had to start by explaining what amazon.com even was. My project was already wonky enough for people who were familiar with the website; having to explain its existence before explaining my project felt extra tedious. I was relieved when we got over that hump.
Every part of that project was just following [my] intuition. I didn’t start with any grand ideas about making a book about how we’re profiled in databases or anything like that. I wanted to get down to some sort of answer [to questions] like: How are the music tastes of conservatives and liberals different? The project changed my trajectory as an artist. It took me a while to feel like I could make that kind of work and fit in anywhere.
KP
One of the things that is so compelling and funny about the book is the way it serves as a historical artifact of this naive honeymoon period with surveillance capitalism. It’s dark, but it is also so funny. Rereading it, I laughed out loud because of how absurd and uproarious the connections are. For example, the idea that people who buy Hitler’s Mein Kampf are also likely to buy the Shrek soundtrack. You know, there is something hilarious about secret connections between media objects, and about the way the data-driven aspect of the connection makes them seem significant or true.
What role does humor play in your work? Or how is being funny important to your working practice? When I zoom out and look at your whole body of work, it’s almost a methodology: you find something absurd and use it to part a curtain on some aspect of technological or political culture.
AW
Humor has always been a survival instinct for me. It’s how I have learned to cope with stressful things in life.
I first discovered humor could be a powerful tool in my art during my time in the video program at the School of the Art Institute of Chicago. In our video-editing class, instead of shooting new footage, we were assigned to work with existing material. I gravitated toward familiar territory—training videos from my previous minimum-wage mall jobs. I remixed these videos about “probing customers” and flipping burgers into a new piece about taking over the world. It was silly, but it got the whole theater laughing. The feedback felt great. I was hooked.
KP
There are obvious relationships to other artists working around the same time you were making Data Mining, like Mark Lombardi, who mapped the networks of power that operate invisibly in our political world. That work is revelatory, but it is also too much information to consume. There’s research there, but it’s also very clearly “art” in its hand-drawn aesthetic.
But your work, I think, sits more ambiguously in the category of art. It can seem like journalism or data analysis. Your deadpan humor has an effect on the way we get that content.
AW
A key difference between my work and that of someone like Lombardi is that people saw themselves in my music recommendation charts. Unlike Lombardi’s work, which examines systems of power and control, mine is more like a mirror of our culture, where people can see themselves and question what’s being assumed about them. Sometimes these assumptions made perfect sense, and sometimes they made no sense at all. Was Amazon shaping us as consumers, or were we shaping the data? It’s a self-perpetuating cycle.

KP
Yes! We see ourselves in the work, but we are also in on the joke. One is not simply the butt of the joke but getting the joke. This is an important mode of sharing information in an early moment of data surveillance. The humor gives us a little distance. One doesn’t simply consent or concede to it; being able to laugh at it gives one a sense of power. If one can grasp it, maybe one can alter it, too.
Your humor has the effect of making me feel empowered in relation to these things. If I can see them as ridiculous, if I can laugh at them or see the sham, then the scaffold that holds up these looming structures begins to seem fragile. There’s something about being able to laugh at these colossal figures that allows me to imagine how they could be taken down, too, which doesn’t usually feel possible.
So if Data Mining the Amazon made visible the connections between an interest in Ronald Reagan and an interest in Sarah Brightman, your book Grifting the Amazon exposes this other world of self-publishing, of gig work, and of fraud that now seems a more accurate description of how most of us see the network culture around us—not just making links or seeing who we are as consumers, but creating a consumer world that is deeply depraved.
Can you talk about that book and how it folds back on this early Amazon work, and the distance between those two moments?

AW
I stumbled upon the world of clickbait books and ghostwriters on Amazon almost by accident, looking up the technical specifications for getting a book on Kindle. When I looked deeper into this clickbait practice, I was struck by how many people were celebrating their profits from hastily assembled digital and print-on-demand books without considering the labor behind them. The concept of “passive income” is really just paying someone in India $10 to write a book based on some quick internet research.
The self-publishing landscape also has darker corners that I did not explore. About a year after publishing my book, I learned that extremist and white supremacist literature was being distributed through these same channels.
Working on this project ultimately led me to stop using Amazon for production or distribution of my books. Though some of my work still appears on Amazon, I try to distance myself from its ecosystem. I continue using other print-on-demand services, but I work with vendors that have more humane labor practices. It is reassuring when I can talk to the people producing my books on the phone, and I know they don’t work around the clock.

KP
One of the things that, again, seems prescient about that book is how you expose the seductive economy at work. It’s not just a get-rich-quick scheme; it shows how information, and its relationship to truth or research, is being rapidly devalued. Someone who’s interested in a certain kind of pet can buy a book that seems to be written by an expert, but what you expose is that it was probably written by someone in under 24 hours for a few hundred dollars.
While Grifting the Amazon looks at this process mostly from the perspective of producers, it’s easy to see how this affects us as consumers, when gatekeeping systems like those of publishing get broken open and one can no longer necessarily trust the information in a book one has paid for.
AW
Sometimes I’m not sure if my artist signature is distinct enough to differentiate my books from someone publishing a guide for capybara care. When I have published work created with ghostwriters or automated text, I always disclose this in the description so readers know exactly what they’re getting.
I suppose I should be less self-deprecating, but perhaps this illustrates what you’re saying about our shared confusion. When I hired ghostwriters for three or four books, I tried to make the process as straightforward as possible. They quoted their price, and I accepted it. I treated the transaction like automatic writing, as if they were truly ghosts sending me texts.
KP
I’ve been framing your work as conceptual, research-based art objects, but they’re more complicated than that. You’re a deadpan art provocateur with work that’s distributed by places like Video Data Bank and Printed Matter, but you’re also a data scientist who produces investigative journalism for organizations like The Markup and Digital Witness Lab.
How does your journalistic, investigative, and political work relate to what you do as an artist? Or do you see them as separate? Are there moments where they bleed together?
AW
Working at The Markup was the first time I saw my work out in the world having a measurable impact. Members of Congress wrote letters to Meta about our findings, and class-action lawsuits were filed against Meta for collecting financial and health data it wasn’t supposed to collect.
A lot of times my research and art-making approaches bleed together. For instance, while researching Facebook for academic work, I noticed the alt text that gets generated automatically by the platform and its strange robotic cadence. Pairing this up with pictures from Mark Zuckerberg’s feed became its own abstract book project. Similarly, when I read reporting about Facebook’s leaked content-moderation training documents, I drew on my experience creating diagrams and infographics to illustrate the absurdity and contradictions of its policies, which turned out to be of interest to researchers and art audiences.
KP
If the work you do is framed as journalism, as you said, the goal is to change people’s behaviors or to change their minds. When similar information takes shape as an artwork, what do you expect viewers to come away with?
AW
I gained a new perspective participating in events like the Printed Matter New York Art Book Fair, where I meet tech workers who feel frustrated with their industry’s direction; they seem particularly drawn to my work. While they may not agree with what they’re doing for a company, they still need to pay their bills. Seeing the issues they face critiqued with humor seems empowering. I try to keep my approach playful. Rather than lecturing, the humor opens up conversations. I try to help people feel less overwhelmed by all the negativity. I also hope it inspires people to create their own critiques and projects.
KP
From early on, your work has been setting the stage for our contemporary moment and our encounters with artificial intelligence in our daily lives—whether we are the subjects of data collection for targeted advertisements, as we glimpse in Data Mining the Amazon, or casual users of large language models [LLMs], or creative and skilled people being put out of work by new forms of automation. One example is machine-learning models, which you’ve been experimenting with since at least 2018.
Can you talk about the way that you’ve been working with various forms of machine learning, computer vision, training data, and LLMs? You don’t seem focused on their potentials but on their guardrails—on the ways we come up against their defined limitations. How has AI affected your practice, formally and conceptually, as well as the way you’re thinking about culture?
AW
What interests me more is finding the holes in AI marketing and hype. We’re currently facing a reality where AI is being touted as a tool that can replace thousands of workers and make our entire government more efficient. Anyone who has followed the shortcomings and biases of AI knows we are in extreme danger.
That’s why I believe critiquing these systems and helping people be better informed are more important than ever. Poking fun at things is just how I cope. Making these critiques entertaining can reach new people and be valuable.
KP
Let’s back up and talk about a few of your AI books, like I’m Sorry AI Took Your Job. This one is very funny in part because of the way in which the software predicts that these conversations will go really well. In every one, the worker quickly accepts their place in this new world order. Historically, if we look back at these pinch points around rapid automation, that’s not how it turned out.
Twin Sister Discovered is a choose-your-own-adventure collaboration with your son. The two of you are always trying to make the story interesting, with unusual plot twists or dramatic plot developments, but ChatGPT cuts you off to steer you toward banality. It’s constantly hemming you in so that no one ever breaks a law or does anything “inappropriate.”
When you hit one of these guardrails, you are advanced to a part of the story with an explanation from ChatGPT saying why, for example, it won’t let the characters find money on the ground and then use it. Then you are sent back into the middle of the story, only to be returned to the ChatGPT disclaimer when anything interesting happens. When you touch these guardrails, it’s funny, but we also get a glimpse of the outcome of this new era of automated writing: whether for your HR administrator or for Hollywood, there is a banality and an expectation of nothing new ever happening.
AW
There’s a great academic paper where researchers analyzed television scripts en masse to see where AI guardrails would cut them off. And as you can imagine, just about all TV shows get safety and content warnings.
In the book I did with my kid, which did not have the same academic rigor, we tried a lot of shady afterschool special scenarios. We had the characters take a ride with a strange man in a van—and, surprisingly, that was approved. It was fun to test scenarios that any parent would warn against. The AI would practically say, “Sure, go with this stranger!” and “Have fun at the gun store!” But more benign things, like the characters getting their ears pierced, were not allowed until they were presented with a “Piercing Safety Certificate” by the person performing the piercing.
The challenge with AI-generated texts, even when you’re critiquing them, is that they’re so long and boring to read and only interesting to the person who created them. So I’ve been struggling to find ways to make these texts engaging—by placing the emphasis not on the bland writing itself but, rather, on the system behind it.
KP
Being forced to go backward, unwinding an exciting plot point, is a very effective illustration of those guardrails. It also reminds me of your book Reading Like a Computer, in which you examine the moderation rules for Facebook. In many cases, what is not acceptable under any circumstances differs only slightly from incendiary things that are totally acceptable. It’s often the grammatical structure of the statement that makes the difference. And so there’s a kind of arbitrariness in these two systems meant to protect users: ChatGPT hems us into this banality, while Facebook lets almost anything go as long as it passes a syntactical test. Or it used to . . .
AW
Yeah, it’s changed.
KP
It’s all moot now with Facebook’s new agenda, but there used to be this rhetoric that it was protecting people from violence and hate speech but, really, it was just imposing limits on syntactical arrangements of words. So one system allows extremism to move fluidly, and another one brings us into this benign place where we’re all walking away satisfied and coddled by this happy world of automation.
Given the way your work centers so much on digital culture, can you talk about the role the object form plays in your practice? This could seem anachronistic, but it also seems to be a vital and important part of the way you think and work.

AW
Over time, I’ve grown to embrace the book format because it allows for both didactic and experimental approaches. I like that flexibility. Also, having a physical artifact is valuable because technologies change constantly. Online projects often break within months or become inaccessible after a year or so.
Beyond archiving, the actual form of the book itself is important to me. For instance, the choose-your-own-adventure format effectively demonstrates the journey with AI versus human writers: having to flip back to the start, repeatedly hitting guardrails, and starting over. I have made lenticular prints to emulate A/B testing and even volvelles to represent if-then algorithms.
Interaction might be something we associate with online environments, but we don’t actually have a lot of freedom there. Online, every choice you make is being “watched,” collected to gauge your engagement or to predict—or direct—your future moves. The context of endless distraction and attention-mongering doesn’t make us aware of these structures and forms while we’re engaging with them. And so the book becomes an interesting analog for the kinds of experiences we unthinkingly face and are affected by every day when we go online.▪︎

Angie Waller is an artist based in Queens, New York. Her publications, released through her imprint Unknown Unknowns and elsewhere, explore the influence of automated technologies on language and culture. She reverse engineers everyday systems using programming, text analysis, and diagrams to reveal bizarre and disconcerting phenomena. Her books are included in the collections of the Museum of Modern Art Library, the Whitney Museum Library, and the ZKM Center for Art and Media Karlsruhe. Most recently, her work has been exhibited at Somerset House in London; Fotomuseum Winterthur in Switzerland; and RMIT Gallery in Melbourne, Australia. Waller holds a master’s degree in fine art from UCLA and a master’s in computational linguistics from The Graduate Center, CUNY.
Kris Paulsen, associate professor in the Department of History of Art and Film Studies Program at Ohio State University, is a specialist in contemporary art, with a focus on time-based and computational media. Her first book, Here/There: Telepresence, Touch, and Art at the Interface (MIT Press, 2017), received the 2018 Anne Friedberg Award for Innovative Scholarship from the Society for Cinema and Media Studies (SCMS). Her work traces the intersections of art and engineering, with a particular emphasis on telepresence, virtuality, and artificial intelligence. Her current book project, Future Artifacts, examines how contemporary artists strategically deploy the emergent