The models of the human “self” offered within AI commit to white male superiority and individualism.
“I hope activists will see the value in refusing AI and the institutional networks that sustain it.”
In this series, we ask acclaimed authors to answer five questions about their book. This week’s featured author is Yarden Katz. Katz is a fellow in the Department of Systems Biology at Harvard Medical School. His book is Artificial Whiteness: Politics and Ideology in Artificial Intelligence.
Roberto Sirvent: How can your book help BAR readers understand the current political and social climate?
Yarden Katz: We’re flooded with propaganda about “Artificial Intelligence” (AI). We are told AI is transforming every arena – from education to the law and the entire economy. There’s massive institutional investment, financial and otherwise, in this idea that AI is a powerful technological force reshaping the world. Artificial Whiteness tries to understand why institutions and professional experts are so invested in this idea – and identify the political projects that “AI” is a vehicle for. These are the political projects of white supremacy and racial capitalism that readers of BAR are all too familiar with.
The book is also a story about the professional expert class. It’s a case study of how universities, the corporate media, and non-profits can quickly galvanize around a misguided premise – that AI is suddenly changing the world and hence “We” must manage it – as this premise aligns perfectly with capitalist and imperialist interests. Yet this premise is often served with a “progressive” framing, co-opted from social justice movements. From there, more propaganda and violent initiatives follow.
What do you hope activists and community organizers will take away from reading your book?
I should emphasize that this book wouldn’t be possible without the work of activists and organizers. I learn so much from them. My hope is that the book will provide activists and organizers with additional arguments and historical context that may be useful for struggling against white supremacy – a struggle that inevitably faces the world of computing and science that AI exemplifies. I hope activists will see the value in refusing AI and the institutional networks that sustain it.
We know readers will learn a lot from your book, but what do you hope readers will un-learn? In other words, is there a particular ideology you’re hoping to dismantle?
One idea to be unlearned: that AI is a powerful, scientific project meant to reproduce “human intelligence” in machines. It’s not about that. AI should instead be seen as a flexible conceptual “technology” that serves dangerous political projects, including neoliberal restructuring of labor and education, initiatives to expand the prison-industrial complex, and a slew of global projects that dispossess communities and convert land into real estate. I describe in the book how the AI sphere advances these harmful projects.
AI is invoked because it’s supposedly so powerful, and this notion is based on a series of lies about what human beings are like and what constitutes knowledge. These lies should be unlearned. Generally, the models of the human “self” offered within AI commit to white male superiority and individualism. The notion of “intelligence” implied by these models privileges abstract puzzle-solving skills, and the solving of narrow tasks that may profit the likes of Google, over just about every other aspect of human life.
There are other forgeries to be unlearned, like the idea that AI is autonomous – or that computers “make decisions” that we’re somehow bound to. Such myths enter discussions of incarceration where experts argue that this or that system for so-called algorithmic sentencing is less “biased” than people. These myths also underlie claims, by various academics and “national security” pundits, that warfare is nearly “automated” and therefore beyond political control. All these discourses reaffirm existing oppressive arrangements by drawing our attention to the innards of computing systems rather than to the violent institutions that make the use of computers – for, say, sentencing people to life in cages – possible. The AI expert industry pulls off this maneuver by promoting misguided notions like “algorithmic accountability,” “algorithmic justice,” or “ethical AI,” served with appeals to social justice. This maneuver leaves the experts room to shape and oversee a variety of political arenas because of their expertise in “AI.” I hope activists will reject these notions despite the progressive veneer.
Who are the intellectual heroes that inspire your work?
There are many. I owe an unpayable debt to intellectuals and activists who write about the Black radical tradition, including Gerald Horne, Robin Kelley, George Lipsitz, Toni Morrison, and Cedric Robinson. These aren’t the typical anchors for a book dealing with computing and cognition, but this work is foundational for me. It has helped me see through the fog that scholars of science and technology sometimes create. I am also deeply inspired by the brilliant work of Ariella Aïsha Azoulay, Silvia Federici, Jemima Pierre, Britt Rusert, Audra Simpson, and K. Wayne Yang. And like all these writers, I’m moved by what communities do to survive, resist, and subvert the outrageous systems we live under.
In what way does your book help us imagine new worlds?
It points to ideologies and institutional logics that need to be unlearned. Here I join others in arguing that a refusal can be productive – that it can lead to better places. In this case, that means a refusal of AI, as part of a broader refusal of white supremacy and its attendant institutions. Instead of trying to “fix” AI, one can deflate its artifice and join ongoing struggles against all the harmful projects that the AI industry promotes. Perhaps we can then imagine ways to dissolve this industry’s projects and redirect the resources towards repairing a damaged world.
Of course, unlearning is essential but it’s not enough. There’s also a need for different ways of being and working together, which books alone don’t provide. I look to the practices around music, and to the spaces created by activists – what Barbara Tomlinson and George Lipsitz call “insubordinate spaces” – for those insights.
Roberto Sirvent is Professor of Political and Social Ethics at Hope International University in Fullerton, CA, and an Affiliate Scholar at Yale University’s Interdisciplinary Center for Bioethics, where he directs the Race, Bioethics, and Public Health Project. He is co-author, with fellow BAR contributor Danny Haiphong, of the book, American Exceptionalism and American Innocence: A People’s History of Fake News—From the Revolutionary War to the War on Terror.