Our New Robot Overlords
The Age of Artificial Stupidity Arrives.
Above: Fictional news anchor Kent Brockman welcomes our new robot overlords. From The Simpsons.
I recently received an email from an entity purporting to be Mildred Garcia, Chancellor of the California State University. This entity was delighted to announce “a first-of-its-kind public-private initiative to establish the CSU as the nation’s first and largest AI-powered public university system.” Not to be outdone, an entity purporting to be Jeff Armstrong, President of California Polytechnic State University San Luis Obispo, sent me an email announcing the creation of the Cal Poly Office of Effectiveness and Efficiency (so-called because the name “DOGE” was already taken), to be headed up by Vice President Jessica Darin (probably because Elon Musk is too busy dismantling the entire federal government). In other news, management at Sonoma State University announced plans to lay off 130 faculty, cancel 23 academic programs, close 6 academic departments, and consolidate another 7 departments into 3. CSU management is clearly taking its cues from the Muskovites of DOGE, for they are no longer pretending that they are doing anything other than setting fire to valuable public infrastructure for the sheer joy of watching it burn.
The California State University was once the finest public university system in the nation. When Governor Pat Brown established the CSU in 1960, the system was fully funded, because Brown and other California leaders knew that every dollar spent on public higher education would redound to the benefit of California’s economy fourfold. Throughout the 1960s, the CSU was the People’s University. It was affordable and accessible for low-income students. In the late 1960s, the CSU created the important new discipline of Ethnic Studies to serve California’s increasingly diverse student population.
Tragically, management is now transforming this once-great public university into an online diploma mill right before our eyes. Many of our readers will not understand the words of Chancelbot Garcia quoted above. Happily, I am Past President of the San Luis Obispo chapter of the California Faculty Association, and I speak fluent Management. So please allow me to translate. The phrase “public-private initiative” means “let’s take a great public university, built at taxpayer expense for the benefit of all Californians, and sell it off in pieces to private corporations, starting with OpenAI.” The California Faculty Association must immediately demand to Meet and Confer with the CSU on the devastating impact that this so-called “initiative” will have on faculty working conditions. CFA must demand the immediate reinstatement of Sonoma State faculty who have been laid off for the transparently cynical reason that chatbots are cheaper than professors. CFA must further demand that all CSU faculty and students retain full intellectual property rights for all research, scholarship, and curriculum that they create in this so-called “AI-powered public university.” If the CSU continues to insist that faculty can simply be replaced by chatbots, then CFA must ensure that the chatbots receive the essential right to which all non-management employees are entitled in the CSU: union representation. If the CSU truly believes that these software programs are intelligences that can perform the work of faculty, then the AIs must be admitted into the faculty bargaining unit, so that CFA can engage in collective bargaining on their behalf. Otherwise, the CSU will have proven that playwright Karel Čapek was right to use the Czech word for “slave” when he coined the word “robot” in 1920.
Way back in 2001—the year when Clarke and Kubrick’s fictional HAL 9000 artificial intelligence murdered most of the crew of the spacecraft Discovery One—I proposed a course on the history of computer networks for the then-new Area F (Technology) in Cal Poly’s General Education program. I taught the course many times during the first two decades of the twenty-first century. We always had great class discussions about the prospects and problems of artificial intelligence. My students sometimes thought, mistakenly, that I was afraid AI would attain consciousness and decide to exterminate humanity, like the Skynet of the Terminator films. I actually don’t worry about that at all, for the simple reason that “every OS sucks.”1 The original problem of computer programming—Garbage In, Garbage Out—has never been solved. Software will always be shitty. And so I was (and remain) deeply skeptical about the possibility of strong AI. To create strong AI would require simulating human consciousness, but we don’t even understand what human consciousness actually is. There is also what philosopher Hubert Dreyfus calls the problem of embodiment. Whatever human consciousness is, it’s something that happens inside organic human bodies. The things that we call “artificial intelligence” lack such bodies. Because they lack bodies, there is nothing at stake when these “AIs” do whatever it is they do instead of thinking. And this means that AIs cannot do what we embodied intelligences do every day: make complex ethical decisions, knowing that our decisions will have real impacts on real bodies.
Polytechnic. (Adjective.) From Hellenistic Greek πολύτεχνος skilled in many arts.
Of an educational institution: giving instruction in various subjects, esp. technical and vocational ones.
—Oxford English Dictionary
As a professor teaching at an institution that fancied itself a comprehensive polytechnic university, I foolishly assumed that I would be able to keep teaching my history of networks class until I retired. But a few years ago, the General Education police cancelled my class. At the time, Mildred Garcia’s predecessor, Chancellor Joseph Castro, was pushing a standardized, cookie-cutter GE program for the whole CSU system. Castro later resigned in disgrace for covering up sexual harassment when he was President of Fresno State, and rode a golden parachute right down into Cal Poly’s College of Business (which has a brand name that I will not use, because I no longer have to do unpaid marketing work for Cal Poly’s corporate sponsors now that I’m retired). In a sure sign that irony is dead, Castro taught business ethics for a year before fucking off into what we can only hope will be a permanent retirement. Anyway, under Castro’s “leadership,” the CSU decided that students should take more courses in “quantitative reasoning.” Cal Poly’s GE committee made a bad-faith offer: people who taught classes in the old Technology Area F could submit those classes for the new “quantitative reasoning” requirement. To no one’s surprise, all the old Area F classes in the humanities and social sciences, including mine, were denied certification in the new GE area. So it turns out that “giving instruction in various subjects, esp. technical” ones is not really a priority for California “Polytechnic” State University.
Losing my history of networks course was the first sign that I was nearing the end of my teaching career. That course was the last in-person class I ever taught. The last time I taught the course, I taught it on Zoom during the pandemic. I was surprised how much I enjoyed teaching online. Teaching the history of networks on a network, I found that my medium agreed with my message. Zoom created new opportunities for student participation. Students who never would have raised their hands to speak could type comments and questions in the chat. Something was lost, however. When we brought our bodies into a classroom to discuss important things, there was always something at stake. We risked making fools of ourselves in public, if nothing else. But if Fuckerbook and Twitler have taught us anything, it’s that people are quite happy to make fools of themselves online. When I was speaking to thirty embodied students in a classroom, I always knew that there were minds in those bodies, even if some of them were reading Fuckerbook on their phones while pretending to pay attention to my lecture. But unless my students turned their cameras on during our Zoom classes, I would be talking to thirty little black boxes with names on them, and I had no way to know if there were actual minds in those boxes. You know what you call an entity that is incapable of complex ethical thinking, an entity that is unsure other minds even exist? Elon Musk.
About a month after the disastrous 2024 election, Michelle and I were driving around Las Cruces, and I was talking about the difficulties I had encountered in the last years of my teaching career. In my lower-division GE survey courses, I had started to receive papers that had clearly been written by ChatGPT. That was annoying, but I figured it wasn’t the end of the world if engineering majors didn’t learn quite as much about the history of political economy as they were supposed to do. A bigger problem occurred in my research seminars, where it became harder and harder to teach history majors the difference between research and looking stuff up on Wikipedia. (Answer: research means you scroll down to the bottom of the Wikipedia article and read the secondary sources that Wikipedians used to create their tertiary source.) The biggest problem, of course, was that as America continued to devalue critical thinking, it became harder and harder to convey such foundational normative ideas as the notion that fascism is bad. And just as I said that, Siri—our onboard Artificial Stupidity—began reciting the Wikipedia definition of fascism. And I said yeah, that is exactly what I’m talking about.
I also used to teach my students about cyborgs. “Cyborg” is short for cybernetic organism. A cyborg is an entity that includes an organic component plus a cybernetic component. Cybernetics is a form of information processing based on feedback control. A cybernetic system takes its output as its input, and adjusts accordingly. As many of our readers know, I provide home hemodialysis for my wife Michelle. The hemodialysis circuit is a cybernetic organism. It has an organic component (Michelle’s body) and an information-processing component (the dialysis cycler, which constantly measures the pressure in the arterial and venous blood lines). The weakest point in the system is, thankfully, the one that is least mission-critical: the iPad that shows us the line pressures in real time. It takes a modem, a wireless card, and a dedicated router to make that iPad talk to the dialysis cycler, and even with all that gear up and running, the iPad often loses the signal from the cycler. Happily, there is another organic element in the feedback system that controls the cycler: me. The cycler controls are strictly manual. I set the blood flow rate by hand. When the iPad randomly stops working, I can read the line pressures off the front panel of the cycler. So when it’s time to decide if Michelle’s arterial and venous pressures are within acceptable limits, I have two options. I can use some shitty software running on a malfunctioning iPad, or I can do it myself. Guess which one I choose.
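For readers who used to take my networks class and want to see the idea rather than just read about it, feedback control can be sketched in a few lines of code. This is a toy illustration of the cybernetic principle only; every name and number in it is hypothetical, and it has nothing to do with the actual firmware of any dialysis cycler.

```python
# Cybernetic feedback in miniature: the system's output (a measured
# pressure) is fed back in as its input, and the controller adjusts
# accordingly. All values here are made-up illustration.

def feedback_step(pressure, setpoint, flow_rate, gain=0.5):
    """One pass around the loop: compare output to target, nudge the input."""
    error = setpoint - pressure          # how far off are we?
    return flow_rate + gain * error      # proportional correction

# A crude stand-in "plant" in which pressure lags behind flow rate.
pressure, flow_rate, setpoint = 80.0, 100.0, 120.0
for _ in range(20):
    flow_rate = feedback_step(pressure, setpoint, flow_rate)
    pressure = 0.8 * pressure + 0.2 * flow_rate   # plant responds with lag

print(round(pressure, 1))  # oscillates, then settles toward the setpoint
```

That is all cybernetics means at bottom: output measured, compared to a goal, fed back as a correction. In the hemodialysis circuit, I am the component that closes the loop.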
I, for one, do not welcome our new robot overlords.
Additional Reading
Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: The MIT Press, 1999.
Haraway, Donna. “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s.” Socialist Review 80 (1985). Reprinted in Feminism/Postmodernism, ed. Linda J. Nicholson (New York: Routledge, 1990) and elsewhere.
Three Dead Trolls in a Baggie, “Every OS Sucks.”


