If you don’t work at a university or have college-age kids, you may have missed the flurry of news stories and social media banter about AI adoption in higher ed, stories which have snowballed into the early fall semester. I don’t mean the fear that AI-based cheating is going to destroy education as we know it—though who knows, it still might—but the mad dash among many universities and colleges to announce AI procurements and partnerships. While there are signs the AI fever is breaking elsewhere, largely due to declining corporate adoption and cash burn rates that would strain even the most ketamine-diluted circulatory system, and despite evangelists as zealous as Sam Altman talking about a bubble, university administrators across the country are all in on AI. If you don’t work at a university or have college-age kids, maybe you don’t see much reason to care about this, but you should care, because the effort to jam AI down the throats of students and faculty is a canary in the coal mine of much broader labor struggles across professional sectors. We’ve already seen AI tools used to justify redundancies, layoffs, and forced adoption at the end of a pink slip, even as it appears those same tools may actually be a hindrance to worker productivity.
The widespread use of these tools, in both school and work, feels like it's driven by a self-fulfilling prophecy: the promise of an employment landscape only AI itself can imagine. For example, the co-founder of the asinine-sounding Alpha School, MacKenzie Price, went on Kevin Roose’s New York Times podcast recently and pulled this whopper out of thin air: “60 percent of the jobs young people will do don’t even exist yet.” Alpha School, if you aren’t familiar, is an “AI-powered” private academy that claims to “crush academics” in two hours, while the rest of the day is devoted to entrepreneurship and financial literacy. Grim shit. Sure, Price sounds like she’s peddling nonsense (she is), but the president of the San Francisco Fed, Mary Daly, recently said something similar on Bloomberg’s Odd Lots podcast. The normalization of this kind of hysterical prognosticating means that people don’t even bat an eye when the CEO of an AI start-up advises young people that AI will shortly make pursuing a degree in law or medicine a waste of time, which could be better spent meditating. While this ranks low on the scale of insane comments by tech CEOs—see literally any part of Sam Altman’s recent Bloomberg interview—there is some value to this particular kind of intellectual labor automation fantasy. Attempts to automate such education-intensive professions as law and medicine have so far been plagued with both difficulty and scandal, from hallucinated case law to doctors’ diminished ability to spot certain cancers.
By now, we’ve all seen the baroque flow-charts of reciprocal investments between the big AI players, a Gordian knot of equity stakes and suppliers becoming customers. The bubble—fraying, if not quite popping after an MIT report debunking AI-driven productivity gains—has grown to startling proportions, now accounting for 40 percent of America’s GDP growth and 80 percent of stock market gains in 2025, according to the Financial Times. This bacchanalia of spending can’t go on forever, so why on earth are university administrators making their institutions into giant bag holders?
The answer has less to do with AI itself than with the protean relation between public spending, scientific research, and private enterprise. Like every other boomer, artificial intelligence owes its success to massive public subsidization in the wake of World War II. It was once a relatively small tranche of a huge research partnership between the U.S. military, private firms like IBM and Bell, and America’s top universities. The ENIAC, the first general-purpose electronic digital computer, was funded by the Army’s Ballistic Research Laboratory and built at the University of Pennsylvania. The Harvard Mark II, famous for having the first “bug” (which was an actual bug), was built for the Navy and housed at Harvard. The first massively parallel computer, the ILLIAC IV, was funded by ARPA and designed at the University of Illinois. They were, undoubtedly, part of the military-industrial complex and their research agendas were often violent and terrifying, crucial predecessors to the military AI now on offer from companies like Palantir, but they were part of a research ecosystem. The open hostility that many tech overlords display toward the universities that trained them belies the collapse of this ecosystem. In its place, we now have a rush to either strip-mine university resources or capitalize on university prestige. Education technology (or Ed Tech) has been leading this charge since long before the AI craze.
Ed Tech has gradually penetrated higher education with promises of increasing productivity in a labor sector notoriously resistant to automation. As Annie McClanahan and Louise McCune argue in their contribution to the recently published collection, University Keywords, the scalar expansion of pedagogical reach enabled by online education and the rapid privatization of Learning Management Systems (LMS) had profound impacts on the economics of the university. While these public-private partnerships have enriched private equity firms and tech companies, they have also helped university administrators circumvent shared governance with faculty and implement increasingly punitive and surveilling tech and policies, transforming pedagogy through the back door. The AI adoption we’re now seeing isn’t fundamentally different in this regard, but it does appear to be wildly more successful, at least in part because of the Trump administration’s attack on universities.
Even if administrators, or the donors who often fund procurements directly, aren’t motivated by nefarious intentions, the results are at best troubling. At Georgetown University, a rare holdout among elite institutions pressured by the Trump administration, a single donor and member of the school’s Board of Regents, William Sonneborn, has been pivotal in facilitating a number of pedagogy-massaging tech projects, including an Initiative on Pedagogical Uses of Artificial Intelligence which has been in place since 2023. Many of these projects are run through something called the Red House, which describes itself thus: “By rethinking and rebuilding curricular structures, forging strategic partnerships, and launching agile pilots, the Red House is shaping new ways of learning that better prepare and empower students in our world today and for our future.” In practice, what this means is the Red House, like most of the proliferating “centers” and “initiatives,” uses donor money and claims to “interdisciplinarity”—which just means they don’t officially belong to any academic department—in order to evade faculty control of changes to the structure and circumstances of university teaching.
As an example, the Georgetown University library implemented a set of AI “guidelines,” likely related to the AI Initiative, suggesting students might use AI to brainstorm, translate or edit material, and even summarize texts. Professor Nathan K. Hensley—who teaches in Georgetown’s English department and sits on the library advisory committee that might have been consulted on any AI policy decision, but was not—told me that faculty have been systematically circumvented in this decision-making process, with power seemingly handed instead to donor-funded initiatives. This circumvention appears to be part of a broader effort to snatch decision-making power over teaching, learning, and research away from subject-matter and pedagogy experts and give it to donors and administrators.
While certainly not alone among “donors who care about educational innovation,” Sonneborn specializes in financing “disruptive technologies.” Notably, he spent time in a couple of C-suite positions at KKR, a notorious private equity firm that recently spent nearly $5 billion to acquire Instructure, whose main product is the Canvas LMS, a software platform that faculty use to host course materials, assignments, grade reporting, course communications, and various other forms of course admin. This summer, roughly a year after the acquisition was announced, Instructure partnered with OpenAI to embed the latter’s slop machines in Canvas, effectively sneaking AI tools into university infrastructure. Even though Sonneborn might not have been involved in the KKR deal, it’s easy to see why he and the rest of the private equity world would be eager to get AI in front of students. Students have propelled the user numbers that helped OpenAI secure astronomical amounts of financing.
Sonneborn and KKR appear to be typical of an investment strategy whereby donors, aided by university administrators and regents, circumvent faculty governance in order to treat universities like incubators or, worse, as click-farms to inflate the user data seemingly keeping the AI investment machine humming. And, of course, it’s not just Georgetown. Bowdoin College has developed a critical AI program funded entirely by Netflix co-founder Reed Hastings. Duke, Cal State, the University of Chicago, the entire University of California system, and the University of Maryland are among the schools that are now boasting their own proprietary AI chatbots (last year, Maryland committed to hiring 30 new faculty members through its “AI Interdisciplinary Institute,” and was still advertising 10 job postings at the end of the summer). The university where I currently teach has its own AI Institute, which appears to be cannibalizing the Department of Writing and Rhetoric.
The Trump administration’s retraction of federal research money has only accelerated this situation, creating an irresistible opportunity for Ed Tech grift. The administration has itself promoted AI in the classroom, particularly through an AI education investment pledge. Alongside the usual suspects, signatories include right-wing influence groups like the American Culture Project, Heritage Action, and Moms for Liberty; Boston Consulting Group, which ran the deadly Gaza “aid” stations; and a panoply of start-ups whose names look like they came out of your computer’s password suggester. Over the summer, Microsoft pledged $4 billion in spending on AI in education—a lot of money, but less than KKR spent on Instructure. The American Federation of Teachers took $28 million from a consortium of tech companies to develop AI teacher training, for which its president, Randi Weingarten, has been duly skewered online. And while a number of universities—not to mention primary school systems—had already partnered with AI firms, the AI buy-in accelerated significantly this summer and has shown no signs of slowing. Even as the financial reporting on AI has reached a terrifying crescendo in the last couple weeks, the University of North Carolina announced it would consolidate its schools of Data Science and Library Sciences into a “School of AI.”
A corollary of the accelerated conversion of universities into profit centers for private equity is their accelerated conversion into glorified job-training centers. Most schools are treating their AI procurements as vocational training, in some cases directly transferring university resources toward this perceived demand for AI-literate workers. In an extreme case, Utah cut $60 million in statewide education funding, with the caveat that funding could be restored if schools shifted focus to high-wage majors, including AI training, effectively holding schools hostage. Colleges submitted “strategic reallocation” plans to the state, which, according to the Chronicle of Higher Education, “included eliminating academic programs, cutting course offerings, reducing administrative staff, and consolidating programs.” Every school that submitted a plan also pledged reallocated money to AI programs.
The Ohio State University will require every single student, regardless of major, to undertake AI “fluency” training as part of their degree. The program seems very much a top-down one, given its centrality to the university president’s strategic plan. Strategic plans generally make substantial use of faculty input, usually submitted through department-specific plans, though these are often, ahem, encouraged to line up with the administration’s own priorities. The speed and scope of this implementation suggest that little such input was involved here; while Ohio State’s president definitely consulted Google, it’s unclear how much faculty had to do with the plan at all. One wonders, though, how you train students for jobs that “don’t even exist yet.”
Born out of these now-distant federal research projects, then, AI returns to the university as part of a broad effort to further corporatize universities, vocationalize higher-education instruction, and diminish both research and research-based pedagogy. This, again, seems to have been part of the purpose of the Trump administration’s ransoming of research money. Brown University committed $50 million in grant funding to vocational training programs as part of its capitulation. Trump’s Secretary of Commerce, Howard Lutnick, has said he’d love to see a Harvard vocational school. But vocational training in an increasingly mercurial labor market is not easy for higher ed leviathans, and so this training is often farmed out to third parties; yet another way of turning universities into sites of profit extraction.
This is what American literature professor Matt Seybold describes as the “Ponzi austerity scheme,” whereby universities act merely as intermediaries between student loan originators and the third-party providers of these “trainings.” In place of the old research ecosystem, there is growing Ed Tech grift performed by companies offering “AI literacy” training under the legitimizing umbrella of an accredited university. Despite shocking declines in Americans’ ability to read and write the old-school way, we’re constantly told that literacy is the key to everything. I guess “AI literacy” is the key to “everything’s computer.”
The increasing vocationalization of the university also signals a final shift, away from the already insidious product-oriented education model that has long defined the corporatized university, and toward a subscription-based education, modeled on the software-as-a-service scheme that predominates in today’s tech. This has played out at all levels of the university. Campus bookstores and libraries, for example, have increasingly shifted toward digital asset subscription service models. A majority of university bookstores, which used to be independently run by the schools they serve, are now controlled by just two companies—Barnes & Noble Education, Inc., and Follett—which have generated substantial profits from automatic textbook billing for what they call “inclusive access,” market speak for time-limited access to digital textbooks. Barnes & Noble has also recently announced its own vocational training, called “Campus to Career.” Among its offerings is a digital hub called “Home from Campus,” which gives the impression that the only career worth having is gig work in social media marketing—a field particularly exposed to the threat of AI automation.
In the end, generative AI might be a blip in tech history, over-promising and under-delivering until finally the cash runs out. But it seems it’s arrived just in time to shove universities over a cliff edge they’d already walked themselves up to. To pose the obvious, perennial question: What is to be done about all this? Faculty have, in various ways, been militating against this latest Ed Tech salvo. In May, the American Association of University Professors (AAUP) released a report on AI adoption in universities that cited five key concerns, foremost among these (at least for my money) being the erosion of shared governance and work intensification. This report will surely provide much-needed firepower for faculty sitting on the many AI task forces springing up everywhere. Faculty-led initiatives to circulate pedagogical resources, such as Against AI; Seybold’s proposal to treat print as a “rent strike” against technofeudal Ed Tech interlopers; and the discursive resistance to “AI triumphalism” promoted by the editors of n+1 ... all of these are not nothing.
While many universities’ chilling preemptive compliance with the Trump administration’s demands to curb both campus speech and undesirable research demonstrates how disempowered faculty already are within their own institutions, higher ed union density has dramatically increased over the last decade or so. This rise in organized intellectual labor, unparalleled elsewhere in the U.S. labor movement, is the strongest weapon faculty have to help them reclaim decision-making power. Attempts to redress the negative impact of AI adoption in the classroom, while valiant, are something of a rearguard action. This problem was not generated in the classroom, and it can’t be fixed there. But clawing shared governance back can help ensure that universities aren’t converted into profit centers for private equity firms. It seems plausible, even likely, that faculty unions will have some success building power off of the growing resentment of ubiquitous AI and its slop products. This could be the only route to resisting AI adoption in universities and restoring faculty governance. As we say in higher ed labor organizing, our working conditions are student learning conditions. Militating against the transformation of not just the classroom but the university in general by “disruptive technologies” is a crucial front in the fight for higher ed.