The AI Crisis in Higher Ed Isn’t Just About AI
It's about whether universities want to recommit to their educational mission
Note: It’s been a while since my last post here. In that time, I’ve been working on a new book about the philosophy of work and continuing to grow the Sheedy Family Program at Notre Dame.
Coming back to The Space of Reason, I’d like to continue to use this space to connect the dots between these (and a few other) projects in philosophy, education, technology, and the future of work. Today’s post on AI and higher-ed is representative of this goal…
What if the AI Crisis is Really About Teaching?
Everybody knows students are using generative AI to complete coursework. But the desperation in how professors are talking about it is telling. In hallway conversations and op-eds, the tone swings wildly between anxious optimism (maybe AI itself can solve the problem through surveillance?) and strange defeatism (Fine — if they’re using it to write their papers, I’ll use it to grade their papers).
When I read the faculty discourse around AI and learning, I’m reminded of the “hermeneutics of suspicion,” a term coined by philosopher Paul Ricoeur to describe an interpretive lens that assumes deception and bad faith. Many of us now approach every assignment submitted to our learning management systems with the accusatory posture characteristic of an acrimonious divorce. But you can only hold this posture for so long before it drives you nuts. If grading has become a zero-sum game of mutually assured gaslighting, it might be time to find a wider frame. It might be time to ask whether a more charitable interpretation of the situation is available.
The boundaries between acceptable and unacceptable AI usage are blurry right now. In part, no doubt, it’s because these tools — like most technological innovations — are like the wand in “The Sorcerer’s Apprentice”: In the right hands, they can amplify our efforts for good, but when used imitatively without mastery, they can unleash dark and mysterious powers.
Like the apprentice who imitates a spell he doesn’t understand, many students — and faculty — are using AI tools without mastery. But it’s hard to say who the apprentice is anymore. Perhaps digital natives, who are perpetually more adaptable and open-minded than the technologically conservative intellectual ruling class, are simply embracing the evolution of the writing process. After all, we used to roam massive libraries guided by handwritten index cards before word processing software and the internet revolutionized research. As in previous eras, practical knowledge and wisdom seem to be unevenly distributed across generations.
As someone who spends a lot of time thinking deeply about teaching — and teaching students to think deeply about their own learning — I’m surprised that I haven’t yet seen more people take up what I see as the most uncomfortable question: What if students who turn to AI for homework help are just responding rationally to the system we’ve designed? What if they’re just giving us what we’ve long been asking for: a flawless performance of disciplinary knowledge production?
I want to take this theory for a test drive. Though it may feel like dangerous territory for academics who depend on a delicate ecosystem that’s able to turn tuition dollars into research outputs and, eventually, permanent job security, I want to explore the possibility that the “AI crisis” in higher education right now is less about AI and cheating than it is about teaching, learning and the purpose of the university.
Let’s suppose, for a moment, that using AI to complete coursework isn’t primarily a sign of laziness or entitlement. Suppose, too, that it’s not a sign of moral degeneration or simple indifference to traditional scholarly values like integrity, honesty and transparency. Suppose, instead, that students are using tools like ChatGPT not because they want to cut corners, but because they increasingly fail to see the college classroom as a site of meaningful learning. On this theory, they’re not trying to game a system they otherwise believe in for some short-term advantage. On this theory, they’ve lost faith in an educational system they don’t think can make good on its lofty promises.
Here’s a terrifying possibility. What if the institution of higher education in this country has drifted so dramatically over the past 200 years that we’ve forgotten what it even means to receive a liberal arts education? What if the chatbots aren’t disintegrating a largely functional (if imperfect) model of learning, but revealing what students have long suspected: that universities aren’t really in the business of teaching for intellectual or personal transformation?
In suggesting this, I’m not positing some recent cultural conspiracy. The problematic possibility I see goes deeper than any recent ideological or political currents in academia. Indeed, it was laid out plainly in 1947, when the Truman Commission warned, “Present college programs are not contributing adequately to the quality of students’ adult lives … in large part because the unity of liberal education has been splintered by overspecialization.” The warning, like many before and after it, was largely ignored.
Even this is merely an echo of a refrain we’ve heard throughout the history of American higher education — voiced, for instance, in the late 19th century when critics of the new research universities lamented that faculty and students alike had been caught up in the factory model of knowledge production, rather than anything like pursuit of the truth for its own sake. On this telling of the story, laid out beautifully by Jonathan Zimmerman in a recent book, a fundamental fracture appeared in academia with the adoption of the German research model, when prestige and peer review became synonymous, when the seeds of the publish-or-perish model were firmly planted in the soil of our campuses.

I see each instance of this critique as an inflection point, an invitation to reconsider the idea of the university in our cultural moment. Put more ominously, though, these are reckonings: moments when failure to realign our vision with our practice threatens to undermine the whole institutional structure.
Before the critics accuse me of romanticizing, let me note that I don’t believe in any sort of prelapsarian Garden of Education; even the very first American students in the colonial period were rebelling against ineffective, rote learning that alternated between tedious recitations of memorized text and unbearable, droning lectures. But the problem that existed in potentia for these students was consciously incubated as we continued to expand our ivy-covered walls. And what was once a small crack in the foundation of the ivory tower now threatens major structural damage.
But cracks aren’t always devastating, so long as we pay attention to them. After all, in the words of The Poet, “that’s how the light gets in.” What we need now isn’t just a defense of traditional values, increased surveillance or a total ban — but a recovery of purpose. We need to renew, in our communities, what our missions have always stated. The purpose of the university isn’t simple knowledge production or credentialing. At its best, it forms people to the good. It enlightens and liberates. It creates the conditions for shared political life and, ultimately, human flourishing.
That’s work that’s worth showing up for — and that’s one thing AI cannot do for us.
I’ve been fortunate to help build a different kind of learning environment at Notre Dame, one that offers a small but powerful counterexample. The Sheedy Family Program I direct brings students together in tight-knit cohorts and invites them to explore the relationship between work, meaning and the good life. We study business, but always in conversation with the liberal arts. We ask basic questions — about freedom, responsibility, risk and purpose — not as a break from their “real” education, but as its foundation.
There’s no credential tied to the program. No transcript notation or résumé boost. And yet students show up — late at night for unscripted dialogue, on retreats with no phones, at semi-formal dinners where the only requirement is attention and conversation. We’ve learned that when students sense the work is real — when it connects to their deepest questions — they don’t want to automate or offload. They want to participate and engage.
That, I think, is the invitation before us. Not just to regulate AI or fortify our assignments, but to rebuild trust in the idea that learning itself still matters, and that it’s possible right here and right now. The Sheedy Program isn’t a universal solution. But we can create more spaces like it — where learning feels less like performance, where education feels more like formation.
References
On the relationship between learning and AI:
The Chronicle of Higher Education: Is AI Enhancing Education or Replacing It?
The New Yorker: Will the Humanities Survive AI?
On student AI usage:
The Chronicle of Higher Education: How Are Students Really Using AI?
The New Yorker: The End of the English Paper
New York Magazine: Everyone Is Cheating Their Way Through College
A philosophical diagnosis that gets it mostly right, in my view:
The Chronicle of Higher Education: In the Age of AI, Education Is Just an Illusion