In the fall of 2023, I asked my undergraduate students at Arizona State University what they thought about ChatGPT. I expected widespread adoption. What I got was closer to a collective shrug. Only about a third of them used it regularly. One student put it plainly: “I came to college to learn — why would I ask a machine to do this for me?”

That surprised me. It also complicated the narrative I was hearing from colleagues, many of whom were treating AI in the classroom as either a crisis to be contained or a revolution to be embraced. The students were more nuanced than either camp.

Two and a half years later, the picture has shifted considerably. Students haven’t become more anxious about AI. They’ve become more strategic. They use it as a mobile tutor, a second brain, a research partner. They’ve developed informal peer-to-peer knowledge networks about which tools to use — learned by peering over classmates’ shoulders, completely disconnected from anything their instructors are offering. And they’ve become skilled at reading their professors’ attitudes, code-switching between courses where AI is welcomed and courses where mentioning it feels risky.

The gap between what students are actually doing with AI and what educators think is happening has become, I suspect, one of the most consequential problems in higher education right now.

Three Curves, One Problem

I find it useful to think about this in terms of three overlapping S-curves.

The first is AI capability — the raw power of the technology. Through much of 2024, this curve looked like it might be flattening, at least for the current generation of large language models. But the emergence of agent-based systems (tools like Claude Code and Codex) and increasingly sophisticated reasoning capabilities have bent it sharply upward again. We’re not on a plateau. If anything, the capability curve is steepening in ways that caught even close observers off guard.

The second is AI utilization — how AI is actually being used in the world. This curve is still accelerating. AI is embedded in search engines, writing tools, coding environments, creative platforms. It’s in Siri, Alexa, Perplexity. Students encounter it whether they choose to or not. Most of the interesting developments here are happening outside universities.

The third is educator perception — how the people running universities understand what AI is and what it can do. And this is where the problem lives. Many educators’ mental models are three years and a lifetime out of date. They’re still thinking of AI as an app you fire up, give a prompt to, and receive a response from. That model misses agent-based systems that can autonomously research, code, and reason across complex tasks; voice interfaces; embedded integration across everyday tools; and the dramatic improvements in reliability that have changed the trust equation.

The dangerous gap isn’t between capability and utilization. It’s between utilization and perception — and with capabilities accelerating again, that gap is widening faster than ever. Decisions about academic integrity, assessment design, and curriculum structure are being made on the basis of what ChatGPT could do in early 2023. Academic integrity policies built around detecting ChatGPT-style outputs are increasingly brittle. And students, who are navigating the actual landscape rather than the imagined one, are running rings around frameworks that don’t account for what they’re doing.

Teaching the Tool

In the summer of 2023, I developed a for-credit undergraduate prompt engineering course at ASU — one of the first in the country. I co-designed it with ChatGPT, which seemed only appropriate. You can’t teach people to work thoughtfully with a tool you haven’t engaged with seriously yourself.

The course was, in retrospect, a snapshot of a particular moment — nearly three years ago now, which in AI time is an era. It focused on foundational prompt strategies, understanding LLM limitations, reducing ambiguity, ethical considerations. ASU turned it into a Coursera offering that has now enrolled over 15,000 people. But what mattered more than the content was the principle behind it: that universities should be teaching students how to think about AI, not just how to use it — and not by locking it away.
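
To make “reducing ambiguity” concrete, here is a minimal sketch using OpenAI’s Python client. This is my illustration for this post rather than actual course material, and the model name is only an example: the same request goes out twice, once vague and once with audience, scope, length, and format pinned down.

```python
# A minimal illustration of "reducing ambiguity" in a prompt.
# Assumes the official openai package and an OPENAI_API_KEY in the
# environment; the model name is just an example.
from openai import OpenAI

client = OpenAI()

vague = "Tell me about photosynthesis."

# The constrained version pins down audience, scope, length, and
# format, which is most of what early prompt engineering amounted to.
constrained = (
    "Explain photosynthesis to a first-year biology undergraduate in "
    "roughly 150 words. Cover the light-dependent and light-independent "
    "reactions, and end with one common misconception."
)

for prompt in (vague, constrained):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Comparing the two outputs side by side was, in miniature, the whole pedagogical point: the quality of what you get back is a function of how precisely you ask.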

What confirmed this was a prompt engineering hackathon organized by ASU’s student-led Prompt Engineering Club. Nearly forty students — from experienced coders to people who’d never used ChatGPT — spent three hours building custom GPTs for DreamBuilder, a global initiative supporting women entrepreneurs across 130 countries. The results blew me away. A funding guidance tool that generated custom PDFs. A visual storyboard generator for diverse learners. A personalized tutoring system. These students, given freedom and a meaningful problem, produced things that would have taken professional teams weeks.

What I took from the experience: when you create playgrounds rather than playpens — open experimental spaces rather than controlled, restrictive ones — the outcomes can exceed expectations considerably.

The Dissertation Experiment

In early 2025, I decided to test a question that had been nagging me: Could AI write a PhD dissertation?

The experiment was straightforward. I gave OpenAI’s Deep Research tool the question “Can humanity survive the emerging polycrisis?” and three sub-questions. Initial single-prompt attempts failed. I pivoted to a chunked approach: foundational research and chapter outline first, then each chapter sequentially with accumulated context. The AI needed about three hours of thinking time per chapter. Total elapsed time, working around a full academic schedule: four days. Output: a 400-page dissertation.
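
For readers who want the shape of that workflow in code, here is a minimal sketch. The run_deep_research() helper is hypothetical, a stand-in for however you invoke a long-running research tool; the actual experiment ran interactively through OpenAI’s Deep Research interface, and the prompts below are illustrative rather than the ones I used. The step that mattered was feeding accumulated context back in, which is what kept later chapters consistent with earlier ones.

```python
# A sketch of the chunked workflow, under stated assumptions: the
# run_deep_research() helper is hypothetical, and the prompts are
# illustrative, not the ones used in the experiment.

def run_deep_research(prompt: str) -> str:
    """Stand-in for one long-running deep-research call (hypothetical)."""
    raise NotImplementedError("Wire this to your research tool of choice.")

QUESTION = "Can humanity survive the emerging polycrisis?"

# Step 1: foundational research plus a chapter-by-chapter outline.
outline = run_deep_research(
    f"Research question: {QUESTION}\n"
    "Produce foundational background research and a numbered chapter "
    "outline for a dissertation addressing this question."
)
chapter_headings = [
    line for line in outline.splitlines() if line.strip()
]  # in practice you would parse the outline more carefully

# Step 2: write each chapter in sequence, feeding accumulated context
# back in so later chapters stay consistent with earlier ones.
chapters: list[str] = []
for number, heading in enumerate(chapter_headings, start=1):
    context = "\n\n".join(chapters)  # may need summarizing at scale
    chapters.append(run_deep_research(
        f"Outline:\n{outline}\n\n"
        f"Chapters written so far:\n{context}\n\n"
        f"Now write chapter {number}: {heading}"
    ))

dissertation = "\n\n".join(chapters)
```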

The result was frighteningly close to human-written work. Not because the dissertation was brilliant — it wasn’t. But because it was close enough to matter. The synthesis across disciplines was broader than most human-written dissertations I’ve read. The argumentation was coherent. The structure was sound.

And then the weaknesses. Deep Research was lousy — really lousy — at providing in-text citations, compiling bibliographies, and using primary sources. It relied on easily accessible websites. It cited Goodreads. There were repetition problems and limited genuine originality. When I asked it to self-assess, it rated its originality at about 70% — individual concepts existed in the literature, but the particular synthesis was somewhat novel.

I’m not sharing this to suggest AI can replace doctoral research (not yet, at least, though even that line may be crossed soon). I’m sharing it because the gap between what AI can produce and what our educational institutions are designed to assess is widening faster than most people in those institutions realize. If a four-day AI experiment can generate something that sits within shouting distance of a multi-year human effort — at least for synthesis-oriented, non-experimental research — then the purpose of the PhD itself needs rethinking, not just the assessment methods.

What Students Actually Value

Here’s what I keep learning from students, and what I think too many educators miss: AI doesn’t flatten learning values. It reveals and amplifies them.

Students who came to university to take shortcuts will use AI for shortcuts. Students who came to learn will use AI to learn more. One of my students, Caleb Lieberman, described walking around campus asking AI about everything he encountered — using it as a mobile tutor to expand his understanding of the world. Another, Bella Faria, was emphatic that education shapes who she is, not just what she knows, and used AI to deepen rather than replace that process.

I must confess that interviewing these students nearly moved me to tears. Not because they said what I wanted to hear, but because they articulated something about the purpose of education that I think many institutions have lost sight of: it’s not primarily about information transfer. It’s about human formation. And when you understand education that way, AI becomes a tool for formation rather than a threat to it.

The irony is that everybody has an opinion about students and AI. Very few people ask the students themselves.

The Questions We Should Be Asking

Most of the institutional conversation about AI in education is still organized around cheating, detection, and academic integrity. I understand why. But I think those concerns are becoming increasingly beside the point — not because they don’t matter, but because they’re obscuring the deeper questions we’re not asking.

An analogy I find useful here is the Sorcerer’s Apprentice — and not in the way it’s usually invoked. The standard reading is about hubris: the apprentice wields power he doesn’t understand and loses control. But read more carefully, the story is about something else: what happens when you gain access to capabilities that outstrip your wisdom, and no one has taught you how to think about the difference.

That’s where higher education sits right now. And the questions that matter aren’t about policing — they’re about formation:

What does competency mean when AI can perform most of what we currently assess? What does success look like when the traditional markers — the degree, the GPA, the polished paper — can be produced in a fraction of the time by a machine? How do we help students avoid the illusion of understanding that comes from fluent AI outputs? What happens when students become the masters and their teachers the apprentices?

And beneath all of these: what do we owe our students in an age of AI?

I don’t have good answers to most of these. But I’ve given keynotes on them at the Yidan Prize Conference, OEB Global in Berlin, and Ellucian Live. I’ve built courses on Coursera and CodeSignal. I’ve tested whether AI can design an entire undergraduate degree program from scratch — it can, in about an hour, and the result was uncomfortably better-structured than many programs designed by committee over months.

And through all of it, I find myself in the same uncomfortable place: ignoring AI’s power may be just as naive as wielding it without understanding. If AI-augmented approaches to teaching and learning could genuinely help students — not by replacing human judgment but through expert-guided enhancement — then the principled position may not be resistance. It may be responsibility.

The question is whether we’ll find ways to ask these harder questions before we’re swept past them.

This post is part of a series on my work on AI and the future of being human. For the tools and resources I’ve developed for educators, see Teaching and Learning in an Age of AI and the free AI Companion and Instructor Guide.