Academia’s way of assessing undergraduates is finished. We cannot set an assignment that AI cannot do. For the next semester or two, people will try to dance awkwardly around this, which will just reward student dishonesty.
It’s time to think about the end state. There are two possibilities to choose from.
The University of No Technology
Option 1 is to remake universities (or at least certain subjects) into No Technology subjects.
Every assignment and exam is in-person invigilated, either on paper or an oral presentation. Your watch and phone are put into a locker at the front of the exam room.
There is no take-home assessment of any kind.
There are not many accommodations for disability either, unfortunately.
If you have a hearing or visual disability, you have to register the specs of your hearing aid and glasses. For each exam and presentation, you will be issued with a university-approved device from a stock on hand. Why? Because you can embed a micro-camera or in-ear whispering device, and UoNT takes “no technology” very seriously.
The graduates of UoNT-style universities aren't very employable, but they are alert and sharp in a world of people who don't think much. Magna cum laude from UoNT means that this person is a sharp mind, hard-working and diligent.
I suspect that science will be very hard to teach at UoNT. Data from one of my own experiments suggests that around December 2025, AI will be able to do a better job of creating hypotheses from data than any existing technique: narrative learning (done by AI) will outperform every other explainable machine learning technique. So the forefront of any scientific field will be inaccessible to the UoNT curriculum. Research mathematics will be so heavily managed by AI proof-bots that it too will be inaccessible. And I’m not sure how you would teach business studies without technology in an era when humans and bots are both essential components of how companies run.
So I’m guessing UoNT-style universities are going to be very big on humanities.
The extreme version of UoNT is Saint Benedict’s Integrated Monastery of Technological Asceticism, where students spend three years cloistered away from technology. Many very able students will sign up as a personal discipline to break technology addiction. SBIMTA graduates have an unworldly ability to focus for long periods on a single topic, and a strange calm about them at all times.
The University of AI Wrangling
Option 2 is to remake universities as playgrounds where you learn to use AI effectively by being given challenges that push the boundaries of what is possible. It’s all about creating outcomes; the method doesn’t matter.
Any technological support is allowed: you can use whatever helps you achieve the outcome. First-year undergraduates are given assignments that would have taxed a team of postdocs in 2020.
Their study and work is highly cross-disciplinary, because they can work across many, many fields and make productive impacts. How do I know this? Because I’m living it, doing a PhD in the AI-capable era. Eighteen months in, I’ve had a maths paper accepted, co-authored a paper in genetics, published a conference paper in statistical education (for an intervention with a world-leading effect size per dollar), collaborated with an economist on land pricing and with an ancient historian (translating more classical Greek than most humans alive, and giving a well-received workshop on it to an audience of papyrologists), collaborated with researchers in the business school, composed most of a quadriptych that people are writing about, and translated another book in the Heidi series. None of those are my actual research; they were just side quests I took on because they were easy. This is the speed of AI acceleration. What research could a university dedicated to AI wrangling achieve? We simply don’t know.
Commercially, when you employ a UoAW graduate, they will be your most productive employee. They immediately dive in and do the work of 100 specialists, but you have a lingering suspicion that they don't have the faintest idea what their major was even about. Many UoAW graduates will launch successful solo startups that achieve billion-dollar valuations overnight. You never know whether a UoAW graduate actually knows anything or has really learned anything. When you talk to them, something is a bit off, because they are used to talking to AI bots, not humans.
Also, since it’s all about outcomes, you're never sure whether they just paid someone (or something) to achieve them all.
You have no way of confirming whether the student actually engaged with the course at all. They might have had a bot watching the lectures for them and an avatar attending the Zoom calls for all the small-group activities. Effective students will have a bot watching for new assignments and making a start on them, marshalling whatever resources it needs to make progress, long before the student even sees the assignment.
So maybe this or that UoAW graduate paid someone to get their degree for them, or never actually had any involvement in it at all. Employers and investors don't much care: anyone with the resources to pull that off probably has the resources to make whatever they want happen.
The extreme case of this would be the graduates of Saint Isidore University of Holy Ignorance, where putting faith in the ability of AI to do anything is a core belief of the compulsory “theological” unit. Or maybe it should be Saint Ada. Many effective accelerationists might attend this university.
There isn’t any middle ground
We can choose which one we want, and I guess we could do this at the subject level rather than university-wide. But we can't pretend that there's a middle ground.
If you allow remote presentations (instead of in-person ones), you can't prove that it was the student who delivered them. If it's voice-only, ElevenLabs voice cloning is trivial. (I've already been asked to use it to clone lecturers' voices for their lectures.) If you want video, that's a bit harder to fake, but it's possible.
If it's an assignment they work on anywhere they have access to AI, then the incentives push students to use AI. It will do a better job than all but the most brilliant students. That ship sailed with gemini-2.5 and o3 (or maybe even with gpt-4.5). Writing, research, analysis, commentary, interpretation, translation: AI is already better than most students.
Looking for Google Docs history or other signs of activity (e.g. a git repo's commit history)? Just ask o3 or Operator to schedule a task that works on the assignment piece by piece over several days. That’s how I translated Au Pays De Heidi, not because I cared about hiding how I did it, but so that I could review a small amount of translation work each day. AI can fake a schedule that looks human, and the history itself can be forged even more cheaply, as the sketch below shows.
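A minimal sketch of that cheaper route: git takes its commit timestamps from the environment, so a whole week of plausible-looking progress can be written in seconds. Everything here is hypothetical (the repo path, the dates, the commit messages); the only real mechanism is git's documented GIT_AUTHOR_DATE and GIT_COMMITTER_DATE variables.

```python
import os
import subprocess
from datetime import datetime, timedelta

REPO = "/tmp/assignment"              # hypothetical: an existing git repo
START = datetime(2025, 5, 1, 20, 15)  # hypothetical first "evening of work"

for day, message in enumerate(
        ["scaffold notes", "draft section 1", "draft section 2", "final polish"]):
    stamp = (START + timedelta(days=day)).isoformat()
    # git reads both timestamps from the environment, so `git log` will
    # show four commits spaced a day apart, all written in one run.
    env = dict(os.environ, GIT_AUTHOR_DATE=stamp, GIT_COMMITTER_DATE=stamp)
    subprocess.run(["git", "commit", "--allow-empty", "-m", message],
                   cwd=REPO, env=env, check=True)
```

Anything that reads the log, including a marker checking for steady effort across the week, sees four evenings of honest-looking work.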
If you have traditional exams but students can wear glasses in, they could wear augmented reality glasses that connect to an AI service and can answer any question on the exam paper. There really is no way of policing whether that student wearing chunky glasses actually has a 5 mm × 5 mm computer chip in one of the arms.
And so on.
The METR results suggest that within the next 12 months or so we should have AI that can autonomously complete tasks that take human beings 2-4 hours. They also observe that this horizon is doubling every 7 months, so even a major take-home assignment will be doable as a one-shot prompt by the end of 2027 or early 2028.
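A back-of-envelope check of that date, under loudly labelled assumptions: take the top of the 2-4 hour range as the horizon at mid-2026, apply METR's observed 7-month doubling time, and call a major take-home assignment roughly 16 hours of focused work (my figure, not METR's).

```python
# Extrapolate the METR horizon-doubling trend under the assumptions above.
horizon_hours = 4.0            # assumed autonomous-task horizon at mid-2026
months_after_mid_2026 = 0
while horizon_hours < 16:      # ~16 h: a major take-home assignment
    months_after_mid_2026 += 7
    horizon_hours *= 2
print(f"~{horizon_hours:.0f}h horizon, {months_after_mid_2026} months after mid-2026")
# prints: ~16h horizon, 14 months after mid-2026
```

Fourteen months after mid-2026 lands in late 2027; a 30-40 hour assignment adds one more doubling and lands in early-to-mid 2028, consistent with the estimate above.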
With the exception of physical tasks (e.g. a titration test in a lab, putting a plaster cast on an arm), any undergraduate-level assignment you can set, AI can do. Any mitigation you can put in place (other than complete isolation from technology) is ineffective. If you pretend that this isn't the case, you're just going to reward student dishonesty.
On behalf of computer scientists everywhere, I apologise for what we've done, and we're sorry to everyone we've affected. Sorry for breaking academia.
Let’s not pretend that it isn’t broken; let’s choose which option we want to turn our institutions into, and start making it happen.
Comments

Hi, nice post. Could you expand on this: “Narrative learning (done by AI) will outperform every other explainable machine learning technique.”? I am not sure I understand what you mean by narrative learning here and how this would outperform any XAI technique while staying reliable (not that XAI is reliable either, but that’s another can of worms).
The high school & college students of today are pretty disadvantaged by the inability of teaching to adapt to the rapid pace of LLM improvements. The "no-tech" teaching option is laughable, since it could only be more ridiculously inadequate for helping kids enter the real world if they were asked to learn chancery cursive and wear 17th century clothing.
The wrangling path is the one advocated by Ethan Mollick at Wharton, who has been doing some of the best work envisioning post-AI education.