What if AI usage were normalised in higher education?
Students have already made their choice
Brain Rot.
“So my problem is that our younger generation is relying too, too, too much [on] what was supposed to be an enabler. AI was supposed to be a tool, you know, for further progress,” someone complained at a discussion I attended about AI recently.
Everyone worries about brain rot in the younger generation: first it was the impact of computers, then the impact of social media, and now it’s brain rot because of AI.
The ‘AI brain rot’ panic misses the point: education mostly measures compliance, not learning, and AI just makes that obvious.
As Reed Hastings pointed out in a recent interview on the impact of AI on schools:
“But mostly students are going to ChatGPT instead of specialty applications. And so whether that’s Khan Academy, of which I’m a board member of, or others, people are learning that, you know, the AI chat is a very broad and useful tutor.
So if you need some help in physics, that’s the first place you go. If you need to plan travel, if you want to ask a boy out, you know, it’s like wide ranging, you know, counseling. I mean, you know, it’s already there for younger people and they’re using it, you know, in huge numbers.”
Whatever institutions decide, the fact is that the choice has already been made.
For students, AI is here to stay. The education system needs to adapt to it.
The Compliance Stack
Higher education is optimised for proof, not learning. Srinath Sridharan recently identified a key problem with the education system (though someone I know who runs a school disagreed with it):
“For many private school owners, the institution has gradually become a social instrument rather than an educational one. The school’s name confers standing, facilitates access, and offers the durable social legitimacy of being labelled an educationist. Far less visible is rigorous introspection on what actually transpires in classrooms, how learning outcomes are shaped, or whether children are meaningfully better prepared as a result.
“The costs of this arrangement are systematically transferred away from institutions and onto families. Tuition dependence, even among young students in expensive schools, carries no reputational or regulatory consequence for owners.
“Under-supported teachers operate within constrained systems, students internalise inadequacy, and parents absorb escalating financial and emotional burdens. In the absence of transparent public data on ownership, teacher compensation, and learning outcomes, accountability fragments across administrative layers.”
Formal education is largely compliance, and students (and parents) have no leverage against it.
It’s as if everything works backwards from job applications: the degrees, the projects, the internships. Before that, in school, projects feed essays for college applications, while examinations are reduced to pure numbers because of college cut-offs.
When you’re in school, your job is to get admission into college. The moment you’re in college, your goal is to get a degree so you can get a job, not to learn. Everyone is in compliance mode: students need it, and teachers and schools largely facilitate it.
I was discussing this with my friend Vibodh Parthasarathi, Associate Professor at the Centre for Culture, Media & Governance (CCMG), Jamia Millia Islamia in New Delhi, and he pointed out a structural issue:
“At the end of the day, your degree is certifying not what you were taught, but how you performed. The employer won’t ask, ‘Show me what your syllabus was.’ You got an A+ in this? Oh, wow. Very good.”
Compliance wins because it is the only thing that is measurable, and measurability entrenches the compliance stack: learning matters more, but it is difficult to observe, compare, and certify at scale.
We are thus optimising for compliance, not for learning. Even the measurability is flawed: most employers now test for applicable skills, or train people themselves, because they have no faith in the compliance system and cannot infer learning from the numbers.
Numbers tend to be a rejection criterion rather than a selection criterion, used simply to ease filtering.
Once education, as it currently stands, is understood as a compliance stack designed to generate certificates as proof, the disruption caused by the normalisation of AI is easier to understand:
AI breaks what has traditionally stood as a proxy for learning.
Conceptually, just as with content, apps, classifieds and AI Agents, AI creates a paradigm shift from outputs to outcomes even in education.
How assessment breaks
AI actually fits neatly into the current system because it reduces the cost of compliance for students.
Why wouldn’t a student use AI? For them, turning in an assignment is compliance. Giving an exam is compliance. AI gets used because it aligns neatly with the incentive structures in education.
A student’s goal shifts from learning to just handing in a submission, because institutions prioritise the format and the deliverable over internal cognition.
An observation from Vibodh suggests this issue predates AI:
“You have students who might get relatively higher grades in their written submissions, but when quizzed on it during their presentations, you realise they have not grasped the material. So I don’t need to do an AI check on that submission.”
AI widens the gap between output and understanding by reducing the work a student needs to do to submit an assignment.
Vibodh adds that AI also “reduces your academic labor”:
“When your library didn’t have journals, you had to go scavenging for them. When it did, you had to look at shelves to find the right one, and read through a lot to find what you were looking for. When we started to access journals on electronic databases, the process of finding what is relevant got shortened by keyword, author, or subject search. Now, you’re prompting a language model to come up with what journals it should actually be looking at, and even summarize the literature.”
An assignment is no longer a proof of work or learning.
Additionally, “traditional assessment was a timed handwritten exam. What is being tested there is memory, and we’re also testing for speed,” Vibodh says. “You’re not really testing anything else.”
An exam was never proof of understanding.
My thinking is we shouldn’t be testing for memory, and we shouldn’t be testing for speed. We should be testing for comprehension.
As the description of Srinath’s article points out:
“What is not measured—foundational understanding, reasoning ability, and intellectual independence—quietly exits institutional priority.”
Students optimise submissions because that’s what the institution rewards, while the institution resorts to exams because they’re controllable. What the gap created by AI usage tells us is that neither assignments nor exams are adequate measures of comprehension.
If students are “cheating” by using AI, they’re cheating on things that don’t matter beyond compliance anyway.
How the roles of faculty and assessment change when AI is normalised
The teacher has to redesign evaluation and be far more agile, Vibodh says, adding that the burden should not be carried by the student alone:
“So we are reconfiguring what is expected as a submission (assignment or exam) here.”
“And for that, the homework would have to be equally done by the faculty every semester.”
Faculty merely rotating questions cannot work anymore. When AI makes compliance metrics redundant, it exposes what the faculty is actually responsible for: learning.
I asked him to suggest how the system mechanics must change if AI usage is normalised instead of being banned. Ideas from our discussion (some his, some mine):
One, flip the approach to evaluation. Vibodh said:
“I judge you not by what you are answering, but by what question you are posing. This way, I’m not evaluating memory, or necessarily, the breadth of reading or analysis. I’m evaluating the ability to state the problem.”
This is a radical shift in evaluation logic, and it forces the faculty to grade curiosity: the framing of a problem as evidence of learning.
Two, consider mechanisms to ensure effort. I read somewhere on X that a teacher asked students to submit handwritten assignments, because the process and effort of writing can improve absorption.
Three, take the Harvard case study approach. I suggested we shift reading and studying to homework, and discuss case studies in class, almost as if it’s a viva exam.
Four, another option is timed open-book exams: then we’re not testing memory, but the ability to extract information from a finite source in a finite time. This assesses understanding and information extraction under constraint.
Five, for examinations, Vibodh suggested a switch to the student both posing a question and answering it:
“So then actually, I am looking at two skills here. The ability to problematize—how complex that problematization is—and how well you can structure your answer or argument.”
Six, cross-verification of AI-generated content. Vibodh says:
“Attention should be on cross-checking, since research has been showing us that models may be biased…. So the task is not just to think of how to prompt the machine, but to critically capture what it has churned out or summarised, and then cross-check for foundational matters like validity, relevance, bias, etc.”
This test treats an AI output as something to be interrogated, not believed. A student can be asked to elaborate on where the AI output (or a provided text) is accurate, misleading, reductive, or biased. A student who doesn’t understand the material cannot meaningfully critique a summary of it.
When I wrote about ChatGPT Health, I said that AI is most valuable to users who know how to distrust it.
Seven, cross-questioning of presentations. Vibodh said that while a student might use AI to make a PPT and present from it, their learning can still be evaluated through their understanding of what they’re presenting.
However, this creates its own bottlenecks. Every assessment becomes significantly qualitative, and may differ in mechanics for each individual. An element of bias may also creep in. Assessment becomes subjective, and that doesn’t scale.
So how do you then ensure that there’s fairness? By the time you’re on your 15th or 20th paper, you’re already tired. How do you make assessments based on a class discussion?
The burden of proving fairness alone will overwhelm teachers when every grade becomes contestable; overload, uncertainty, and fatigue could push them to revolt.
Vibodh suggests that when they began doing presentations along with submissions, they had to develop terms of reference: “You’ll be evaluated for this, this, this, and this, and this, this, and this, and this. Either you can give it to the student in advance, like US universities do, or we keep it with us as faculty as our metrics.”
Terms of reference can be used to justify the grade when a grievance is raised. This is standardisation as an administrative safeguard when answers are no longer standard.
Vibodh’s takeaway from our conversation on AI in higher education, which lasted around an hour and a half:
“The part that has never been clarified, as technology changes, is that what we need to do is redefine what is being evaluated here. That, for me, is the big takeaway from this.”
What changes for educational products when institutions shift from outputs to learning as an outcome
One, we need systems that capture learning trajectories, not just outcomes or final submissions: how a student learned, how they changed their point of view, and the gap between what they knew previously and what they know now (a sketch follows this list).
Two, we need systems that adapt to differences in learner intent. Some will choose to go down rabbit holes, while others will do just enough. If a system only works for motivated users, it fails to scale learning. Not every learner interrogates issues when AI offers an easier way out.
Three, we need systems that adapt to gaps in understanding: systems able to interrogate a user, identify those gaps, and help figure out what someone needs to learn to reach the next level. Students have unknown unknowns - they don’t know what they don’t know. Most importantly, systems should know when not to answer, to allow the student to learn for themselves. We don’t need answer engines, but engines that enable answers.
Four, systems need to optimise for encouraging questioning, not giving answers. They need to make people think: not be rigid, but leverage recommendation engines to gradually guide them towards learning outcomes.
Five, products must be careful not to fill gaps: when a user gives only 5% of the attention required, AI shouldn’t rush to fill the gap, but nudge them into action.
Six, products need to maintain a longitudinal memory of learning, anticipate problems, and adapt solutions, while also identifying when historical context stops mattering. This is not going to be easy, but documentation needs to become a norm.
Seven, systems need to be explainable, in order to aid teachers with grievance redressal. Evaluation logic is hard to define, scale and justify. They must be auditable without turning learning into a bureaucratic process.
Eight, systems need to optimise for trust: Once grades lose authority, trust in the process matters more than the outcome. Students need to trust fairness, teachers need protection from disputes, and institutions need legitimacy. Trust must be designed explicitly, not assumed.
Nine, systems need to enable comparison without ranking. Products must support side-by-side interpretation of reasoning, growth, and problem framing across students and over time, without forcing ranking.
Ten, systems need to provide signalling: to prospective colleges, prospective employers, and hence to students, in order to address existing processes that depend on compliance.
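Since several of these points (one, three, six, seven, and nine) describe the same data-capture problem, here is a minimal sketch of what a learning-trajectory record could look like. It is purely illustrative: the LearningEvent and Trajectory structures, the 0-5 levels, and the gaps() heuristic are my own assumptions, not any existing product’s design.

```python
# Hypothetical sketch of a learning-trajectory record. All names here
# (LearningEvent, Trajectory, gaps, growth) are illustrative assumptions,
# not an existing product's API.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class LearningEvent:
    """One observed step in a student's learning, not a final submission."""
    timestamp: datetime
    concept: str               # e.g. "framing a research question"
    evidence: str              # the question posed, the critique written, etc.
    self_assessed_level: int   # 0-5: what the student believes they know
    observed_level: int        # 0-5: what the interaction actually showed
    rationale: str             # why this level was assigned (explainability)


@dataclass
class Trajectory:
    """Longitudinal memory of learning for one student in one course."""
    student_id: str
    events: list[LearningEvent] = field(default_factory=list)

    def record(self, event: LearningEvent) -> None:
        self.events.append(event)

    def gaps(self) -> list[str]:
        """Concepts whose latest event shows self-belief exceeding observed
        understanding: candidate 'unknown unknowns' to probe, not answer."""
        latest: dict[str, LearningEvent] = {}
        for e in sorted(self.events, key=lambda ev: ev.timestamp):
            latest[e.concept] = e
        return sorted(c for c, e in latest.items()
                      if e.self_assessed_level > e.observed_level)

    def growth(self, concept: str) -> Optional[int]:
        """Change in observed level from first to last event on a concept,
        enabling comparison of growth over time without ranking students."""
        levels = [e.observed_level
                  for e in sorted(self.events, key=lambda ev: ev.timestamp)
                  if e.concept == concept]
        return levels[-1] - levels[0] if len(levels) >= 2 else None


# Usage: recording a student's attempts at critiquing AI output over a term.
t = Trajectory(student_id="s-101")
t.record(LearningEvent(datetime(2025, 1, 10), "bias in AI summaries",
                       "flagged one misleading claim in a model summary", 4, 2,
                       "caught an error but missed the framing problem"))
t.record(LearningEvent(datetime(2025, 3, 2), "bias in AI summaries",
                       "critiqued validity, relevance, and bias in turn", 4, 4,
                       "cross-checked sources independently"))
print(t.gaps())                          # [] - belief now matches observation
print(t.growth("bias in AI summaries"))  # 2
```

The point of the sketch is that the unit of record is the event and its rationale, not the grade: that is what would make such a system auditable for grievance redressal (point seven) and comparable without ranking (point nine).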
Lastly, I’m not looking at this as a blueprint for fixing education: I think those systems will be hard to change, and institutions are so set in compliance that by the time they change, it will probably be too late.
In my opinion, the problem is not brain rot, but a lack of intent stemming from a system that doesn’t optimise for enabling curiosity and self-learning, and leads students to focus on compliance over learning.
AI in education is most powerful for students who already have intent and curiosity and know how to think, question, and doubt; it’s regressive for those who just want easy answers.
Somewhere, whatever is built has to enable the willing, and make the unwilling curious.