Dispensability Is a Superpower
And that's exactly why higher ed can't figure out AI
The Second Draft: #0094
I write weekly articles for educators who are ready to get unstuck from outdated curriculum, resistant institutions, and a career that was built for a world that no longer exists.
Last fall, I was on an AI panel for a student program kickoff. A hundred-something students at the start of the semester, talking about AI.
My focus, of course, was on how AI disrupts the core assumptions of education.
It was something of a lifting of the veil, letting students in on the secret of how challenging AI really was for educators at the time.
We talked about three things:
How education trades on a persistent myth and a broken promise: that higher education has exclusive access to rare information.
How educators understand their role as distributing that information into students’ minds in the same logical, sequential manner every semester.
And finally, how the most rational thing for students to do is to use AI to complete their assignments.
Then I lifted the veil a little higher.
I told them that most of their faculty have no idea why they assign the things that they do. That’s not to say there isn’t some rationale behind their assignments. It’s just that the rationale has never been interrogated, so faculty have no real confidence that a particular assignment, given in a particular way, is the most reliable way to produce the outcomes they intend it to produce.
I suggested that this was another reason why students should be motivated to use AI to complete their work—most coursework really is meaningless.
But then I immediately challenged my own suggestions.
I reminded students that these will be the most expensive two letters they ever pay for. And since they’re spending a lot of time and money to be here, we can agree that the goal is for them to get the grades, to get the degree, to get the job, AND we can also agree that there is a much greater opportunity for them during these four years than just the grade.
And, paradoxically, that opportunity also runs through AI.
So I offered another challenge. Before you just hand the assignment to AI to complete, paste it into your LLM of choice and ask:
What do you think my instructor wants me to learn from this? What knowledge, skills, and abilities do you suspect they’re trying to have me develop?
And then ask it to create alternative assignments that would be meaningful to you, assignments that build those KSAs and demonstrate your knowledge. Do those. See if your instructor will accept them as evidence of your learning.
This idea seemed to really connect with the audience, which included a large percentage of highly motivated students. I could see this was an AI use case they didn’t realize was available to them, and one they never would have realized they needed.
It’s All About Me
There was a reception afterwards for the panelists, the students, and all the other attendees, including faculty.
A particular faculty member found me immediately and approached me about my presentation.
He was not happy.
He told me the faculty do know why they assign things. Considering I had direct access to literally thousands of assignments in my role, and had seen the worst we can do, I was able to push back directly but gently. Beyond that, I suggested that even if we granted that faculty have a specific purpose behind an assignment, it does not follow that the assignments they create effectively communicate or achieve that purpose.
He was not happy. But that also wasn’t his core objection.
He next took issue with my point that students should ask AI what KSAs their instructor was trying to develop with their assignments.
He demurred:
“I don’t want them to ask AI. I want them to ask me.”
Well, there it was. Now we can see clearly what’s at stake.
Munchausen by Proxy
In the last article, I wrote about how AI forced me to realize that I was often the bottleneck in my own work.
I noted that my value was directly correlated with the inability of the people that I served to do their job without me.
And I asked this question:
Would I hire myself today, given AI?
But there’s another question that’s even more challenging and unsettling:
Am I even necessary?
I can make all the claims I want about the importance of humanity, or the expertise that I have, or the connections I’ve built, and the impact that I’ve had. But here’s the truth:
The actual work that I do + the work that AI can do = very little about my work is necessary.
In many ways this is what’s playing out behind the objections to AI from folks inside higher ed.
Education as Entrapment
There were several others along the way, but one final objection the instructor made was this:
Even without AI detection software, he could confidently tell when students used AI. He explained that he’d spent years learning to spot the types of mistakes students typically make in their writing.
As a quick aside, this was not a writing course.
But leaving that aside for a minute, let’s get to his objection: that fewer errors signaled something was wrong in student work.
For most of higher ed, this is the core challenge of AI. Across disciplines, teaching a course generally means:
Assigning work that predictably leads to predictable student errors which the instructor can predictably correct, thereby demonstrating their expertise and the value of the course through the correction of predictable mistakes.
Literally the cadence of every course ever. 👆
And yes, AI blows this up.
It is not that AI outputs are without error. It is just that they’re without the types of errors that instructors are familiar with and are therefore comfortable correcting.
And so we resort to blue books, oral exams, lockdown browsers, AI detection, and remote proctoring software, because we need to maintain the illusion that what we do is actually indispensable: that the only thing standing between us and societal collapse is a few sentences of feedback correcting a student’s citation formatting, grammar, or explanation of some minutia of content in some course of study.
What AI Demands
From my conversations with faculty who have embraced AI and students who have admitted to using it, this is now the SOP for student work in courses.
They take the course syllabus, the slides, and whatever other material is available to them. They load all of this into NotebookLM. They use the notebook as their own personal platform for learning the course material.
Of course, they will still go through the motions of going to class and preparing for the assessments that the instructor assigned, but the learning itself, the questions that are being asked, the practice, even the curiosity, are happening in tools like NotebookLM.
And this makes perfect sense. It is customized, personalized, on demand, and wholly owned by the student. Whether you want them to or not, students are not going to just ask you, because you are a bottleneck.
And you can make all the claims you want about the importance of humanity, or the expertise that you have, or the connections you’ve built, and the impact that you’ve had. But here’s the truth:
The actual work that you do + the work that AI can do = very little about your work is necessary.
So this is what AI demands:
We acknowledge that, individually and institutionally, nearly every aspect of the work we do and the value we have traditionally delivered is entirely dispensable.
The only way we come out the other side of this is if we remove ourselves as bottlenecks between our customers (yes, I called students customers) and the outcomes they are trying to achieve.

