What it takes to teach in the AI era
3 models that redefine teaching & learning in a post-AI world
The Second Draft: #0043
Why I Wrote This
After several weeks of relentlessly highlighting existential cracks in the higher ed industry, I wanted to offer some practical ideas for how educators can actually approach teaching & learning in the AI era.
What You’ll Get
The illusion of resistance — Why strategies to “AI-proof” learning always collapse under real-world pressure
The hardest question in teaching — The simple distinction that changes everything about what we teach and test
The new map of expertise — How to frame education for students who will always have AI in their hands
The fractal approach — Why the best teaching models scale from the first lesson to a lifetime of learning
The educator’s advantage — The one role humans still own that AI cannot touch
RaaS (Resistance as a Service)
You’ve probably seen something like this at your institution.
On the one side, you have the AI-Proof crowd—instructors and institutions scrambling to wall out AI with proctoring software, closed-book exams, and handwritten essays.
On the other side is the more progressive crowd. We can call them the AI-Resistant (or resilient, if we’re generous) camp: clever instructors designing assignments with what I call hyperlocalist prompting, such as “include a reference to something I mentioned yesterday in class.” The hope here is to make AI “less useful” owing to its supposed lack of contextual awareness.
Both approaches share the same fatal flaw:
they’re trying to remove AI from reality.
Of course, this is all naive. AI is already everywhere.
Whether we want to consider higher education vocational training or not, the fact is that every knowledge worker has Claude, ChatGPT, and probably 3 other AI tools open at all times. The world today demands the ability to retrieve, synthesize, and apply information in real time.
So what if we stopped fighting it and started designing for it?
Model #1: AI Actualism
That is where what I’m calling AI Actualism comes in
—and it challenges everything about how we think about curriculum design.
🌐 AI Actualism
The core premise is simple:
Accept that there is no work product (or process) we can assign that students cannot outsource to AI.
That’s the key unlock.
We just stand here at the threshold and admit that
there is no (reasonable) way to solve the AI and assessment problem.
AI is everywhere.
So you need to design your curriculum around what actually matters in your field. And design it around what matters today, tomorrow, and perhaps (as best you can surmise) five years from now.
Since you read my work, you know that I’m not merely suggesting teaching “AI literacy” or running prompt engineering workshops (though those might be useful).
The idea here is about asking a much harder question:
In a world where students can access infinite information instantly,
what knowledge must they carry in their heads versus what can they retrieve on demand?
This question forces us to confront something uncomfortable: much of what we’ve been teaching was designed for a world where information was scarce and retrieval was expensive.
That world is gone.
Recap:
Model #1: AI Actualism—designing for a world where AI can do all the work.
Model #2: Protected vs Prompted Knowledge
Here’s the key distinction that is changing how we design curriculum.
This is backwards design 2.0.
This is about starting with the KSAs (knowledge, skills, and abilities) you want to target and being brutally honest about what students need to know versus what they just need to know how to get.
Let’s take a closer look:
Protected Knowledge
These are the core concepts, frameworks, models, and ideas you must internalize and remember. They provide anchors for schemas, mental models, and recognition. They’re what you need to know cold: memorized and retrievable without external help.
Anyone who tells you that with AI we just need to teach “how to think” or “learning how to learn” doesn’t understand anything about knowledge or learning.
All application of information starts with memorized information.
→ The more ideas you know
→ The more connections you can create
→ The stronger and more connected your schemas become
→ The more able you are to apply those ideas to novel situations
Prompted Knowledge
These are the details, elaborations, applications, and updates that can be created or retrieved when needed through AI, search, or conversation. This is information you can look up because you know what to look for.
The critical insight:
You can only prompt effectively if you’ve protected the right anchors.
Let me give you an example from business education.
My undergrad was in business, I have an MBA, and I’ve taught undergraduate business courses.
After all this schooling and my actual work experience, I know for sure that Porter’s 5 Forces is a thing. It’s protected knowledge.
Students also need to know that it exists, roughly what it covers (competitive dynamics), and that it’s a tool for analyzing industry structure. That much lives in their heads.
But the specific details of all five forces, their nuanced definitions, and how to apply them to a specific industry are, for me, all prompted knowledge. Students can look it up when they need it, but only if they’ve protected the anchor that tells them what to look for.
Without the anchor, you’re drowning in AI results, unable to discern what’s relevant.
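To make the split concrete, here’s a minimal sketch in Python (the Concept class and its fields are purely illustrative scaffolding of my own, not a prescribed tool) of how you might tag this example when mapping a course:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """One unit of course content, split into what students must carry
    in their heads (protected) and what they can retrieve on demand (prompted)."""
    name: str
    protected_anchor: str  # memorized, retrievable without external help
    prompted_details: list[str] = field(default_factory=list)  # looked up or generated with AI

porters_five_forces = Concept(
    name="Porter's 5 Forces",
    protected_anchor="A framework for analyzing industry structure and competitive dynamics",
    prompted_details=[
        "Precise definitions of each of the five forces",
        "How to apply the forces to a particular industry",
        "Current worked examples and analyses",
    ],
)

# The protected anchor is what makes a useful prompt possible in the first place:
print(f"Using {porters_five_forces.name}, analyze the competitive structure of the airline industry.")
```

The code itself is beside the point; the discipline is deciding, concept by concept, which line belongs in the anchor and which lines can live in the prompted list.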
This reframes the entire teaching challenge:
We’re not trying to cram students’ heads with facts they can simply ask GPT for.
But we also aren’t pretending facts don’t matter.
Instead, we’re identifying which anchors are so fundamental that without them, students can’t function in the field.
Recap:
Model #1: AI Actualism—designing for a world where AI can do all the work.
Model #2: Protected v Prompted—designing for the KSAs that are needed to leverage AI to do the work.
Interlude: The Instructor’s Unique Advantage
Here’s the instructor’s present advantage over AI:
AI generates volume.
Humans distill signal.
Right now, ChatGPT can produce endless explanations, examples, and details. But it cannot decide which big idea matters most for a novice entering a field. It doesn’t know which framework should be the anchor, which model should be protected, which details can safely be prompted.
That’s judgment. That’s expertise. That’s pedagogy.
Your role isn’t to compete with AI on detail.
It’s to eliminate noise and present the simplified, memorable big idea first.
Model #3: Fractal Teaching
Think of curriculum design like Russian nesting dolls (matryoshka dolls).
Each concept you teach has a structure:
Protected core: The big heuristic, model, or framework — memorized and anchored
Prompted layers: The applications, examples, variations, and context — explored with AI and adapted to specific situations
Let me show you how this works in practice.
I like to use Hormozi’s Value Equation, because it is essentially a master’s in marketing distilled into 5 words:
Value = Outcome / (Risk × Time × Effort)
This is the anchor for understanding marketing. It is the big reveal, the end of everything students would learn—presented first.
This is classic protected knowledge. It’s memorable, it’s anchored, and it’s the lens through which we’ll view everything else in the course.
Students memorize it. Then they can draw it on command.
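To see why this particular anchor earns its memorization reps, here’s a minimal sketch (the scores are hypothetical, on a made-up 1 to 10 scale, purely for illustration) of how the equation behaves once it lives in your head:

```python
def value(outcome: float, risk: float, time: float, effort: float) -> float:
    """Value rises with the dream outcome and falls as risk, time, or effort grows."""
    return outcome / (risk * time * effort)

# A hypothetical offer, scored 1 to 10 on each dimension
baseline = value(outcome=8, risk=4, time=6, effort=5)  # ~0.067
faster   = value(outcome=8, risk=4, time=3, effort=5)  # ~0.133: halving time doubles value

print(f"baseline: {baseline:.3f}  faster delivery: {faster:.3f}")
```

The specific scores are always prompted (or debated), but the lens that tells you which levers to pull is protected.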
Then they dive into each component.
Dream Outcome becomes its own matryoshka doll:
Some aspects are protected (key frameworks for identifying customer desires)
Some are prompted (using AI to generate lists of possible outcomes for different segments)
Risk, Time, Effort each get the same treatment:
Protected: The models, templates, formulas for thinking about each
Prompted: Examples, specific cases, and more showing how different industries handle each one and how you could apply it in your own field
Students don’t need to memorize every application, every case study, every example. Nor do they need to memorize how they applied the ideas in their own projects!
But they must memorize the core models so they can re-apply them anywhere, anytime.
This is fractal teaching:
Each level of learning contains nested layers — protected cores surrounded by prompted exploration.
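One way to picture the nesting is as a recursive structure in which every component carries its own protected core and its own prompted layer. Here’s a minimal sketch (the TeachingUnit class and the sample content are purely illustrative, not a prescribed format):

```python
from dataclasses import dataclass, field

@dataclass
class TeachingUnit:
    """A fractal unit: a protected core students memorize, a prompted layer
    explored with AI, and nested units that repeat the same pattern."""
    protected_core: str
    prompted_layers: list[str] = field(default_factory=list)          # explored with AI, not memorized
    nested_units: list["TeachingUnit"] = field(default_factory=list)  # each component is its own doll

course = TeachingUnit(
    protected_core="Value = Outcome / (Risk x Time x Effort)",
    nested_units=[
        TeachingUnit(
            protected_core="Dream Outcome: key frameworks for identifying customer desires",
            prompted_layers=["AI-generated outcome lists for specific customer segments"],
        ),
        TeachingUnit(
            protected_core="Risk, Time, Effort: models and templates for reducing each",
            prompted_layers=["Industry-specific cases and applications explored with AI"],
        ),
    ],
)

def protected_skeleton(unit: TeachingUnit, depth: int = 0) -> None:
    """Print only the protected cores: the skeleton students must carry in their heads."""
    print("  " * depth + unit.protected_core)
    for child in unit.nested_units:
        protected_skeleton(child, depth + 1)

protected_skeleton(course)
```

Zoom in on any nested unit and you find the same shape again: a core worth memorizing, wrapped in material worth prompting.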
Recap:
Model #1: AI Actualism—designing for a world where AI can do all the work.
Model #2: Protected v Prompted—designing for the KSAs that are needed to leverage AI to do the work.
Model #3: Fractal Teaching—designing the course and each part as nested layers of protected and prompted knowledge.
What This Means for Practice
Here are a few thoughts on what these models mean for education:
For curriculum design: Stop asking “what should students know?” Start asking “what must they protect vs. what can they prompt?”
For assessment: Assess for mastery of protected anchors (do they know the framework?) AND for skill in prompted application (can they use AI to solve novel problems with it?).
For AI integration: Protect the anchors through reps, connections, and model-building (not by banning AI). AI is, of course, perfect for the prompted layer: the application work, the case analysis, the “authentic assessment” we’ve always valued (and now we don’t care if they use AI for it; in fact, we might hope they do!).
For your teaching identity: Your number one job is creating heuristics that distill signal from noise. Your expertise isn’t just knowing things—it’s knowing which things are worth protecting, and making complex ideas memorable enough to anchor everything else.
The Path Forward
The shift from banning AI to building with it isn’t something we get to opt out of.
It’s the only path to durable skills in a post-AI world.
And, if you love curriculum and instruction like I do, this is the most exciting time to be in the business!
We get to reimagine, reshape, and reinvigorate our roles (and our students).
As an educator, you’re not a content delivery system that AI has made obsolete.
You’re the curator of protected knowledge. The architect of schemas. The sage who knows which anchors matter and which details can wait.
AI Actualism isn’t giving in to tech or going all-in on AI. It’s simply accepting reality and then designing something better: curriculum that produces students who live in the expertise zone, with durable anchors and adaptive minds.
No vague “learning how to learn.”
Real expertise,
taught by real experts.
What education should be.


