The AI Transparency Movement and Our Ignorance of Learning
How getting students to document their AI-assisted thinking shows we don't know how people think
The Second Draft: #0084
I write weekly articles for educators who are ready to get unstuck from outdated curriculum, resistant institutions, and a career that was built for a world that no longer exists.
One Myth at a Time
We kicked off this series by laying out three fundamental misunderstandings behind the growing AI transparency movement.
1/ This isn’t how people think
2/ This isn’t how people should use AI
3/ This isn’t something we actually know how to do
This week, we are taking on the first problem:
The AI transparency movement doesn’t actually understand how people think.
The Audit
I saw a post recently where an instructor—well-intentioned I am sure—had created a three-part framework for students to submit alongside their AI-assisted work:
1. Thinking Trace
A structured, evidence-based reflection that captures specific cognitive moves (decision-making, mismatch detection, next-step planning).
2. Integrity Crosswalk
An embedded alignment check that makes research-to-writing continuity traceable across artifacts (notes/logs → outline → draft).
3. Draft Fingerprint
A short self-audit that makes internal consistency visible (anchor terms, claim continuity).
The stated goal was something like helping students “strengthen judgment around AI” (though I suspect proof of originality was at least a contributing factor).
Which, without being rude, is ironic.
Because this approach is the outworking of judgments about cognition and learning which are just . . . not true.
The Problem With Thinking About Thinking
Confabulation
In 1977, psychologists Richard Nisbett and Timothy Wilson published a paper titled Telling More Than We Can Know: Verbal Reports on Mental Processes.
They ran a series of experiments in which people made choices (which item to buy, which word came to mind, which solution they preferred) and then explained why.
In one famous experiment, participants were asked to choose the best-quality pair of stockings from a row. They consistently chose the pair on the far right, a common phenomenon known as the position effect. Yet when asked why, they invented elaborate reasons about knit, sheerness, and feel. Not a single participant mentioned position.
This tells us that the brain is excellent at confabulation—generating a plausible-sounding story to explain an action after the fact.
Verbal Overshadowing
In the early 1990s, Jonathan Schooler found that when people tried to describe a face they’d seen—putting a visual memory into words—their ability to recognize that face later actually got worse.
Don’t miss this 👇
The act of verbalizing the memory degraded it.
The description replaced the experience.
And made the memory less reliable!
Verbalizing actually forces the brain to rely on “recoded” linguistic data (e.g., “he had a big nose”) rather than the holistic, visual memory, which is much more accurate.
This is the exact opposite of what we think we are achieving through reflection, transparency, documentation, or whatever scheme we come up with.
Rhizomatic Reasoning
I’ve written before about rhizomatic cognition. The idea here is that knowledge isn’t stored in a file cabinet. It’s a web. Any node can connect to any other. Connections form and re-form constantly, below any level of conscious awareness.
Indeed, if we are honest, most of our thoughts are inscrutable.
So, when we ask someone to trace their thinking, we’re asking them to draw a map of the rhizome. But the rhizome doesn’t have an entry point. Or an exit point. Or a path. It has connections, which we saw with our ❄️ snow experiment, and most often, those connections make no actual sense at all.
The Wicked Problem
Here’s the problem with all of this.
We can only ever hope to discover what those connections are and what they mean by looking at what learners actually produce.
And, unfortunately, this is where the model breaks again: products are only proxies for internal cognitive processes that cannot be directly observed, and producing those products takes entirely different skills than the thinking they are supposed to document!
You’re starting to see the problem . . .
The whole scheme is circular and impassable if we as educators think our role is to get real insight into how students think through either process or product.
So here’s where we are (again):
When you ask a student to document their “cognitive moves,” you’re not getting a window into cognition.
You’re getting a new cognitive artifact: a reconstruction that probably doesn’t resemble what actually happened, the choices they made, or why. Worse, it may degrade the value of the experience itself by replacing a holistic memory of it with a linguistic one.
And, in any case, that artifact cannot capture thinking, because thinking is neither linear nor logical nor even fully knowable. Any reconstruction is itself subject to, and a product of, the rhizomatic structure of our cognition and our ignorance of it, and therefore tells us nothing useful about a learner’s actual thinking process.
And, more pragmatically, the ability to document your AI process, in this sort of prearranged format, in an acceptable deliverable, is nothing more than a product waiting to be gamed.
Which brings us back to the problem that started this whole AI transparency movement—we can’t trust the product to authentically reflect real student learning, so we’ll have students document their processes, which end up just being products anyway. And now we’re trapped.
Hat tip to Jason Gulya for all of that 👆
Next Time
Perhaps you are hoping I’ll untangle this knot for you right here and now.
Not so fast!
These parts of the series are positioned exclusively to dismantle the AI transparency movement.
You’ll have to stick with the series to see how we resolve them . . .