Edu-Snippets
Why knowledge matters in the age of AI; What happens to learners' neural activity with prolonged use of LLMs for writing
Highlights:
Offloading knowledge to Artificial Intelligence (AI) disrupts memory formation and erodes the deep thinking our brains need to learn.
Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
Generative AI is reshaping how we learn, but at what cognitive cost? Two new studies explore the trade-offs.
Does knowledge still matter in the age of Artificial Intelligence?
In The Memory Paradox, Barbara Oakley and colleagues confront a growing contradiction in education: in the age of digital abundance, we are increasingly outsourcing memory to devices. Neuroscience and cognitive science suggest that doing so may be weakening our ability to learn, think critically, and transfer knowledge across contexts.
The authors argue that cognitive offloading to digital tools may be undermining the very mental capacities we seek to enhance, including reasoning, creativity, and problem-solving.
Their conclusions are based on decades of research in neuroscience and cognitive psychology. They highlight the interplay between two major memory systems: declarative memory, responsible for consciously recalling facts and concepts, and procedural memory, which handles skills and routines that have become automatic. The development of expertise, they argue, often involves moving knowledge from the declarative to the procedural system by practicing a skill until it becomes automatic and intuitive. This is the basis of fluency, insight, and flexible thinking.
The paper shows how reliance on external tools (calculators, search engines, even AI assistants) can prevent students from forming new declarative and procedural knowledge. When instruction leans too heavily on such external lookups, learners may never develop the robust knowledge base needed to support reasoning and problem-solving. Instead of building robust neural networks through repeated engagement, students come to rely on "biological pointers": they know where to find information but don't understand it well enough to apply or transfer it.
In line with this argument, the authors critique how constructivist approaches (especially when implemented as minimal guidance, discovery learning, or "guide on the side" models) often downplay memorization, structured practice, and explicit instruction. These models, while well-intentioned, assume that students can construct understanding through exploration and access to tools rather than by internalizing key content through deliberate effort.
But the brain doesn’t work that way.
This is the heart of the Memory Paradox: as educational trends and technologies increasingly allow us to avoid memorization, the resulting underuse of our memory systems may be degrading our cognitive capacities. The authors even connect this phenomenon to the recent reversal of the Flynn Effect[1], suggesting that a decline in educational rigour, paired with widespread cognitive offloading, is contributing to reduced IQ scores in high-income countries.
Ultimately, the paper is not anti-technology or anti-AI. Rather, it’s a call for balance: to use digital tools to augment our thinking rather than to offload the mental processes that we need to learn and develop expertise.
Bottom Line
Learners can't think critically with knowledge they don’t have. Instruction must foster internalization to enable flexible, fluent, and independent thinking in a tech-rich world.
Key Takeaways for Instruction and Instructional Design
Prioritize models, like explicit instruction, that promote knowledge construction, not just access to content.
Use retrieval practice, spaced repetition, and worked examples to strengthen memory formation.
Avoid over-reliance on tools that encourage passive lookup or superficial engagement.
Design for cognitive load management: automate foundational knowledge to free up working memory (e.g., memorize multiplication tables).
Create scaffolded pathways from declarative to procedural memory using interleaving, feedback, and reflection.
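The spaced repetition mentioned above can be operationalized in software. As a minimal sketch (a Leitner-style scheduler, a common simple algorithm, not one prescribed by either study; the intervals chosen here are illustrative assumptions):

```python
from dataclasses import dataclass

# Illustrative Leitner-style spaced-repetition scheduler.
# Correctly recalled items move to a higher box with a longer review
# interval; missed items drop back to box 1 for frequent practice.
# The interval values below are illustrative, not from the papers.
REVIEW_INTERVALS_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

@dataclass
class Card:
    prompt: str
    box: int = 1  # box 1 = reviewed most often

def update(card: Card, recalled_correctly: bool) -> int:
    """Move the card between boxes and return days until next review."""
    if recalled_correctly:
        card.box = min(card.box + 1, 5)  # cap at the longest interval
    else:
        card.box = 1  # relearn forgotten items at the shortest interval
    return REVIEW_INTERVALS_DAYS[card.box]
```

For example, a card for "7 × 8 = ?" answered correctly moves from box 1 to box 2 and is scheduled for review in 3 days; a miss at any stage resets it to daily review, which is the mechanism that spaces practice according to how well each fact is retained.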
What are the cognitive costs of using ChatGPT as an assistant for essay writing?
An exciting new MIT Media Lab study examined how prolonged use of Large Language Models (LLMs) for essay writing over four months affects learners' neural activity. In essence, the study asked: How does long-term use of generative AI affect cognitive effort, memory, and neural engagement during writing?
Study Design:
Participants: 54 university students (from MIT, Harvard, Wellesley, Tufts, Northeastern)
Conditions (Three Sessions): Over three writing sessions, participants were randomly assigned to one of three groups:
Brain-only – no external tools used for writing
Search Engine – use of traditional search (e.g., Google)
LLM-assisted – full use of an LLM such as ChatGPT
Writing Task: In each session, participants wrote an academic-style essay.
Multimodal Data: Researchers collected EEG data (to track brain connectivity), natural language data (to assess linguistic richness and individuality), memory recall performance, and self-reported perceptions of writing ownership. Their written work was evaluated by both teachers and an AI judge.
Switch session (Session #4): In the fourth session (two months after Session 3), the researchers flipped the conditions for two groups:
Participants who had used the LLM for Sessions 1–3 were now required to write without any assistance (Brain-only). This group is referred to as LLM-to-Brain.
Those who had been in the Brain-only group switched to LLM-assisted writing; this group is referred to as Brain-to-LLM.
This session allowed researchers to measure how tool use affects neural recovery, memory retention, and writing performance when conditions change.
Core findings:
Neural engagement decreased with more external assistance. The more help participants received from outside tools, the less their brains had to work: brain activity dropped systematically as the level of assistance increased. Writers relying only on their own brains showed the most active and widespread neural networks; search-engine users fell in the middle; and those using an AI assistant like ChatGPT showed the weakest brain connectivity of all.
Tool switching effects (Session #4): The LLM-to-Brain group retained lower connectivity over time, suggesting cognitive under-engagement ("debt"). Brain-to-LLM participants demonstrated higher memory recall and higher neural activity.
Memory and ownership losses: LLM-assisted writers struggled to quote or recall from the essays they wrote just minutes prior, reported lower ownership, and had more homogeneous linguistic style, even though essay scores were similar.
Performance Paradox: Despite lower cognitive engagement, essays scored similarly across all conditions, showing that surface-level performance metrics may mask deeper cognitive deficits.
Long-term costs: Over four months, LLM users consistently underperformed on neural, linguistic, and behavioural metrics compared to Brain-only writers.
Bottom Line
Relying on AI tools like ChatGPT for writing may feel productive, but it suppresses cognitive engagement, reduces ownership, and weakens memory. This "cognitive debt" builds silently and isn't quickly reversed when the tool is taken away.
Key Takeaways for Instruction & Instructional Design
Begin with unaided thinking: Have students brainstorm, outline, or draft without AI first to strengthen memory, ownership, and cognitive engagement.
Use AI for revision, not generation: Introduce AI tools after an initial draft exists so they refine—not replace—the learner’s thinking process.
Promote metacognition: Ask students to reflect on their writing process or explain changes made with AI support to deepen engagement.
Design tool-delayed workflows: Structure tasks so AI is introduced only after core ideas are developed without assistance, ensuring students invest cognitive effort before turning to AI tools.
Embed memory checkpoints: Include prompts like “What do you remember most from your draft?” or “Summarize your main argument without looking” to reinforce internalization.
Track process, not just output: Monitor time on task, revision behaviour, and recall to evaluate learning depth not just final essay quality.
References:
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., ... & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872.
Oakley, B., Johnston, M., Chen, K.-Z., Jung, E., & Sejnowski, T. (2025). The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI. In The Artificial Intelligence Revolution: Challenges and Opportunities (Springer Nature, forthcoming).
[1] The Flynn Effect is the observation that IQ scores in developed countries rose over much of the 20th century. The "reversal of the Flynn Effect" refers to reports that these gains have slowed, stalled, or even reversed since the 1990s.